Mar 6 00:56:18.143609 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 6 00:56:18.143657 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Mar 5 23:10:47 -00 2026
Mar 6 00:56:18.143682 kernel: KASLR disabled due to lack of seed
Mar 6 00:56:18.143698 kernel: efi: EFI v2.7 by EDK II
Mar 6 00:56:18.143716 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78557598
Mar 6 00:56:18.143732 kernel: secureboot: Secure boot disabled
Mar 6 00:56:18.143750 kernel: ACPI: Early table checksum verification disabled
Mar 6 00:56:18.143768 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 6 00:56:18.143787 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 6 00:56:18.143805 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 6 00:56:18.143822 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 6 00:56:18.143845 kernel: ACPI: FACS 0x0000000078630000 000040
Mar 6 00:56:18.143862 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 6 00:56:18.143878 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 6 00:56:18.143896 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 6 00:56:18.143912 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 6 00:56:18.143938 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 6 00:56:18.143955 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 6 00:56:18.143971 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 6 00:56:18.143988 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 6 00:56:18.144005 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 6 00:56:18.144022 kernel: printk: legacy bootconsole [uart0] enabled
Mar 6 00:56:18.144038 kernel: ACPI: Use ACPI SPCR as default console: Yes
Mar 6 00:56:18.144054 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 6 00:56:18.144071 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Mar 6 00:56:18.144086 kernel: Zone ranges:
Mar 6 00:56:18.144103 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 6 00:56:18.144123 kernel: DMA32 empty
Mar 6 00:56:18.144140 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 6 00:56:18.144155 kernel: Device empty
Mar 6 00:56:18.144173 kernel: Movable zone start for each node
Mar 6 00:56:18.144189 kernel: Early memory node ranges
Mar 6 00:56:18.144207 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 6 00:56:18.144266 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 6 00:56:18.144291 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 6 00:56:18.144309 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 6 00:56:18.144326 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 6 00:56:18.144346 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 6 00:56:18.144362 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 6 00:56:18.144386 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 6 00:56:18.144411 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 6 00:56:18.144430 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 6 00:56:18.144447 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Mar 6 00:56:18.144464 kernel: psci: probing for conduit method from ACPI.
Mar 6 00:56:18.144486 kernel: psci: PSCIv1.0 detected in firmware.
Mar 6 00:56:18.144502 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 6 00:56:18.144519 kernel: psci: Trusted OS migration not required
Mar 6 00:56:18.144536 kernel: psci: SMC Calling Convention v1.1
Mar 6 00:56:18.144553 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Mar 6 00:56:18.144570 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Mar 6 00:56:18.144587 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Mar 6 00:56:18.144605 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 6 00:56:18.144621 kernel: Detected PIPT I-cache on CPU0
Mar 6 00:56:18.144638 kernel: CPU features: detected: GIC system register CPU interface
Mar 6 00:56:18.144655 kernel: CPU features: detected: Spectre-v2
Mar 6 00:56:18.144677 kernel: CPU features: detected: Spectre-v3a
Mar 6 00:56:18.144694 kernel: CPU features: detected: Spectre-BHB
Mar 6 00:56:18.144711 kernel: CPU features: detected: ARM erratum 1742098
Mar 6 00:56:18.144727 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 6 00:56:18.144744 kernel: alternatives: applying boot alternatives
Mar 6 00:56:18.144763 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=68c9ef230e3eed1360dd8114dada95b6a934f07952c3a5d42725f3006977f027
Mar 6 00:56:18.144781 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 6 00:56:18.144798 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 6 00:56:18.144815 kernel: Fallback order for Node 0: 0
Mar 6 00:56:18.144832 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Mar 6 00:56:18.144849 kernel: Policy zone: Normal
Mar 6 00:56:18.144871 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 6 00:56:18.144888 kernel: software IO TLB: area num 2.
Mar 6 00:56:18.144905 kernel: software IO TLB: mapped [mem 0x0000000074557000-0x0000000078557000] (64MB)
Mar 6 00:56:18.144921 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 6 00:56:18.144938 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 6 00:56:18.144956 kernel: rcu: RCU event tracing is enabled.
Mar 6 00:56:18.144973 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 6 00:56:18.144991 kernel: Trampoline variant of Tasks RCU enabled.
Mar 6 00:56:18.145008 kernel: Tracing variant of Tasks RCU enabled.
Mar 6 00:56:18.145025 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 6 00:56:18.145042 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 6 00:56:18.145065 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 6 00:56:18.145082 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 6 00:56:18.145099 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 6 00:56:18.145116 kernel: GICv3: 96 SPIs implemented
Mar 6 00:56:18.145132 kernel: GICv3: 0 Extended SPIs implemented
Mar 6 00:56:18.145149 kernel: Root IRQ handler: gic_handle_irq
Mar 6 00:56:18.145165 kernel: GICv3: GICv3 features: 16 PPIs
Mar 6 00:56:18.145182 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Mar 6 00:56:18.145199 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 6 00:56:18.145216 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 6 00:56:18.145261 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Mar 6 00:56:18.145282 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Mar 6 00:56:18.145327 kernel: GICv3: using LPI property table @0x0000000400110000
Mar 6 00:56:18.145345 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 6 00:56:18.145362 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Mar 6 00:56:18.145382 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 6 00:56:18.145399 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 6 00:56:18.145416 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 6 00:56:18.145433 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 6 00:56:18.145450 kernel: Console: colour dummy device 80x25
Mar 6 00:56:18.145467 kernel: printk: legacy console [tty1] enabled
Mar 6 00:56:18.145484 kernel: ACPI: Core revision 20240827
Mar 6 00:56:18.145502 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 6 00:56:18.145525 kernel: pid_max: default: 32768 minimum: 301
Mar 6 00:56:18.145543 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 6 00:56:18.145560 kernel: landlock: Up and running.
Mar 6 00:56:18.145578 kernel: SELinux: Initializing.
Mar 6 00:56:18.145595 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 00:56:18.145612 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 6 00:56:18.145630 kernel: rcu: Hierarchical SRCU implementation.
Mar 6 00:56:18.145649 kernel: rcu: Max phase no-delay instances is 400.
Mar 6 00:56:18.145667 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 6 00:56:18.145688 kernel: Remapping and enabling EFI services.
Mar 6 00:56:18.145705 kernel: smp: Bringing up secondary CPUs ...
Mar 6 00:56:18.145722 kernel: Detected PIPT I-cache on CPU1
Mar 6 00:56:18.145739 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 6 00:56:18.145756 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Mar 6 00:56:18.145774 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 6 00:56:18.145791 kernel: smp: Brought up 1 node, 2 CPUs
Mar 6 00:56:18.145808 kernel: SMP: Total of 2 processors activated.
Mar 6 00:56:18.145826 kernel: CPU: All CPU(s) started at EL1
Mar 6 00:56:18.145857 kernel: CPU features: detected: 32-bit EL0 Support
Mar 6 00:56:18.145875 kernel: CPU features: detected: 32-bit EL1 Support
Mar 6 00:56:18.145897 kernel: CPU features: detected: CRC32 instructions
Mar 6 00:56:18.145915 kernel: alternatives: applying system-wide alternatives
Mar 6 00:56:18.145934 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Mar 6 00:56:18.145952 kernel: devtmpfs: initialized
Mar 6 00:56:18.145970 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 6 00:56:18.145993 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 6 00:56:18.146011 kernel: 16880 pages in range for non-PLT usage
Mar 6 00:56:18.146030 kernel: 508400 pages in range for PLT usage
Mar 6 00:56:18.146048 kernel: pinctrl core: initialized pinctrl subsystem
Mar 6 00:56:18.146067 kernel: SMBIOS 3.0.0 present.
Mar 6 00:56:18.146086 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 6 00:56:18.146104 kernel: DMI: Memory slots populated: 0/0
Mar 6 00:56:18.146123 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 6 00:56:18.146141 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 6 00:56:18.146164 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 6 00:56:18.146184 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 6 00:56:18.146202 kernel: audit: initializing netlink subsys (disabled)
Mar 6 00:56:18.146221 kernel: audit: type=2000 audit(0.271:1): state=initialized audit_enabled=0 res=1
Mar 6 00:56:18.146277 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 6 00:56:18.146297 kernel: cpuidle: using governor menu
Mar 6 00:56:18.146315 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 6 00:56:18.146333 kernel: ASID allocator initialised with 65536 entries
Mar 6 00:56:18.146352 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 6 00:56:18.146376 kernel: Serial: AMBA PL011 UART driver
Mar 6 00:56:18.146398 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 6 00:56:18.146416 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 6 00:56:18.146434 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 6 00:56:18.146452 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 6 00:56:18.146470 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 6 00:56:18.146489 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 6 00:56:18.146507 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 6 00:56:18.146525 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 6 00:56:18.146546 kernel: ACPI: Added _OSI(Module Device)
Mar 6 00:56:18.146564 kernel: ACPI: Added _OSI(Processor Device)
Mar 6 00:56:18.146582 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 6 00:56:18.146600 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 6 00:56:18.146618 kernel: ACPI: Interpreter enabled
Mar 6 00:56:18.146635 kernel: ACPI: Using GIC for interrupt routing
Mar 6 00:56:18.146653 kernel: ACPI: MCFG table detected, 1 entries
Mar 6 00:56:18.146671 kernel: ACPI: CPU0 has been hot-added
Mar 6 00:56:18.146688 kernel: ACPI: CPU1 has been hot-added
Mar 6 00:56:18.146710 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Mar 6 00:56:18.146996 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 6 00:56:18.147186 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 6 00:56:18.148470 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 6 00:56:18.148678 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Mar 6 00:56:18.148863 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Mar 6 00:56:18.148888 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 6 00:56:18.148915 kernel: acpiphp: Slot [1] registered
Mar 6 00:56:18.148934 kernel: acpiphp: Slot [2] registered
Mar 6 00:56:18.148952 kernel: acpiphp: Slot [3] registered
Mar 6 00:56:18.148970 kernel: acpiphp: Slot [4] registered
Mar 6 00:56:18.148988 kernel: acpiphp: Slot [5] registered
Mar 6 00:56:18.149005 kernel: acpiphp: Slot [6] registered
Mar 6 00:56:18.149023 kernel: acpiphp: Slot [7] registered
Mar 6 00:56:18.149041 kernel: acpiphp: Slot [8] registered
Mar 6 00:56:18.149058 kernel: acpiphp: Slot [9] registered
Mar 6 00:56:18.149076 kernel: acpiphp: Slot [10] registered
Mar 6 00:56:18.149098 kernel: acpiphp: Slot [11] registered
Mar 6 00:56:18.149116 kernel: acpiphp: Slot [12] registered
Mar 6 00:56:18.149133 kernel: acpiphp: Slot [13] registered
Mar 6 00:56:18.149151 kernel: acpiphp: Slot [14] registered
Mar 6 00:56:18.149169 kernel: acpiphp: Slot [15] registered
Mar 6 00:56:18.149187 kernel: acpiphp: Slot [16] registered
Mar 6 00:56:18.149205 kernel: acpiphp: Slot [17] registered
Mar 6 00:56:18.149256 kernel: acpiphp: Slot [18] registered
Mar 6 00:56:18.149282 kernel: acpiphp: Slot [19] registered
Mar 6 00:56:18.149324 kernel: acpiphp: Slot [20] registered
Mar 6 00:56:18.149345 kernel: acpiphp: Slot [21] registered
Mar 6 00:56:18.149363 kernel: acpiphp: Slot [22] registered
Mar 6 00:56:18.149381 kernel: acpiphp: Slot [23] registered
Mar 6 00:56:18.149399 kernel: acpiphp: Slot [24] registered
Mar 6 00:56:18.149417 kernel: acpiphp: Slot [25] registered
Mar 6 00:56:18.149434 kernel: acpiphp: Slot [26] registered
Mar 6 00:56:18.149452 kernel: acpiphp: Slot [27] registered
Mar 6 00:56:18.149469 kernel: acpiphp: Slot [28] registered
Mar 6 00:56:18.149487 kernel: acpiphp: Slot [29] registered
Mar 6 00:56:18.149510 kernel: acpiphp: Slot [30] registered
Mar 6 00:56:18.149527 kernel: acpiphp: Slot [31] registered
Mar 6 00:56:18.149545 kernel: PCI host bridge to bus 0000:00
Mar 6 00:56:18.149828 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 6 00:56:18.150910 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 6 00:56:18.151111 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 6 00:56:18.151350 kernel: pci_bus 0000:00: root bus resource [bus 00]
Mar 6 00:56:18.151738 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Mar 6 00:56:18.151983 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Mar 6 00:56:18.152193 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Mar 6 00:56:18.152464 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Mar 6 00:56:18.152671 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Mar 6 00:56:18.152875 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 6 00:56:18.153108 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Mar 6 00:56:18.153358 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Mar 6 00:56:18.153557 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Mar 6 00:56:18.153748 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Mar 6 00:56:18.153938 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 6 00:56:18.154114 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 6 00:56:18.155455 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 6 00:56:18.155662 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 6 00:56:18.155688 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 6 00:56:18.155708 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 6 00:56:18.155726 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 6 00:56:18.155744 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 6 00:56:18.155763 kernel: iommu: Default domain type: Translated
Mar 6 00:56:18.155781 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 6 00:56:18.155798 kernel: efivars: Registered efivars operations
Mar 6 00:56:18.155816 kernel: vgaarb: loaded
Mar 6 00:56:18.155839 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 6 00:56:18.155858 kernel: VFS: Disk quotas dquot_6.6.0
Mar 6 00:56:18.155876 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 6 00:56:18.155893 kernel: pnp: PnP ACPI init
Mar 6 00:56:18.156090 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 6 00:56:18.156117 kernel: pnp: PnP ACPI: found 1 devices
Mar 6 00:56:18.156135 kernel: NET: Registered PF_INET protocol family
Mar 6 00:56:18.156153 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 6 00:56:18.156177 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 6 00:56:18.156195 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 6 00:56:18.156213 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 6 00:56:18.156287 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 6 00:56:18.156309 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 6 00:56:18.156328 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 00:56:18.156346 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 6 00:56:18.156364 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 6 00:56:18.156382 kernel: PCI: CLS 0 bytes, default 64
Mar 6 00:56:18.156406 kernel: kvm [1]: HYP mode not available
Mar 6 00:56:18.156424 kernel: Initialise system trusted keyrings
Mar 6 00:56:18.156442 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 6 00:56:18.156460 kernel: Key type asymmetric registered
Mar 6 00:56:18.156478 kernel: Asymmetric key parser 'x509' registered
Mar 6 00:56:18.156496 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 6 00:56:18.156514 kernel: io scheduler mq-deadline registered
Mar 6 00:56:18.156532 kernel: io scheduler kyber registered
Mar 6 00:56:18.156550 kernel: io scheduler bfq registered
Mar 6 00:56:18.156762 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 6 00:56:18.156789 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 6 00:56:18.156808 kernel: ACPI: button: Power Button [PWRB]
Mar 6 00:56:18.156826 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 6 00:56:18.156845 kernel: ACPI: button: Sleep Button [SLPB]
Mar 6 00:56:18.156863 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 6 00:56:18.156882 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 6 00:56:18.157079 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 6 00:56:18.157110 kernel: printk: legacy console [ttyS0] disabled
Mar 6 00:56:18.157129 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 6 00:56:18.157147 kernel: printk: legacy console [ttyS0] enabled
Mar 6 00:56:18.157165 kernel: printk: legacy bootconsole [uart0] disabled
Mar 6 00:56:18.157183 kernel: thunder_xcv, ver 1.0
Mar 6 00:56:18.157201 kernel: thunder_bgx, ver 1.0
Mar 6 00:56:18.157219 kernel: nicpf, ver 1.0
Mar 6 00:56:18.157271 kernel: nicvf, ver 1.0
Mar 6 00:56:18.159805 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 6 00:56:18.160020 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-06T00:56:17 UTC (1772758577)
Mar 6 00:56:18.160046 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 6 00:56:18.160065 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Mar 6 00:56:18.160083 kernel: watchdog: NMI not fully supported
Mar 6 00:56:18.160102 kernel: NET: Registered PF_INET6 protocol family
Mar 6 00:56:18.160121 kernel: watchdog: Hard watchdog permanently disabled
Mar 6 00:56:18.160139 kernel: Segment Routing with IPv6
Mar 6 00:56:18.160157 kernel: In-situ OAM (IOAM) with IPv6
Mar 6 00:56:18.160174 kernel: NET: Registered PF_PACKET protocol family
Mar 6 00:56:18.160198 kernel: Key type dns_resolver registered
Mar 6 00:56:18.160215 kernel: registered taskstats version 1
Mar 6 00:56:18.160260 kernel: Loading compiled-in X.509 certificates
Mar 6 00:56:18.160281 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 3a2ba669b0bb3660035f2ce1faaa856d46d520ff'
Mar 6 00:56:18.160300 kernel: Demotion targets for Node 0: null
Mar 6 00:56:18.160317 kernel: Key type .fscrypt registered
Mar 6 00:56:18.160336 kernel: Key type fscrypt-provisioning registered
Mar 6 00:56:18.160353 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 6 00:56:18.160371 kernel: ima: Allocated hash algorithm: sha1
Mar 6 00:56:18.160396 kernel: ima: No architecture policies found
Mar 6 00:56:18.160414 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 6 00:56:18.160431 kernel: clk: Disabling unused clocks
Mar 6 00:56:18.160449 kernel: PM: genpd: Disabling unused power domains
Mar 6 00:56:18.160467 kernel: Warning: unable to open an initial console.
Mar 6 00:56:18.160485 kernel: Freeing unused kernel memory: 39552K
Mar 6 00:56:18.160503 kernel: Run /init as init process
Mar 6 00:56:18.160521 kernel: with arguments:
Mar 6 00:56:18.160538 kernel: /init
Mar 6 00:56:18.160560 kernel: with environment:
Mar 6 00:56:18.160577 kernel: HOME=/
Mar 6 00:56:18.160595 kernel: TERM=linux
Mar 6 00:56:18.160614 systemd[1]: Successfully made /usr/ read-only.
Mar 6 00:56:18.160638 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 00:56:18.160658 systemd[1]: Detected virtualization amazon.
Mar 6 00:56:18.160677 systemd[1]: Detected architecture arm64.
Mar 6 00:56:18.160700 systemd[1]: Running in initrd.
Mar 6 00:56:18.160719 systemd[1]: No hostname configured, using default hostname.
Mar 6 00:56:18.160738 systemd[1]: Hostname set to .
Mar 6 00:56:18.160757 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 00:56:18.160776 systemd[1]: Queued start job for default target initrd.target.
Mar 6 00:56:18.160795 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 00:56:18.160814 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 00:56:18.160835 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 6 00:56:18.160858 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 00:56:18.160878 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 6 00:56:18.160899 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 6 00:56:18.160921 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 6 00:56:18.160940 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 6 00:56:18.160960 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 00:56:18.160979 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 00:56:18.161003 systemd[1]: Reached target paths.target - Path Units.
Mar 6 00:56:18.161022 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 00:56:18.161041 systemd[1]: Reached target swap.target - Swaps.
Mar 6 00:56:18.161061 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 00:56:18.161080 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 00:56:18.161099 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 00:56:18.161119 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 6 00:56:18.161138 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 6 00:56:18.161158 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 00:56:18.161181 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 00:56:18.161201 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 00:56:18.161220 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 00:56:18.161264 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 6 00:56:18.161284 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 00:56:18.161323 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 6 00:56:18.161345 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 6 00:56:18.161364 systemd[1]: Starting systemd-fsck-usr.service...
Mar 6 00:56:18.161390 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 00:56:18.161410 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 00:56:18.161429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 00:56:18.161448 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 6 00:56:18.161468 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 00:56:18.161492 systemd[1]: Finished systemd-fsck-usr.service.
Mar 6 00:56:18.161511 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 00:56:18.161568 systemd-journald[259]: Collecting audit messages is disabled.
Mar 6 00:56:18.161611 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 6 00:56:18.161635 kernel: Bridge firewalling registered
Mar 6 00:56:18.161654 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 00:56:18.161674 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 00:56:18.161693 systemd-journald[259]: Journal started
Mar 6 00:56:18.161729 systemd-journald[259]: Runtime Journal (/run/log/journal/ec2c533688fd290bcc9c38c1e22f89cf) is 8M, max 75.3M, 67.3M free.
Mar 6 00:56:18.104605 systemd-modules-load[261]: Inserted module 'overlay'
Mar 6 00:56:18.154305 systemd-modules-load[261]: Inserted module 'br_netfilter'
Mar 6 00:56:18.172301 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 00:56:18.174354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 00:56:18.183413 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 6 00:56:18.191679 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 00:56:18.199037 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 00:56:18.209679 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 00:56:18.248211 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 00:56:18.252952 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 00:56:18.265123 systemd-tmpfiles[284]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 6 00:56:18.269692 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 00:56:18.277723 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 6 00:56:18.283474 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 00:56:18.294520 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 00:56:18.332561 dracut-cmdline[301]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=68c9ef230e3eed1360dd8114dada95b6a934f07952c3a5d42725f3006977f027
Mar 6 00:56:18.401482 systemd-resolved[303]: Positive Trust Anchors:
Mar 6 00:56:18.401512 systemd-resolved[303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 00:56:18.401573 systemd-resolved[303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 00:56:18.533274 kernel: SCSI subsystem initialized
Mar 6 00:56:18.542282 kernel: Loading iSCSI transport class v2.0-870.
Mar 6 00:56:18.555279 kernel: iscsi: registered transport (tcp)
Mar 6 00:56:18.577796 kernel: iscsi: registered transport (qla4xxx)
Mar 6 00:56:18.577881 kernel: QLogic iSCSI HBA Driver
Mar 6 00:56:18.615398 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 00:56:18.648121 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 00:56:18.660896 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 6 00:56:18.671550 kernel: random: crng init done Mar 6 00:56:18.671760 systemd-resolved[303]: Defaulting to hostname 'linux'. Mar 6 00:56:18.675847 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 6 00:56:18.681793 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 6 00:56:18.769381 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 6 00:56:18.776272 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 6 00:56:18.882337 kernel: raid6: neonx8 gen() 6335 MB/s Mar 6 00:56:18.900300 kernel: raid6: neonx4 gen() 6369 MB/s Mar 6 00:56:18.918293 kernel: raid6: neonx2 gen() 5310 MB/s Mar 6 00:56:18.936302 kernel: raid6: neonx1 gen() 3896 MB/s Mar 6 00:56:18.953310 kernel: raid6: int64x8 gen() 3569 MB/s Mar 6 00:56:18.971302 kernel: raid6: int64x4 gen() 3676 MB/s Mar 6 00:56:18.988289 kernel: raid6: int64x2 gen() 3552 MB/s Mar 6 00:56:19.006847 kernel: raid6: int64x1 gen() 2736 MB/s Mar 6 00:56:19.006920 kernel: raid6: using algorithm neonx4 gen() 6369 MB/s Mar 6 00:56:19.025584 kernel: raid6: .... xor() 4826 MB/s, rmw enabled Mar 6 00:56:19.025661 kernel: raid6: using neon recovery algorithm Mar 6 00:56:19.035360 kernel: xor: measuring software checksum speed Mar 6 00:56:19.035448 kernel: 8regs : 12863 MB/sec Mar 6 00:56:19.036870 kernel: 32regs : 13003 MB/sec Mar 6 00:56:19.037262 kernel: arm64_neon : 9063 MB/sec Mar 6 00:56:19.040005 kernel: xor: using function: 32regs (13003 MB/sec) Mar 6 00:56:19.136291 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 6 00:56:19.150447 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 6 00:56:19.158174 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 6 00:56:19.215408 systemd-udevd[511]: Using default interface naming scheme 'v255'. 
Mar 6 00:56:19.227769 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 6 00:56:19.236622 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 6 00:56:19.277630 dracut-pre-trigger[515]: rd.md=0: removing MD RAID activation Mar 6 00:56:19.326014 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 6 00:56:19.333617 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 6 00:56:19.468582 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 6 00:56:19.476935 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 6 00:56:19.636130 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Mar 6 00:56:19.636198 kernel: nvme nvme0: pci function 0000:00:04.0 Mar 6 00:56:19.638001 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 6 00:56:19.642246 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Mar 6 00:56:19.648267 kernel: nvme nvme0: 2/0/0 default/read/poll queues Mar 6 00:56:19.654196 kernel: ena 0000:00:05.0: ENA device version: 0.10 Mar 6 00:56:19.654563 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Mar 6 00:56:19.657675 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 6 00:56:19.658541 kernel: GPT:9289727 != 33554431 Mar 6 00:56:19.661158 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 6 00:56:19.663136 kernel: GPT:9289727 != 33554431 Mar 6 00:56:19.663190 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 6 00:56:19.666804 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:1f:dd:67:08:69 Mar 6 00:56:19.667121 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 6 00:56:19.682781 (udev-worker)[557]: Network interface NamePolicy= disabled on kernel command line. Mar 6 00:56:19.694338 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Mar 6 00:56:19.697917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 00:56:19.704477 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 00:56:19.712167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 6 00:56:19.721989 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 6 00:56:19.743269 kernel: nvme nvme0: using unchecked data buffer Mar 6 00:56:19.767403 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 6 00:56:19.926444 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Mar 6 00:56:19.946373 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 6 00:56:19.973533 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 6 00:56:20.000091 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Mar 6 00:56:20.024034 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Mar 6 00:56:20.029897 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Mar 6 00:56:20.041678 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 6 00:56:20.047795 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 6 00:56:20.050759 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 6 00:56:20.060113 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 6 00:56:20.070761 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 6 00:56:20.097341 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 6 00:56:20.097904 disk-uuid[690]: Primary Header is updated. 
Mar 6 00:56:20.097904 disk-uuid[690]: Secondary Entries is updated. Mar 6 00:56:20.097904 disk-uuid[690]: Secondary Header is updated. Mar 6 00:56:20.118684 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 6 00:56:20.143332 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 6 00:56:21.137275 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Mar 6 00:56:21.138537 disk-uuid[694]: The operation has completed successfully. Mar 6 00:56:21.361346 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 6 00:56:21.361573 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 6 00:56:21.444543 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 6 00:56:21.483725 sh[956]: Success Mar 6 00:56:21.507104 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 6 00:56:21.507198 kernel: device-mapper: uevent: version 1.0.3 Mar 6 00:56:21.512253 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 6 00:56:21.522257 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Mar 6 00:56:21.616143 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 6 00:56:21.622461 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 6 00:56:21.637756 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 6 00:56:21.665271 kernel: BTRFS: device fsid fcb4e7bf-1206-4803-90fb-6606b15e3aea devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (979) Mar 6 00:56:21.665357 kernel: BTRFS info (device dm-0): first mount of filesystem fcb4e7bf-1206-4803-90fb-6606b15e3aea Mar 6 00:56:21.669277 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 6 00:56:21.797666 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Mar 6 00:56:21.797734 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 6 00:56:21.797761 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 6 00:56:21.809928 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 6 00:56:21.813843 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 6 00:56:21.817379 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 6 00:56:21.818608 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 6 00:56:21.828440 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 6 00:56:21.881331 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1014) Mar 6 00:56:21.887206 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 890f9900-ea91-473b-9515-ad9b05b1880b Mar 6 00:56:21.887325 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 6 00:56:21.906290 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 6 00:56:21.906377 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 6 00:56:21.915320 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 890f9900-ea91-473b-9515-ad9b05b1880b Mar 6 00:56:21.918405 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Mar 6 00:56:21.926534 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 6 00:56:22.029076 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 6 00:56:22.039035 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 6 00:56:22.125495 systemd-networkd[1148]: lo: Link UP Mar 6 00:56:22.125512 systemd-networkd[1148]: lo: Gained carrier Mar 6 00:56:22.131909 systemd-networkd[1148]: Enumeration completed Mar 6 00:56:22.133793 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 00:56:22.133800 systemd-networkd[1148]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 6 00:56:22.134789 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 6 00:56:22.144485 systemd-networkd[1148]: eth0: Link UP Mar 6 00:56:22.144493 systemd-networkd[1148]: eth0: Gained carrier Mar 6 00:56:22.144515 systemd-networkd[1148]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 6 00:56:22.161614 systemd[1]: Reached target network.target - Network. Mar 6 00:56:22.172319 systemd-networkd[1148]: eth0: DHCPv4 address 172.31.24.181/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 6 00:56:22.495718 ignition[1079]: Ignition 2.22.0 Mar 6 00:56:22.496305 ignition[1079]: Stage: fetch-offline Mar 6 00:56:22.497185 ignition[1079]: no configs at "/usr/lib/ignition/base.d" Mar 6 00:56:22.497208 ignition[1079]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 6 00:56:22.498279 ignition[1079]: Ignition finished successfully Mar 6 00:56:22.508981 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 6 00:56:22.515400 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Mar 6 00:56:22.566139 ignition[1160]: Ignition 2.22.0 Mar 6 00:56:22.566169 ignition[1160]: Stage: fetch Mar 6 00:56:22.567940 ignition[1160]: no configs at "/usr/lib/ignition/base.d" Mar 6 00:56:22.567973 ignition[1160]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 6 00:56:22.568124 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 6 00:56:22.582728 ignition[1160]: PUT result: OK Mar 6 00:56:22.591985 ignition[1160]: parsed url from cmdline: "" Mar 6 00:56:22.592014 ignition[1160]: no config URL provided Mar 6 00:56:22.592030 ignition[1160]: reading system config file "/usr/lib/ignition/user.ign" Mar 6 00:56:22.592058 ignition[1160]: no config at "/usr/lib/ignition/user.ign" Mar 6 00:56:22.592096 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 6 00:56:22.594186 ignition[1160]: PUT result: OK Mar 6 00:56:22.594331 ignition[1160]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Mar 6 00:56:22.597911 ignition[1160]: GET result: OK Mar 6 00:56:22.598144 ignition[1160]: parsing config with SHA512: ceb3dd4cc397775f284c7c01215a0148ebb7205bc74763ce9cbde6fc5b79d9a35cc1c59c9c3550ba8bcee94c27ad8bfba8f2585ede78c944fa43affacc823634 Mar 6 00:56:22.615107 unknown[1160]: fetched base config from "system" Mar 6 00:56:22.616026 unknown[1160]: fetched base config from "system" Mar 6 00:56:22.616659 ignition[1160]: fetch: fetch complete Mar 6 00:56:22.616048 unknown[1160]: fetched user config from "aws" Mar 6 00:56:22.616671 ignition[1160]: fetch: fetch passed Mar 6 00:56:22.616782 ignition[1160]: Ignition finished successfully Mar 6 00:56:22.629436 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 6 00:56:22.633998 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 6 00:56:22.703876 ignition[1166]: Ignition 2.22.0 Mar 6 00:56:22.703908 ignition[1166]: Stage: kargs Mar 6 00:56:22.704558 ignition[1166]: no configs at "/usr/lib/ignition/base.d" Mar 6 00:56:22.704585 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 6 00:56:22.704746 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 6 00:56:22.708129 ignition[1166]: PUT result: OK Mar 6 00:56:22.720977 ignition[1166]: kargs: kargs passed Mar 6 00:56:22.721121 ignition[1166]: Ignition finished successfully Mar 6 00:56:22.726487 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 6 00:56:22.737816 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 6 00:56:22.790839 ignition[1172]: Ignition 2.22.0 Mar 6 00:56:22.791387 ignition[1172]: Stage: disks Mar 6 00:56:22.792381 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Mar 6 00:56:22.792405 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 6 00:56:22.793660 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 6 00:56:22.801893 ignition[1172]: PUT result: OK Mar 6 00:56:22.807134 ignition[1172]: disks: disks passed Mar 6 00:56:22.807531 ignition[1172]: Ignition finished successfully Mar 6 00:56:22.813819 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 6 00:56:22.819126 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 6 00:56:22.824523 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 6 00:56:22.830823 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 6 00:56:22.834046 systemd[1]: Reached target sysinit.target - System Initialization. Mar 6 00:56:22.840515 systemd[1]: Reached target basic.target - Basic System. Mar 6 00:56:22.846289 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Mar 6 00:56:22.922966 systemd-fsck[1181]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 6 00:56:22.931361 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 6 00:56:22.940169 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 6 00:56:23.074274 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f0884ab3-756d-49e8-9d95-af187b4f35fb r/w with ordered data mode. Quota mode: none. Mar 6 00:56:23.075589 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 6 00:56:23.079834 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 6 00:56:23.085097 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 6 00:56:23.096391 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 6 00:56:23.102066 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 6 00:56:23.102173 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 6 00:56:23.102317 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 6 00:56:23.124017 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 6 00:56:23.129761 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 6 00:56:23.139316 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1200) Mar 6 00:56:23.143201 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 890f9900-ea91-473b-9515-ad9b05b1880b Mar 6 00:56:23.143374 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 6 00:56:23.151443 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 6 00:56:23.151510 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 6 00:56:23.155722 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 6 00:56:23.494033 initrd-setup-root[1224]: cut: /sysroot/etc/passwd: No such file or directory Mar 6 00:56:23.505434 systemd-networkd[1148]: eth0: Gained IPv6LL Mar 6 00:56:23.526499 initrd-setup-root[1231]: cut: /sysroot/etc/group: No such file or directory Mar 6 00:56:23.536278 initrd-setup-root[1238]: cut: /sysroot/etc/shadow: No such file or directory Mar 6 00:56:23.547221 initrd-setup-root[1245]: cut: /sysroot/etc/gshadow: No such file or directory Mar 6 00:56:23.913986 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 6 00:56:23.921635 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 6 00:56:23.925341 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 6 00:56:23.959411 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 6 00:56:23.969332 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 890f9900-ea91-473b-9515-ad9b05b1880b Mar 6 00:56:23.995160 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 6 00:56:24.032122 ignition[1314]: INFO : Ignition 2.22.0 Mar 6 00:56:24.034367 ignition[1314]: INFO : Stage: mount Mar 6 00:56:24.034367 ignition[1314]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 6 00:56:24.034367 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 6 00:56:24.034367 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 6 00:56:24.045258 ignition[1314]: INFO : PUT result: OK Mar 6 00:56:24.050657 ignition[1314]: INFO : mount: mount passed Mar 6 00:56:24.052608 ignition[1314]: INFO : Ignition finished successfully Mar 6 00:56:24.057925 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 6 00:56:24.069770 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 6 00:56:24.097745 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 6 00:56:24.145296 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1326) Mar 6 00:56:24.151017 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 890f9900-ea91-473b-9515-ad9b05b1880b Mar 6 00:56:24.151095 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Mar 6 00:56:24.158713 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Mar 6 00:56:24.158804 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Mar 6 00:56:24.162767 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 6 00:56:24.225941 ignition[1343]: INFO : Ignition 2.22.0 Mar 6 00:56:24.229377 ignition[1343]: INFO : Stage: files Mar 6 00:56:24.229377 ignition[1343]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 6 00:56:24.229377 ignition[1343]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 6 00:56:24.229377 ignition[1343]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 6 00:56:24.239041 ignition[1343]: INFO : PUT result: OK Mar 6 00:56:24.243953 ignition[1343]: DEBUG : files: compiled without relabeling support, skipping Mar 6 00:56:24.250319 ignition[1343]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 6 00:56:24.250319 ignition[1343]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 6 00:56:24.291299 ignition[1343]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 6 00:56:24.295072 ignition[1343]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 6 00:56:24.295072 ignition[1343]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 6 00:56:24.292140 unknown[1343]: wrote ssh authorized keys file for user: core Mar 6 00:56:24.304213 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 6 00:56:24.304213 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Mar 6 00:56:24.390304 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 6 00:56:24.640086 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Mar 6 00:56:24.646870 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 6 00:56:24.646870 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 6 00:56:24.646870 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 6 00:56:24.646870 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 6 00:56:24.646870 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 6 00:56:24.646870 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 6 00:56:24.646870 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 6 00:56:24.646870 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 6 00:56:24.685790 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 6 00:56:24.685790 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Mar 6 00:56:24.685790 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 6 00:56:24.705001 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 6 00:56:24.705001 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 6 00:56:24.705001 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1 Mar 6 00:56:25.177468 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 6 00:56:25.622215 ignition[1343]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw" Mar 6 00:56:25.627624 ignition[1343]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 6 00:56:25.637246 ignition[1343]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 6 00:56:25.649212 ignition[1343]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 6 00:56:25.653960 ignition[1343]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 6 00:56:25.653960 ignition[1343]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Mar 6 00:56:25.660685 ignition[1343]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Mar 6 00:56:25.660685 ignition[1343]: INFO : files: 
createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 6 00:56:25.660685 ignition[1343]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 6 00:56:25.660685 ignition[1343]: INFO : files: files passed Mar 6 00:56:25.660685 ignition[1343]: INFO : Ignition finished successfully Mar 6 00:56:25.679689 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 6 00:56:25.689125 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 6 00:56:25.696570 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 6 00:56:25.718469 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 6 00:56:25.718958 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 6 00:56:25.741386 initrd-setup-root-after-ignition[1372]: grep: Mar 6 00:56:25.743698 initrd-setup-root-after-ignition[1376]: grep: Mar 6 00:56:25.745733 initrd-setup-root-after-ignition[1372]: /sysroot/etc/flatcar/enabled-sysext.conf Mar 6 00:56:25.748554 initrd-setup-root-after-ignition[1376]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 6 00:56:25.751984 initrd-setup-root-after-ignition[1372]: : No such file or directory Mar 6 00:56:25.754581 initrd-setup-root-after-ignition[1372]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 6 00:56:25.760480 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 6 00:56:25.767859 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 6 00:56:25.774848 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 6 00:56:25.870137 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 6 00:56:25.870672 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Mar 6 00:56:25.882015 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 6 00:56:25.887404 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 6 00:56:25.890272 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 6 00:56:25.894463 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 6 00:56:25.944107 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 6 00:56:25.954511 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 6 00:56:25.998068 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 6 00:56:25.998373 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 6 00:56:26.008001 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 6 00:56:26.012401 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 6 00:56:26.012589 systemd[1]: Stopped target timers.target - Timer Units. Mar 6 00:56:26.024027 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 6 00:56:26.024407 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 6 00:56:26.032687 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 6 00:56:26.038481 systemd[1]: Stopped target basic.target - Basic System. Mar 6 00:56:26.040723 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 6 00:56:26.043579 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 6 00:56:26.054282 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 6 00:56:26.057445 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Mar 6 00:56:26.060834 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Mar 6 00:56:26.069544 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 6 00:56:26.073638 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 6 00:56:26.079313 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 6 00:56:26.084983 systemd[1]: Stopped target swap.target - Swaps. Mar 6 00:56:26.089652 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 6 00:56:26.089800 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 6 00:56:26.092970 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 6 00:56:26.095800 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 6 00:56:26.098854 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 6 00:56:26.099049 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 6 00:56:26.102334 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 6 00:56:26.102463 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 6 00:56:26.111814 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 6 00:56:26.111945 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 6 00:56:26.119739 systemd[1]: ignition-files.service: Deactivated successfully. Mar 6 00:56:26.119850 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 6 00:56:26.128890 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 6 00:56:26.167929 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 6 00:56:26.172834 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 6 00:56:26.173149 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 6 00:56:26.182383 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Mar 6 00:56:26.182700 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 6 00:56:26.226991 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 6 00:56:26.232261 ignition[1397]: INFO : Ignition 2.22.0 Mar 6 00:56:26.234817 ignition[1397]: INFO : Stage: umount Mar 6 00:56:26.234817 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 6 00:56:26.234817 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Mar 6 00:56:26.234817 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Mar 6 00:56:26.246257 ignition[1397]: INFO : PUT result: OK Mar 6 00:56:26.253677 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 6 00:56:26.256332 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 6 00:56:26.268326 ignition[1397]: INFO : umount: umount passed Mar 6 00:56:26.272154 ignition[1397]: INFO : Ignition finished successfully Mar 6 00:56:26.271528 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 6 00:56:26.271799 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 6 00:56:26.277113 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 6 00:56:26.277575 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 6 00:56:26.281918 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 6 00:56:26.282028 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 6 00:56:26.287181 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 6 00:56:26.287410 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 6 00:56:26.291592 systemd[1]: Stopped target network.target - Network. Mar 6 00:56:26.296503 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 6 00:56:26.296626 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 6 00:56:26.299983 systemd[1]: Stopped target paths.target - Path Units. 
Mar 6 00:56:26.304545 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 6 00:56:26.309453 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 00:56:26.309646 systemd[1]: Stopped target slices.target - Slice Units.
Mar 6 00:56:26.317191 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 6 00:56:26.320015 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 6 00:56:26.320109 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 6 00:56:26.324752 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 6 00:56:26.324835 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 6 00:56:26.327258 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 6 00:56:26.327375 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 6 00:56:26.334913 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 6 00:56:26.335022 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 6 00:56:26.339349 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 6 00:56:26.339476 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 6 00:56:26.344556 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 6 00:56:26.349840 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 6 00:56:26.373956 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 6 00:56:26.376374 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 6 00:56:26.385517 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 6 00:56:26.386046 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 6 00:56:26.388637 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 6 00:56:26.398589 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 6 00:56:26.400475 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 6 00:56:26.405115 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 6 00:56:26.405220 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 00:56:26.420321 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 6 00:56:26.424600 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 6 00:56:26.425167 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 6 00:56:26.436746 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 6 00:56:26.436867 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 6 00:56:26.446679 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 6 00:56:26.446809 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 6 00:56:26.462565 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 6 00:56:26.462704 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 00:56:26.470329 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 00:56:26.487845 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 6 00:56:26.487999 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 6 00:56:26.500837 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 6 00:56:26.502086 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 00:56:26.514542 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 6 00:56:26.514690 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 6 00:56:26.518783 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 6 00:56:26.518861 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 00:56:26.526320 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 6 00:56:26.526430 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 6 00:56:26.535170 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 6 00:56:26.535314 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 6 00:56:26.540588 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 6 00:56:26.540722 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 6 00:56:26.549427 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 6 00:56:26.569674 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 6 00:56:26.569823 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 00:56:26.577683 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 6 00:56:26.577809 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 00:56:26.590820 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 6 00:56:26.590947 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 00:56:26.600315 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 6 00:56:26.600454 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 00:56:26.603766 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 6 00:56:26.603886 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 00:56:26.611008 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Mar 6 00:56:26.611129 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Mar 6 00:56:26.611212 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 6 00:56:26.611403 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 6 00:56:26.612261 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 6 00:56:26.615468 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 6 00:56:26.624407 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 6 00:56:26.626318 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 6 00:56:26.630920 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 6 00:56:26.644952 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 6 00:56:26.690219 systemd[1]: Switching root.
Mar 6 00:56:26.763346 systemd-journald[259]: Journal stopped
Mar 6 00:56:29.263367 systemd-journald[259]: Received SIGTERM from PID 1 (systemd).
Mar 6 00:56:29.263530 kernel: SELinux: policy capability network_peer_controls=1
Mar 6 00:56:29.263598 kernel: SELinux: policy capability open_perms=1
Mar 6 00:56:29.263640 kernel: SELinux: policy capability extended_socket_class=1
Mar 6 00:56:29.263680 kernel: SELinux: policy capability always_check_network=0
Mar 6 00:56:29.263718 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 6 00:56:29.263759 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 6 00:56:29.263794 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 6 00:56:29.263833 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 6 00:56:29.263860 kernel: SELinux: policy capability userspace_initial_context=0
Mar 6 00:56:29.266300 kernel: audit: type=1403 audit(1772758587.185:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 6 00:56:29.266360 systemd[1]: Successfully loaded SELinux policy in 123.296ms.
Mar 6 00:56:29.266412 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.196ms.
Mar 6 00:56:29.266448 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 6 00:56:29.266487 systemd[1]: Detected virtualization amazon.
Mar 6 00:56:29.266518 systemd[1]: Detected architecture arm64.
Mar 6 00:56:29.266548 systemd[1]: Detected first boot.
Mar 6 00:56:29.266579 systemd[1]: Initializing machine ID from VM UUID.
Mar 6 00:56:29.266611 kernel: NET: Registered PF_VSOCK protocol family
Mar 6 00:56:29.266639 zram_generator::config[1439]: No configuration found.
Mar 6 00:56:29.266675 systemd[1]: Populated /etc with preset unit settings.
Mar 6 00:56:29.266708 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 6 00:56:29.266738 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 6 00:56:29.266772 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 6 00:56:29.266802 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 6 00:56:29.266837 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 6 00:56:29.266867 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 6 00:56:29.266906 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 6 00:56:29.266940 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 6 00:56:29.266973 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 6 00:56:29.267006 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 6 00:56:29.267041 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 6 00:56:29.267074 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 6 00:56:29.267106 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 6 00:56:29.267136 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 6 00:56:29.267168 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 6 00:56:29.267200 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 6 00:56:29.268407 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 6 00:56:29.268474 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 6 00:56:29.268509 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 6 00:56:29.268549 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 6 00:56:29.268580 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 6 00:56:29.268609 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 6 00:56:29.268638 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 6 00:56:29.268670 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 6 00:56:29.268701 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 6 00:56:29.268741 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 6 00:56:29.268772 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 6 00:56:29.268809 systemd[1]: Reached target slices.target - Slice Units.
Mar 6 00:56:29.268843 systemd[1]: Reached target swap.target - Swaps.
Mar 6 00:56:29.268872 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 6 00:56:29.268904 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 6 00:56:29.268936 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 6 00:56:29.268971 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 6 00:56:29.269005 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 6 00:56:29.269034 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 6 00:56:29.269062 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 6 00:56:29.269096 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 6 00:56:29.269125 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 6 00:56:29.269153 systemd[1]: Mounting media.mount - External Media Directory...
Mar 6 00:56:29.269184 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 6 00:56:29.269212 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 6 00:56:29.269516 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 6 00:56:29.269557 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 6 00:56:29.269591 systemd[1]: Reached target machines.target - Containers.
Mar 6 00:56:29.269622 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 6 00:56:29.269659 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 00:56:29.269691 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 6 00:56:29.269720 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 6 00:56:29.269754 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 00:56:29.269785 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 00:56:29.269816 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 00:56:29.269847 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 6 00:56:29.269876 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 00:56:29.269912 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 6 00:56:29.269944 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 6 00:56:29.269979 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 6 00:56:29.270014 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 6 00:56:29.270047 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 6 00:56:29.270080 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 00:56:29.270114 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 6 00:56:29.270148 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 6 00:56:29.270178 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 6 00:56:29.270218 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 6 00:56:29.270288 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 6 00:56:29.270322 kernel: loop: module loaded
Mar 6 00:56:29.270352 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 6 00:56:29.270389 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 6 00:56:29.270419 systemd[1]: Stopped verity-setup.service.
Mar 6 00:56:29.270449 kernel: fuse: init (API version 7.41)
Mar 6 00:56:29.270480 kernel: ACPI: bus type drm_connector registered
Mar 6 00:56:29.270508 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 6 00:56:29.270536 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 6 00:56:29.270572 systemd[1]: Mounted media.mount - External Media Directory.
Mar 6 00:56:29.270606 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 6 00:56:29.270635 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 6 00:56:29.270664 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 6 00:56:29.270693 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 6 00:56:29.270722 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 6 00:56:29.270751 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 6 00:56:29.270780 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 00:56:29.270875 systemd-journald[1518]: Collecting audit messages is disabled.
Mar 6 00:56:29.270936 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 00:56:29.270968 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 00:56:29.270998 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 00:56:29.271027 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 00:56:29.271056 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 00:56:29.271088 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 6 00:56:29.271117 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 6 00:56:29.271150 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 00:56:29.271181 systemd-journald[1518]: Journal started
Mar 6 00:56:29.271281 systemd-journald[1518]: Runtime Journal (/run/log/journal/ec2c533688fd290bcc9c38c1e22f89cf) is 8M, max 75.3M, 67.3M free.
Mar 6 00:56:28.558164 systemd[1]: Queued start job for default target multi-user.target.
Mar 6 00:56:28.573685 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 6 00:56:28.574618 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 6 00:56:29.286537 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 00:56:29.294265 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 6 00:56:29.295382 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 6 00:56:29.300861 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 6 00:56:29.305564 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 6 00:56:29.311440 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 6 00:56:29.337365 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 6 00:56:29.353107 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 6 00:56:29.361426 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 6 00:56:29.373057 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 6 00:56:29.376492 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 6 00:56:29.376567 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 6 00:56:29.383465 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 6 00:56:29.394636 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 6 00:56:29.397811 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 00:56:29.413404 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 6 00:56:29.424770 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 6 00:56:29.427667 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 00:56:29.432680 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 6 00:56:29.435470 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 00:56:29.449322 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 6 00:56:29.467662 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 6 00:56:29.475173 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 6 00:56:29.484478 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 6 00:56:29.488993 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 6 00:56:29.492312 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 6 00:56:29.523044 systemd-journald[1518]: Time spent on flushing to /var/log/journal/ec2c533688fd290bcc9c38c1e22f89cf is 75.942ms for 932 entries.
Mar 6 00:56:29.523044 systemd-journald[1518]: System Journal (/var/log/journal/ec2c533688fd290bcc9c38c1e22f89cf) is 8M, max 195.6M, 187.6M free.
Mar 6 00:56:29.616670 systemd-journald[1518]: Received client request to flush runtime journal.
Mar 6 00:56:29.616789 kernel: loop0: detected capacity change from 0 to 100632
Mar 6 00:56:29.529412 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 6 00:56:29.533009 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 6 00:56:29.544734 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 6 00:56:29.597351 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 6 00:56:29.620979 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 6 00:56:29.641500 systemd-tmpfiles[1574]: ACLs are not supported, ignoring.
Mar 6 00:56:29.641543 systemd-tmpfiles[1574]: ACLs are not supported, ignoring.
Mar 6 00:56:29.649638 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 6 00:56:29.652391 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 6 00:56:29.657444 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 6 00:56:29.674030 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 6 00:56:29.739451 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 6 00:56:29.759432 kernel: loop1: detected capacity change from 0 to 119840
Mar 6 00:56:29.786468 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 6 00:56:29.793693 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 6 00:56:29.849177 systemd-tmpfiles[1594]: ACLs are not supported, ignoring.
Mar 6 00:56:29.849252 systemd-tmpfiles[1594]: ACLs are not supported, ignoring.
Mar 6 00:56:29.866939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 6 00:56:29.884296 kernel: loop2: detected capacity change from 0 to 209336
Mar 6 00:56:30.008312 kernel: loop3: detected capacity change from 0 to 61264
Mar 6 00:56:30.145320 kernel: loop4: detected capacity change from 0 to 100632
Mar 6 00:56:30.170348 kernel: loop5: detected capacity change from 0 to 119840
Mar 6 00:56:30.195291 kernel: loop6: detected capacity change from 0 to 209336
Mar 6 00:56:30.228291 kernel: loop7: detected capacity change from 0 to 61264
Mar 6 00:56:30.241605 (sd-merge)[1602]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 6 00:56:30.243504 (sd-merge)[1602]: Merged extensions into '/usr'.
Mar 6 00:56:30.255930 systemd[1]: Reload requested from client PID 1573 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 6 00:56:30.256184 systemd[1]: Reloading...
Mar 6 00:56:30.525313 zram_generator::config[1628]: No configuration found.
Mar 6 00:56:31.009830 systemd[1]: Reloading finished in 752 ms.
Mar 6 00:56:31.037526 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 6 00:56:31.040970 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 6 00:56:31.057597 systemd[1]: Starting ensure-sysext.service...
Mar 6 00:56:31.062551 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 6 00:56:31.070482 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 6 00:56:31.100826 systemd[1]: Reload requested from client PID 1680 ('systemctl') (unit ensure-sysext.service)...
Mar 6 00:56:31.101036 systemd[1]: Reloading...
Mar 6 00:56:31.181804 systemd-udevd[1683]: Using default interface naming scheme 'v255'.
Mar 6 00:56:31.191774 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 6 00:56:31.191872 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 6 00:56:31.192605 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 6 00:56:31.194043 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 6 00:56:31.196021 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 6 00:56:31.196703 systemd-tmpfiles[1682]: ACLs are not supported, ignoring.
Mar 6 00:56:31.196874 systemd-tmpfiles[1682]: ACLs are not supported, ignoring.
Mar 6 00:56:31.204105 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 00:56:31.204141 systemd-tmpfiles[1682]: Skipping /boot
Mar 6 00:56:31.244825 systemd-tmpfiles[1682]: Detected autofs mount point /boot during canonicalization of boot.
Mar 6 00:56:31.244861 systemd-tmpfiles[1682]: Skipping /boot
Mar 6 00:56:31.359624 zram_generator::config[1709]: No configuration found.
Mar 6 00:56:31.412031 ldconfig[1568]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 6 00:56:31.797985 (udev-worker)[1731]: Network interface NamePolicy= disabled on kernel command line.
Mar 6 00:56:32.040412 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 6 00:56:32.042091 systemd[1]: Reloading finished in 940 ms.
Mar 6 00:56:32.075776 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 6 00:56:32.083516 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 6 00:56:32.108933 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 6 00:56:32.136051 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 6 00:56:32.147616 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 6 00:56:32.154119 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 6 00:56:32.163120 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 6 00:56:32.176558 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 6 00:56:32.185845 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 6 00:56:32.201899 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 00:56:32.204828 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 6 00:56:32.219101 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 6 00:56:32.226119 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 6 00:56:32.229687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 00:56:32.230032 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 00:56:32.242638 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 6 00:56:32.247847 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 00:56:32.248346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 00:56:32.248594 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 00:56:32.267042 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 6 00:56:32.274924 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 6 00:56:32.278313 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 6 00:56:32.279600 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 6 00:56:32.279953 systemd[1]: Reached target time-set.target - System Time Set.
Mar 6 00:56:32.301429 systemd[1]: Finished ensure-sysext.service.
Mar 6 00:56:32.337050 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 6 00:56:32.340343 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 6 00:56:32.365930 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 6 00:56:32.416889 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 6 00:56:32.420883 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 6 00:56:32.422117 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 6 00:56:32.426335 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 6 00:56:32.435572 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 6 00:56:32.444697 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 6 00:56:32.447315 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 6 00:56:32.461863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 6 00:56:32.462947 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 6 00:56:32.472926 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 6 00:56:32.475499 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 6 00:56:32.478915 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 6 00:56:32.557380 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 6 00:56:32.600450 augenrules[1917]: No rules
Mar 6 00:56:32.615061 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 6 00:56:32.616392 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 6 00:56:32.643483 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 6 00:56:32.848868 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 6 00:56:32.856747 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 6 00:56:32.866436 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 6 00:56:32.943262 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 6 00:56:33.098944 systemd-networkd[1849]: lo: Link UP
Mar 6 00:56:33.098967 systemd-networkd[1849]: lo: Gained carrier
Mar 6 00:56:33.104179 systemd-networkd[1849]: Enumeration completed
Mar 6 00:56:33.104432 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 6 00:56:33.105418 systemd-networkd[1849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 00:56:33.105426 systemd-networkd[1849]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 6 00:56:33.111692 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 6 00:56:33.116509 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 6 00:56:33.169828 systemd-networkd[1849]: eth0: Link UP
Mar 6 00:56:33.170295 systemd-networkd[1849]: eth0: Gained carrier
Mar 6 00:56:33.170339 systemd-networkd[1849]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 6 00:56:33.170587 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 6 00:56:33.195400 systemd-networkd[1849]: eth0: DHCPv4 address 172.31.24.181/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 6 00:56:33.213750 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 6 00:56:33.254064 systemd-resolved[1856]: Positive Trust Anchors:
Mar 6 00:56:33.254691 systemd-resolved[1856]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 6 00:56:33.254769 systemd-resolved[1856]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 6 00:56:33.272873 systemd-resolved[1856]: Defaulting to hostname 'linux'.
Mar 6 00:56:33.278108 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 6 00:56:33.283807 systemd[1]: Reached target network.target - Network.
Mar 6 00:56:33.287847 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 6 00:56:33.291415 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 6 00:56:33.294120 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 6 00:56:33.297136 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 6 00:56:33.300713 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 6 00:56:33.303506 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 6 00:56:33.306532 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 6 00:56:33.309512 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 6 00:56:33.309573 systemd[1]: Reached target paths.target - Path Units.
Mar 6 00:56:33.312055 systemd[1]: Reached target timers.target - Timer Units.
Mar 6 00:56:33.316964 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 6 00:56:33.322816 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 6 00:56:33.329721 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 6 00:56:33.334544 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 6 00:56:33.337374 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 6 00:56:33.348616 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 6 00:56:33.351804 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 6 00:56:33.355768 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 6 00:56:33.358801 systemd[1]: Reached target sockets.target - Socket Units.
Mar 6 00:56:33.361417 systemd[1]: Reached target basic.target - Basic System.
Mar 6 00:56:33.363828 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 6 00:56:33.363897 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 6 00:56:33.366070 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 6 00:56:33.373537 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 6 00:56:33.378687 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 6 00:56:33.384726 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 6 00:56:33.393454 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 6 00:56:33.402873 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 6 00:56:33.405463 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 6 00:56:33.410659 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 6 00:56:33.419771 systemd[1]: Started ntpd.service - Network Time Service.
Mar 6 00:56:33.430647 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 6 00:56:33.437674 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 6 00:56:33.447075 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 6 00:56:33.459623 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 6 00:56:33.475772 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 6 00:56:33.480865 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 6 00:56:33.482823 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 6 00:56:33.488259 systemd[1]: Starting update-engine.service - Update Engine...
Mar 6 00:56:33.499754 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 6 00:56:33.528069 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 6 00:56:33.537473 jq[1979]: true
Mar 6 00:56:33.551275 jq[1969]: false
Mar 6 00:56:33.561923 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 6 00:56:33.562679 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 6 00:56:33.617860 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 6 00:56:33.621459 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 6 00:56:33.624406 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 6 00:56:33.653393 extend-filesystems[1970]: Found /dev/nvme0n1p6
Mar 6 00:56:33.685418 tar[1986]: linux-arm64/LICENSE
Mar 6 00:56:33.685418 tar[1986]: linux-arm64/helm
Mar 6 00:56:33.688397 extend-filesystems[1970]: Found /dev/nvme0n1p9
Mar 6 00:56:33.699199 systemd[1]: motdgen.service: Deactivated successfully.
Mar 6 00:56:33.701439 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 6 00:56:33.709086 (ntainerd)[2011]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 6 00:56:33.739657 extend-filesystems[1970]: Checking size of /dev/nvme0n1p9
Mar 6 00:56:33.745864 jq[1991]: true
Mar 6 00:56:33.819362 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 6 00:56:33.851768 dbus-daemon[1967]: [system] SELinux support is enabled
Mar 6 00:56:33.852148 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 6 00:56:33.861960 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 6 00:56:33.862033 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 6 00:56:33.867094 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 6 00:56:33.867136 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 6 00:56:33.881289 extend-filesystems[1970]: Resized partition /dev/nvme0n1p9
Mar 6 00:56:33.896937 update_engine[1978]: I20260306 00:56:33.895051 1978 main.cc:92] Flatcar Update Engine starting
Mar 6 00:56:33.910926 dbus-daemon[1967]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1849 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 6 00:56:33.921729 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 6 00:56:33.931770 systemd[1]: Started update-engine.service - Update Engine.
Mar 6 00:56:33.936269 update_engine[1978]: I20260306 00:56:33.934618 1978 update_check_scheduler.cc:74] Next update check in 8m10s
Mar 6 00:56:33.955968 ntpd[1972]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:50:04 UTC 2026 (1): Starting
Mar 6 00:56:33.962274 extend-filesystems[2035]: resize2fs 1.47.3 (8-Jul-2025)
Mar 6 00:56:33.973294 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Mar 6 00:56:33.973389 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:50:04 UTC 2026 (1): Starting
Mar 6 00:56:33.973389 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 6 00:56:33.973389 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: ----------------------------------------------------
Mar 6 00:56:33.973389 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: ntp-4 is maintained by Network Time Foundation,
Mar 6 00:56:33.973389 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 6 00:56:33.973389 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: corporation. Support and training for ntp-4 are
Mar 6 00:56:33.973389 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: available at https://www.nwtime.org/support
Mar 6 00:56:33.973389 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: ----------------------------------------------------
Mar 6 00:56:33.968421 ntpd[1972]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 6 00:56:33.968451 ntpd[1972]: ----------------------------------------------------
Mar 6 00:56:33.968468 ntpd[1972]: ntp-4 is maintained by Network Time Foundation,
Mar 6 00:56:33.968486 ntpd[1972]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 6 00:56:33.968503 ntpd[1972]: corporation. Support and training for ntp-4 are
Mar 6 00:56:33.968519 ntpd[1972]: available at https://www.nwtime.org/support
Mar 6 00:56:33.968534 ntpd[1972]: ----------------------------------------------------
Mar 6 00:56:33.984018 ntpd[1972]: proto: precision = 0.096 usec (-23)
Mar 6 00:56:33.991437 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: proto: precision = 0.096 usec (-23)
Mar 6 00:56:33.991437 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: basedate set to 2026-02-21
Mar 6 00:56:33.991437 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: gps base set to 2026-02-22 (week 2407)
Mar 6 00:56:33.991437 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: Listen and drop on 0 v6wildcard [::]:123
Mar 6 00:56:33.991437 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 6 00:56:33.984472 ntpd[1972]: basedate set to 2026-02-21
Mar 6 00:56:33.984497 ntpd[1972]: gps base set to 2026-02-22 (week 2407)
Mar 6 00:56:33.984670 ntpd[1972]: Listen and drop on 0 v6wildcard [::]:123
Mar 6 00:56:33.984714 ntpd[1972]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 6 00:56:33.995535 ntpd[1972]: Listen normally on 2 lo 127.0.0.1:123
Mar 6 00:56:33.997456 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: Listen normally on 2 lo 127.0.0.1:123
Mar 6 00:56:33.997456 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: Listen normally on 3 eth0 172.31.24.181:123
Mar 6 00:56:33.997456 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: Listen normally on 4 lo [::1]:123
Mar 6 00:56:33.997456 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: bind(21) AF_INET6 [fe80::41f:ddff:fe67:869%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 6 00:56:33.997456 ntpd[1972]: 6 Mar 00:56:33 ntpd[1972]: unable to create socket on eth0 (5) for [fe80::41f:ddff:fe67:869%2]:123
Mar 6 00:56:33.995598 ntpd[1972]: Listen normally on 3 eth0 172.31.24.181:123
Mar 6 00:56:33.995649 ntpd[1972]: Listen normally on 4 lo [::1]:123
Mar 6 00:56:33.995695 ntpd[1972]: bind(21) AF_INET6 [fe80::41f:ddff:fe67:869%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 6 00:56:33.995730 ntpd[1972]: unable to create socket on eth0 (5) for [fe80::41f:ddff:fe67:869%2]:123
Mar 6 00:56:34.008034 systemd-coredump[2040]: Process 1972 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Mar 6 00:56:34.039629 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 6 00:56:34.048639 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Mar 6 00:56:34.059729 systemd[1]: Started systemd-coredump@0-2040-0.service - Process Core Dump (PID 2040/UID 0).
Mar 6 00:56:34.083365 coreos-metadata[1966]: Mar 06 00:56:34.083 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 6 00:56:34.096411 coreos-metadata[1966]: Mar 06 00:56:34.095 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 6 00:56:34.104263 coreos-metadata[1966]: Mar 06 00:56:34.102 INFO Fetch successful
Mar 6 00:56:34.104263 coreos-metadata[1966]: Mar 06 00:56:34.102 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 6 00:56:34.151424 coreos-metadata[1966]: Mar 06 00:56:34.104 INFO Fetch successful
Mar 6 00:56:34.151424 coreos-metadata[1966]: Mar 06 00:56:34.104 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 6 00:56:34.151424 coreos-metadata[1966]: Mar 06 00:56:34.117 INFO Fetch successful
Mar 6 00:56:34.151424 coreos-metadata[1966]: Mar 06 00:56:34.117 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 6 00:56:34.151424 coreos-metadata[1966]: Mar 06 00:56:34.119 INFO Fetch successful
Mar 6 00:56:34.151424 coreos-metadata[1966]: Mar 06 00:56:34.119 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 6 00:56:34.151424 coreos-metadata[1966]: Mar 06 00:56:34.124 INFO Fetch failed with 404: resource not found
Mar 6 00:56:34.151424 coreos-metadata[1966]: Mar 06 00:56:34.124 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 6 00:56:34.151424 coreos-metadata[1966]: Mar 06 00:56:34.131 INFO Fetch successful
Mar 6 00:56:34.151424 coreos-metadata[1966]: Mar 06 00:56:34.132 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 6 00:56:34.151424 coreos-metadata[1966]: Mar 06 00:56:34.139 INFO Fetch successful
Mar 6 00:56:34.151424 coreos-metadata[1966]: Mar 06 00:56:34.139 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 6 00:56:34.151424 coreos-metadata[1966]: Mar 06 00:56:34.146 INFO Fetch successful
Mar 6 00:56:34.151424 coreos-metadata[1966]: Mar 06 00:56:34.146 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 6 00:56:34.151424 coreos-metadata[1966]: Mar 06 00:56:34.148 INFO Fetch successful
Mar 6 00:56:34.151424 coreos-metadata[1966]: Mar 06 00:56:34.148 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 6 00:56:34.158854 coreos-metadata[1966]: Mar 06 00:56:34.158 INFO Fetch successful
Mar 6 00:56:34.221176 bash[2049]: Updated "/home/core/.ssh/authorized_keys"
Mar 6 00:56:34.222552 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 6 00:56:34.241733 systemd[1]: Starting sshkeys.service...
Mar 6 00:56:34.274351 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Mar 6 00:56:34.348850 extend-filesystems[2035]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 6 00:56:34.348850 extend-filesystems[2035]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 6 00:56:34.348850 extend-filesystems[2035]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Mar 6 00:56:34.365585 systemd-logind[1977]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 6 00:56:34.365650 systemd-logind[1977]: Watching system buttons on /dev/input/event1 (Sleep Button)
Mar 6 00:56:34.370778 extend-filesystems[1970]: Resized filesystem in /dev/nvme0n1p9
Mar 6 00:56:34.380505 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 6 00:56:34.382362 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 6 00:56:34.382595 systemd-logind[1977]: New seat seat0.
Mar 6 00:56:34.399721 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 6 00:56:34.431870 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 6 00:56:34.454008 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 6 00:56:34.471806 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 6 00:56:34.475572 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 6 00:56:34.787862 coreos-metadata[2110]: Mar 06 00:56:34.786 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 6 00:56:34.791926 coreos-metadata[2110]: Mar 06 00:56:34.789 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 6 00:56:34.792556 coreos-metadata[2110]: Mar 06 00:56:34.792 INFO Fetch successful
Mar 6 00:56:34.792556 coreos-metadata[2110]: Mar 06 00:56:34.792 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 6 00:56:34.803431 coreos-metadata[2110]: Mar 06 00:56:34.803 INFO Fetch successful
Mar 6 00:56:34.805417 unknown[2110]: wrote ssh authorized keys file for user: core
Mar 6 00:56:34.864982 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 6 00:56:34.869701 dbus-daemon[1967]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 6 00:56:34.879672 dbus-daemon[1967]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2030 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 6 00:56:34.894585 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 6 00:56:34.897499 systemd-networkd[1849]: eth0: Gained IPv6LL
Mar 6 00:56:34.912727 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 6 00:56:34.918331 systemd[1]: Reached target network-online.target - Network is Online.
Mar 6 00:56:34.929406 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 6 00:56:34.940148 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 00:56:34.948673 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 6 00:56:34.968708 update-ssh-keys[2149]: Updated "/home/core/.ssh/authorized_keys"
Mar 6 00:56:34.972813 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 6 00:56:34.990324 systemd[1]: Finished sshkeys.service.
Mar 6 00:56:35.052512 locksmithd[2032]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 6 00:56:35.212733 containerd[2011]: time="2026-03-06T00:56:35Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 6 00:56:35.227669 containerd[2011]: time="2026-03-06T00:56:35.222636744Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 6 00:56:35.273789 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 6 00:56:35.401456 containerd[2011]: time="2026-03-06T00:56:35.399454764Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.72µs" Mar 6 00:56:35.401456 containerd[2011]: time="2026-03-06T00:56:35.399511020Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 6 00:56:35.401456 containerd[2011]: time="2026-03-06T00:56:35.399547728Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 6 00:56:35.401456 containerd[2011]: time="2026-03-06T00:56:35.399857376Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 6 00:56:35.401456 containerd[2011]: time="2026-03-06T00:56:35.399900888Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 6 00:56:35.401456 containerd[2011]: time="2026-03-06T00:56:35.399952332Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 6 00:56:35.401456 containerd[2011]: time="2026-03-06T00:56:35.400088484Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 6 00:56:35.401456 containerd[2011]: time="2026-03-06T00:56:35.400118940Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 6 00:56:35.401456 containerd[2011]: time="2026-03-06T00:56:35.400702752Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 6 00:56:35.401456 containerd[2011]: time="2026-03-06T00:56:35.400755852Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 6 00:56:35.401456 containerd[2011]: time="2026-03-06T00:56:35.400807884Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 6 00:56:35.401456 containerd[2011]: time="2026-03-06T00:56:35.400831320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 6 00:56:35.401996 containerd[2011]: time="2026-03-06T00:56:35.401058816Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 6 00:56:35.416140 containerd[2011]: time="2026-03-06T00:56:35.415071084Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 6 00:56:35.416140 containerd[2011]: time="2026-03-06T00:56:35.415181892Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 6 00:56:35.416140 containerd[2011]: time="2026-03-06T00:56:35.415217184Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 6 00:56:35.416140 containerd[2011]: time="2026-03-06T00:56:35.415342212Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 6 00:56:35.416140 containerd[2011]: time="2026-03-06T00:56:35.415943016Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 6 00:56:35.416140 containerd[2011]: time="2026-03-06T00:56:35.416136852Z" level=info msg="metadata content store policy set" policy=shared Mar 6 00:56:35.436796 containerd[2011]: time="2026-03-06T00:56:35.432093877Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 
Mar 6 00:56:35.436796 containerd[2011]: time="2026-03-06T00:56:35.432313141Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 6 00:56:35.436796 containerd[2011]: time="2026-03-06T00:56:35.432358465Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 6 00:56:35.436796 containerd[2011]: time="2026-03-06T00:56:35.432387325Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 6 00:56:35.436796 containerd[2011]: time="2026-03-06T00:56:35.432416917Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 6 00:56:35.436796 containerd[2011]: time="2026-03-06T00:56:35.432444229Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 6 00:56:35.436796 containerd[2011]: time="2026-03-06T00:56:35.432472069Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 6 00:56:35.436796 containerd[2011]: time="2026-03-06T00:56:35.432500629Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 6 00:56:35.436796 containerd[2011]: time="2026-03-06T00:56:35.432534865Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 6 00:56:35.436796 containerd[2011]: time="2026-03-06T00:56:35.432561373Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 6 00:56:35.436796 containerd[2011]: time="2026-03-06T00:56:35.432588409Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 6 00:56:35.436796 containerd[2011]: time="2026-03-06T00:56:35.432619225Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 6 
00:56:35.436796 containerd[2011]: time="2026-03-06T00:56:35.432873313Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 6 00:56:35.436796 containerd[2011]: time="2026-03-06T00:56:35.432913837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 6 00:56:35.437559 containerd[2011]: time="2026-03-06T00:56:35.432948469Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 6 00:56:35.437559 containerd[2011]: time="2026-03-06T00:56:35.432975709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 6 00:56:35.437559 containerd[2011]: time="2026-03-06T00:56:35.433001797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 6 00:56:35.437559 containerd[2011]: time="2026-03-06T00:56:35.433035361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 6 00:56:35.437559 containerd[2011]: time="2026-03-06T00:56:35.433079341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 6 00:56:35.437559 containerd[2011]: time="2026-03-06T00:56:35.433107877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 6 00:56:35.437559 containerd[2011]: time="2026-03-06T00:56:35.433135741Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 6 00:56:35.437559 containerd[2011]: time="2026-03-06T00:56:35.433161985Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 6 00:56:35.437559 containerd[2011]: time="2026-03-06T00:56:35.433187569Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 6 00:56:35.437559 containerd[2011]: time="2026-03-06T00:56:35.435643261Z" level=info msg="Get image 
filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 6 00:56:35.437559 containerd[2011]: time="2026-03-06T00:56:35.435710473Z" level=info msg="Start snapshots syncer" Mar 6 00:56:35.437559 containerd[2011]: time="2026-03-06T00:56:35.435787645Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 6 00:56:35.455614 containerd[2011]: time="2026-03-06T00:56:35.449087821Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\
"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 6 00:56:35.455614 containerd[2011]: time="2026-03-06T00:56:35.449282725Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 6 00:56:35.455913 containerd[2011]: time="2026-03-06T00:56:35.449453257Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 6 00:56:35.455913 containerd[2011]: time="2026-03-06T00:56:35.449729305Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 6 00:56:35.455913 containerd[2011]: time="2026-03-06T00:56:35.449789281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 6 00:56:35.455913 containerd[2011]: time="2026-03-06T00:56:35.449822605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 6 00:56:35.455913 containerd[2011]: time="2026-03-06T00:56:35.449854309Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 6 00:56:35.455913 containerd[2011]: time="2026-03-06T00:56:35.449884297Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 6 00:56:35.455913 containerd[2011]: time="2026-03-06T00:56:35.449912521Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 6 00:56:35.455913 containerd[2011]: time="2026-03-06T00:56:35.449948953Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 6 00:56:35.455913 containerd[2011]: time="2026-03-06T00:56:35.450025033Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 
Mar 6 00:56:35.455913 containerd[2011]: time="2026-03-06T00:56:35.450057349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 6 00:56:35.455913 containerd[2011]: time="2026-03-06T00:56:35.450086281Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 6 00:56:35.455913 containerd[2011]: time="2026-03-06T00:56:35.450153769Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 6 00:56:35.455913 containerd[2011]: time="2026-03-06T00:56:35.450192733Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 6 00:56:35.455913 containerd[2011]: time="2026-03-06T00:56:35.450216637Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 6 00:56:35.456532 containerd[2011]: time="2026-03-06T00:56:35.454297777Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 6 00:56:35.456532 containerd[2011]: time="2026-03-06T00:56:35.454336405Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 6 00:56:35.456532 containerd[2011]: time="2026-03-06T00:56:35.454369561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 6 00:56:35.456532 containerd[2011]: time="2026-03-06T00:56:35.454424521Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 6 00:56:35.456532 containerd[2011]: time="2026-03-06T00:56:35.454610797Z" level=info msg="runtime interface created" Mar 6 00:56:35.456532 containerd[2011]: time="2026-03-06T00:56:35.454633237Z" level=info msg="created NRI interface" Mar 6 00:56:35.456532 
containerd[2011]: time="2026-03-06T00:56:35.454666225Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 6 00:56:35.456532 containerd[2011]: time="2026-03-06T00:56:35.454701841Z" level=info msg="Connect containerd service" Mar 6 00:56:35.456532 containerd[2011]: time="2026-03-06T00:56:35.454761109Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 6 00:56:35.474094 containerd[2011]: time="2026-03-06T00:56:35.469544713Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 6 00:56:35.501320 polkitd[2156]: Started polkitd version 126 Mar 6 00:56:35.545857 amazon-ssm-agent[2158]: Initializing new seelog logger Mar 6 00:56:35.545857 amazon-ssm-agent[2158]: New Seelog Logger Creation Complete Mar 6 00:56:35.546638 amazon-ssm-agent[2158]: 2026/03/06 00:56:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 00:56:35.548276 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
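The `failed to load cni during init` error above means containerd's CRI plugin found no network configuration under `/etc/cni/net.d`. On a node that will run kubelet, a CNI provider normally installs this file later; for illustration only, a minimal bridge network config of the kind containerd would accept looks like the sketch below (the file name, network name, and subnet are assumptions, not taken from this log — this follows the standard example shape from the containernetworking bridge plugin documentation).

```json
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.88.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```

Dropped in as, say, `/etc/cni/net.d/10-bridge.conf`, a config like this would be picked up by the CNI conf syncer that containerd starts later in this log; until some provider writes one, the error is expected and benign on a freshly booted node.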
Mar 6 00:56:35.548276 amazon-ssm-agent[2158]: 2026/03/06 00:56:35 processing appconfig overrides Mar 6 00:56:35.548356 polkitd[2156]: Loading rules from directory /etc/polkit-1/rules.d Mar 6 00:56:35.549055 polkitd[2156]: Loading rules from directory /run/polkit-1/rules.d Mar 6 00:56:35.549176 polkitd[2156]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 6 00:56:35.549897 polkitd[2156]: Loading rules from directory /usr/local/share/polkit-1/rules.d Mar 6 00:56:35.549979 polkitd[2156]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 6 00:56:35.550076 polkitd[2156]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 6 00:56:35.552406 amazon-ssm-agent[2158]: 2026/03/06 00:56:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 00:56:35.552879 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 00:56:35.553660 amazon-ssm-agent[2158]: 2026/03/06 00:56:35 processing appconfig overrides Mar 6 00:56:35.555061 amazon-ssm-agent[2158]: 2026/03/06 00:56:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 00:56:35.555215 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 00:56:35.557702 amazon-ssm-agent[2158]: 2026/03/06 00:56:35 processing appconfig overrides Mar 6 00:56:35.559620 amazon-ssm-agent[2158]: 2026-03-06 00:56:35.5522 INFO Proxy environment variables: Mar 6 00:56:35.569328 amazon-ssm-agent[2158]: 2026/03/06 00:56:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 00:56:35.568875 polkitd[2156]: Finished loading, compiling and executing 2 rules Mar 6 00:56:35.569788 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 6 00:56:35.570710 amazon-ssm-agent[2158]: 2026/03/06 00:56:35 processing appconfig overrides Mar 6 00:56:35.571601 systemd[1]: Started polkit.service - Authorization Manager. Mar 6 00:56:35.588512 dbus-daemon[1967]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 6 00:56:35.592823 polkitd[2156]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 6 00:56:35.660320 amazon-ssm-agent[2158]: 2026-03-06 00:56:35.5523 INFO https_proxy: Mar 6 00:56:35.695103 systemd-hostnamed[2030]: Hostname set to (transient) Mar 6 00:56:35.695650 systemd-resolved[1856]: System hostname changed to 'ip-172-31-24-181'. Mar 6 00:56:35.705274 systemd-coredump[2050]: Process 1972 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1972: #0 0x0000aaaab48a0b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaab484fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaab4850240 n/a (ntpd + 0x10240) #3 0x0000aaaab484be14 n/a (ntpd + 0xbe14) #4 0x0000aaaab484d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaab4855a38 n/a (ntpd + 0x15a38) #6 0x0000aaaab484738c n/a (ntpd + 0x738c) #7 0x0000ffffa9312034 n/a (libc.so.6 + 0x22034) #8 0x0000ffffa9312118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaab48473f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Mar 6 00:56:35.712061 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Mar 6 00:56:35.713603 systemd[1]: ntpd.service: Failed with result 'core-dump'. Mar 6 00:56:35.727041 systemd[1]: systemd-coredump@0-2040-0.service: Deactivated successfully. Mar 6 00:56:35.760952 amazon-ssm-agent[2158]: 2026-03-06 00:56:35.5523 INFO http_proxy: Mar 6 00:56:35.848765 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. 
Mar 6 00:56:35.856904 systemd[1]: Started ntpd.service - Network Time Service. Mar 6 00:56:35.865844 amazon-ssm-agent[2158]: 2026-03-06 00:56:35.5523 INFO no_proxy: Mar 6 00:56:35.965348 amazon-ssm-agent[2158]: 2026-03-06 00:56:35.5538 INFO Checking if agent identity type OnPrem can be assumed Mar 6 00:56:35.986671 ntpd[2213]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:50:04 UTC 2026 (1): Starting Mar 6 00:56:35.986813 ntpd[2213]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 6 00:56:35.987431 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: ntpd 4.2.8p18@1.4062-o Thu Mar 5 21:50:04 UTC 2026 (1): Starting Mar 6 00:56:35.987431 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 6 00:56:35.987431 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: ---------------------------------------------------- Mar 6 00:56:35.987431 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: ntp-4 is maintained by Network Time Foundation, Mar 6 00:56:35.987431 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 6 00:56:35.987431 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: corporation. Support and training for ntp-4 are Mar 6 00:56:35.987431 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: available at https://www.nwtime.org/support Mar 6 00:56:35.987431 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: ---------------------------------------------------- Mar 6 00:56:35.986833 ntpd[2213]: ---------------------------------------------------- Mar 6 00:56:35.991314 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: proto: precision = 0.096 usec (-23) Mar 6 00:56:35.986851 ntpd[2213]: ntp-4 is maintained by Network Time Foundation, Mar 6 00:56:35.986868 ntpd[2213]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 6 00:56:35.986883 ntpd[2213]: corporation. 
Support and training for ntp-4 are Mar 6 00:56:35.986900 ntpd[2213]: available at https://www.nwtime.org/support Mar 6 00:56:35.986916 ntpd[2213]: ---------------------------------------------------- Mar 6 00:56:35.988008 ntpd[2213]: proto: precision = 0.096 usec (-23) Mar 6 00:56:35.994611 ntpd[2213]: basedate set to 2026-02-21 Mar 6 00:56:35.994670 ntpd[2213]: gps base set to 2026-02-22 (week 2407) Mar 6 00:56:35.994902 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: basedate set to 2026-02-21 Mar 6 00:56:35.994902 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: gps base set to 2026-02-22 (week 2407) Mar 6 00:56:35.994902 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: Listen and drop on 0 v6wildcard [::]:123 Mar 6 00:56:35.994902 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 6 00:56:35.994831 ntpd[2213]: Listen and drop on 0 v6wildcard [::]:123 Mar 6 00:56:35.994879 ntpd[2213]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 6 00:56:35.995173 ntpd[2213]: Listen normally on 2 lo 127.0.0.1:123 Mar 6 00:56:35.998377 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: Listen normally on 2 lo 127.0.0.1:123 Mar 6 00:56:35.998377 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: Listen normally on 3 eth0 172.31.24.181:123 Mar 6 00:56:35.998377 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: Listen normally on 4 lo [::1]:123 Mar 6 00:56:35.998377 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: Listen normally on 5 eth0 [fe80::41f:ddff:fe67:869%2]:123 Mar 6 00:56:35.998377 ntpd[2213]: 6 Mar 00:56:35 ntpd[2213]: Listening on routing socket on fd #22 for interface updates Mar 6 00:56:35.997804 ntpd[2213]: Listen normally on 3 eth0 172.31.24.181:123 Mar 6 00:56:35.997863 ntpd[2213]: Listen normally on 4 lo [::1]:123 Mar 6 00:56:35.997913 ntpd[2213]: Listen normally on 5 eth0 [fe80::41f:ddff:fe67:869%2]:123 Mar 6 00:56:35.997960 ntpd[2213]: Listening on routing socket on fd #22 for interface updates Mar 6 00:56:36.016190 containerd[2011]: time="2026-03-06T00:56:36.014559035Z" level=info msg="Start 
subscribing containerd event" Mar 6 00:56:36.016190 containerd[2011]: time="2026-03-06T00:56:36.014685215Z" level=info msg="Start recovering state" Mar 6 00:56:36.016190 containerd[2011]: time="2026-03-06T00:56:36.014835167Z" level=info msg="Start event monitor" Mar 6 00:56:36.016190 containerd[2011]: time="2026-03-06T00:56:36.014862815Z" level=info msg="Start cni network conf syncer for default" Mar 6 00:56:36.016190 containerd[2011]: time="2026-03-06T00:56:36.014880719Z" level=info msg="Start streaming server" Mar 6 00:56:36.016190 containerd[2011]: time="2026-03-06T00:56:36.014905055Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 6 00:56:36.016190 containerd[2011]: time="2026-03-06T00:56:36.014923271Z" level=info msg="runtime interface starting up..." Mar 6 00:56:36.016190 containerd[2011]: time="2026-03-06T00:56:36.014950103Z" level=info msg="starting plugins..." Mar 6 00:56:36.016190 containerd[2011]: time="2026-03-06T00:56:36.014981711Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 6 00:56:36.016190 containerd[2011]: time="2026-03-06T00:56:36.015866123Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 6 00:56:36.016190 containerd[2011]: time="2026-03-06T00:56:36.016076915Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 6 00:56:36.018671 sshd_keygen[1990]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 6 00:56:36.020792 systemd[1]: Started containerd.service - containerd container runtime. 
Mar 6 00:56:36.025759 ntpd[2213]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 6 00:56:36.025851 ntpd[2213]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 6 00:56:36.026078 ntpd[2213]: 6 Mar 00:56:36 ntpd[2213]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 6 00:56:36.026078 ntpd[2213]: 6 Mar 00:56:36 ntpd[2213]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 6 00:56:36.027191 containerd[2011]: time="2026-03-06T00:56:36.026975436Z" level=info msg="containerd successfully booted in 0.828370s" Mar 6 00:56:36.037459 tar[1986]: linux-arm64/README.md Mar 6 00:56:36.065267 amazon-ssm-agent[2158]: 2026-03-06 00:56:35.5540 INFO Checking if agent identity type EC2 can be assumed Mar 6 00:56:36.087604 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 6 00:56:36.117636 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 6 00:56:36.125026 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 6 00:56:36.131789 systemd[1]: Started sshd@0-172.31.24.181:22-68.220.241.50:37840.service - OpenSSH per-connection server daemon (68.220.241.50:37840). Mar 6 00:56:36.163258 amazon-ssm-agent[2158]: 2026-03-06 00:56:35.8399 INFO Agent will take identity from EC2 Mar 6 00:56:36.169723 systemd[1]: issuegen.service: Deactivated successfully. Mar 6 00:56:36.174365 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 6 00:56:36.186438 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 6 00:56:36.246165 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 6 00:56:36.254954 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 6 00:56:36.262727 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 6 00:56:36.262902 amazon-ssm-agent[2158]: 2026-03-06 00:56:35.8436 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Mar 6 00:56:36.266499 systemd[1]: Reached target getty.target - Login Prompts. 
Mar 6 00:56:36.362269 amazon-ssm-agent[2158]: 2026-03-06 00:56:35.8437 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 6 00:56:36.461540 amazon-ssm-agent[2158]: 2026-03-06 00:56:35.8437 INFO [amazon-ssm-agent] Starting Core Agent Mar 6 00:56:36.562380 amazon-ssm-agent[2158]: 2026-03-06 00:56:35.8437 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Mar 6 00:56:36.611727 amazon-ssm-agent[2158]: 2026/03/06 00:56:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 00:56:36.611982 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 6 00:56:36.612184 amazon-ssm-agent[2158]: 2026/03/06 00:56:36 processing appconfig overrides Mar 6 00:56:36.642753 amazon-ssm-agent[2158]: 2026-03-06 00:56:35.8437 INFO [Registrar] Starting registrar module Mar 6 00:56:36.642753 amazon-ssm-agent[2158]: 2026-03-06 00:56:35.8645 INFO [EC2Identity] Checking disk for registration info Mar 6 00:56:36.642753 amazon-ssm-agent[2158]: 2026-03-06 00:56:35.8645 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Mar 6 00:56:36.642947 amazon-ssm-agent[2158]: 2026-03-06 00:56:35.8645 INFO [EC2Identity] Generating registration keypair Mar 6 00:56:36.642947 amazon-ssm-agent[2158]: 2026-03-06 00:56:36.5590 INFO [EC2Identity] Checking write access before registering Mar 6 00:56:36.642947 amazon-ssm-agent[2158]: 2026-03-06 00:56:36.5597 INFO [EC2Identity] Registering EC2 instance with Systems Manager Mar 6 00:56:36.642947 amazon-ssm-agent[2158]: 2026-03-06 00:56:36.6113 INFO [EC2Identity] EC2 registration was successful. Mar 6 00:56:36.642947 amazon-ssm-agent[2158]: 2026-03-06 00:56:36.6114 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Mar 6 00:56:36.642947 amazon-ssm-agent[2158]: 2026-03-06 00:56:36.6115 INFO [CredentialRefresher] credentialRefresher has started Mar 6 00:56:36.642947 amazon-ssm-agent[2158]: 2026-03-06 00:56:36.6115 INFO [CredentialRefresher] Starting credentials refresher loop Mar 6 00:56:36.642947 amazon-ssm-agent[2158]: 2026-03-06 00:56:36.6423 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 6 00:56:36.642947 amazon-ssm-agent[2158]: 2026-03-06 00:56:36.6426 INFO [CredentialRefresher] Credentials ready Mar 6 00:56:36.661474 amazon-ssm-agent[2158]: 2026-03-06 00:56:36.6429 INFO [CredentialRefresher] Next credential rotation will be in 29.9999909615 minutes Mar 6 00:56:36.740901 sshd[2234]: Accepted publickey for core from 68.220.241.50 port 37840 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:56:36.744000 sshd-session[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:56:36.759428 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 6 00:56:36.766949 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 6 00:56:36.792462 systemd-logind[1977]: New session 1 of user core. Mar 6 00:56:36.804837 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 6 00:56:36.813896 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 6 00:56:36.843601 (systemd)[2246]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 6 00:56:36.850200 systemd-logind[1977]: New session c1 of user core. Mar 6 00:56:37.154704 systemd[2246]: Queued start job for default target default.target. Mar 6 00:56:37.162894 systemd[2246]: Created slice app.slice - User Application Slice. Mar 6 00:56:37.162974 systemd[2246]: Reached target paths.target - Paths. Mar 6 00:56:37.163073 systemd[2246]: Reached target timers.target - Timers. 
Mar 6 00:56:37.170483 systemd[2246]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 6 00:56:37.204349 systemd[2246]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 6 00:56:37.204946 systemd[2246]: Reached target sockets.target - Sockets. Mar 6 00:56:37.205195 systemd[2246]: Reached target basic.target - Basic System. Mar 6 00:56:37.205593 systemd[2246]: Reached target default.target - Main User Target. Mar 6 00:56:37.205804 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 6 00:56:37.206042 systemd[2246]: Startup finished in 337ms. Mar 6 00:56:37.227595 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 6 00:56:37.499528 systemd[1]: Started sshd@1-172.31.24.181:22-68.220.241.50:37856.service - OpenSSH per-connection server daemon (68.220.241.50:37856). Mar 6 00:56:37.680764 amazon-ssm-agent[2158]: 2026-03-06 00:56:37.6804 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 6 00:56:37.781333 amazon-ssm-agent[2158]: 2026-03-06 00:56:37.6841 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2262) started Mar 6 00:56:37.842070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 00:56:37.847222 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 6 00:56:37.853471 systemd[1]: Startup finished in 3.818s (kernel) + 9.434s (initrd) + 10.790s (userspace) = 24.043s. 
Mar 6 00:56:37.868943 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 6 00:56:37.882197 amazon-ssm-agent[2158]: 2026-03-06 00:56:37.6842 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 6 00:56:37.986383 sshd[2257]: Accepted publickey for core from 68.220.241.50 port 37856 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:56:37.989876 sshd-session[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:56:38.003340 systemd-logind[1977]: New session 2 of user core. Mar 6 00:56:38.013552 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 6 00:56:38.238181 sshd[2284]: Connection closed by 68.220.241.50 port 37856 Mar 6 00:56:38.239629 sshd-session[2257]: pam_unix(sshd:session): session closed for user core Mar 6 00:56:38.249594 systemd[1]: sshd@1-172.31.24.181:22-68.220.241.50:37856.service: Deactivated successfully. Mar 6 00:56:38.254481 systemd[1]: session-2.scope: Deactivated successfully. Mar 6 00:56:38.258184 systemd-logind[1977]: Session 2 logged out. Waiting for processes to exit. Mar 6 00:56:38.261146 systemd-logind[1977]: Removed session 2. Mar 6 00:56:38.335647 systemd[1]: Started sshd@2-172.31.24.181:22-68.220.241.50:37872.service - OpenSSH per-connection server daemon (68.220.241.50:37872). Mar 6 00:56:38.844626 sshd[2294]: Accepted publickey for core from 68.220.241.50 port 37872 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:56:38.847520 sshd-session[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:56:38.858365 systemd-logind[1977]: New session 3 of user core. Mar 6 00:56:38.868613 systemd[1]: Started session-3.scope - Session 3 of User core. 
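The `Referenced but unset environment variable` warning from kubelet.service above is cosmetic: the unit's `ExecStart` references `KUBELET_EXTRA_ARGS` and `KUBELET_KUBEADM_ARGS`, which nothing has defined yet. If extra flags were wanted, the conventional way to supply them is a systemd drop-in; the sketch below is illustrative only (the drop-in path and the `--node-ip` value are assumptions, not taken from this log).

```ini
# /etc/systemd/system/kubelet.service.d/20-extra-args.conf (hypothetical path)
[Service]
# Value shown is an example; any kubelet flags can go here.
Environment="KUBELET_EXTRA_ARGS=--node-ip=172.31.24.181"
```

After `systemctl daemon-reload && systemctl restart kubelet`, the variable would expand non-empty and the warning would disappear; leaving it unset simply expands to an empty string, as the message says.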
Mar 6 00:56:39.093029 sshd[2297]: Connection closed by 68.220.241.50 port 37872 Mar 6 00:56:39.094029 sshd-session[2294]: pam_unix(sshd:session): session closed for user core Mar 6 00:56:39.104541 systemd-logind[1977]: Session 3 logged out. Waiting for processes to exit. Mar 6 00:56:39.105884 systemd[1]: sshd@2-172.31.24.181:22-68.220.241.50:37872.service: Deactivated successfully. Mar 6 00:56:39.109680 systemd[1]: session-3.scope: Deactivated successfully. Mar 6 00:56:39.114500 systemd-logind[1977]: Removed session 3. Mar 6 00:56:39.185683 systemd[1]: Started sshd@3-172.31.24.181:22-68.220.241.50:37882.service - OpenSSH per-connection server daemon (68.220.241.50:37882). Mar 6 00:56:39.250172 kubelet[2271]: E0306 00:56:39.250069 2271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 6 00:56:39.255595 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 6 00:56:39.256077 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 6 00:56:39.258413 systemd[1]: kubelet.service: Consumed 1.509s CPU time, 259.1M memory peak. Mar 6 00:56:39.653409 sshd[2304]: Accepted publickey for core from 68.220.241.50 port 37882 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:56:39.654989 sshd-session[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:56:39.664513 systemd-logind[1977]: New session 4 of user core. Mar 6 00:56:39.668480 systemd[1]: Started session-4.scope - Session 4 of User core. 
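The kubelet failure above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is the expected state on a node that has not yet been joined to a cluster: `kubeadm init` or `kubeadm join` writes that file. Purely for illustration, a minimal `KubeletConfiguration` of the kind that lives at that path looks like the sketch below (the specific field values are assumptions; on this node the real file would be generated by kubeadm, not hand-written).

```yaml
# /var/lib/kubelet/config.yaml -- illustrative sketch only
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
authentication:
  anonymous:
    enabled: false
```

Until the file exists, systemd keeps scheduling restarts (visible as `kubelet.service: Scheduled restart job` later in this log) and the kubelet exits with the same error each time, which is harmless pre-join behavior.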
Mar 6 00:56:39.895107 sshd[2308]: Connection closed by 68.220.241.50 port 37882 Mar 6 00:56:39.895940 sshd-session[2304]: pam_unix(sshd:session): session closed for user core Mar 6 00:56:39.903082 systemd-logind[1977]: Session 4 logged out. Waiting for processes to exit. Mar 6 00:56:39.903470 systemd[1]: sshd@3-172.31.24.181:22-68.220.241.50:37882.service: Deactivated successfully. Mar 6 00:56:39.907125 systemd[1]: session-4.scope: Deactivated successfully. Mar 6 00:56:39.910803 systemd-logind[1977]: Removed session 4. Mar 6 00:56:39.987638 systemd[1]: Started sshd@4-172.31.24.181:22-68.220.241.50:37896.service - OpenSSH per-connection server daemon (68.220.241.50:37896). Mar 6 00:56:40.450291 sshd[2314]: Accepted publickey for core from 68.220.241.50 port 37896 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:56:40.452755 sshd-session[2314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:56:40.465331 systemd-logind[1977]: New session 5 of user core. Mar 6 00:56:40.471516 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 6 00:56:40.629983 sudo[2318]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 6 00:56:40.631080 sudo[2318]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 00:56:40.646514 sudo[2318]: pam_unix(sudo:session): session closed for user root Mar 6 00:56:40.728382 sshd[2317]: Connection closed by 68.220.241.50 port 37896 Mar 6 00:56:40.726949 sshd-session[2314]: pam_unix(sshd:session): session closed for user core Mar 6 00:56:40.735778 systemd-logind[1977]: Session 5 logged out. Waiting for processes to exit. Mar 6 00:56:40.736648 systemd[1]: sshd@4-172.31.24.181:22-68.220.241.50:37896.service: Deactivated successfully. Mar 6 00:56:40.740973 systemd[1]: session-5.scope: Deactivated successfully. Mar 6 00:56:40.744714 systemd-logind[1977]: Removed session 5. 
Mar 6 00:56:40.822796 systemd[1]: Started sshd@5-172.31.24.181:22-68.220.241.50:37910.service - OpenSSH per-connection server daemon (68.220.241.50:37910). Mar 6 00:56:41.284809 sshd[2324]: Accepted publickey for core from 68.220.241.50 port 37910 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:56:41.287252 sshd-session[2324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:56:41.295321 systemd-logind[1977]: New session 6 of user core. Mar 6 00:56:41.314495 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 6 00:56:41.451601 sudo[2329]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 6 00:56:41.452307 sudo[2329]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 00:56:41.461153 sudo[2329]: pam_unix(sudo:session): session closed for user root Mar 6 00:56:41.476025 sudo[2328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 6 00:56:41.477500 sudo[2328]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 00:56:41.495608 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 6 00:56:41.557572 augenrules[2351]: No rules Mar 6 00:56:41.560520 systemd[1]: audit-rules.service: Deactivated successfully. Mar 6 00:56:41.562377 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 6 00:56:41.565552 sudo[2328]: pam_unix(sudo:session): session closed for user root Mar 6 00:56:41.644025 sshd[2327]: Connection closed by 68.220.241.50 port 37910 Mar 6 00:56:41.644514 sshd-session[2324]: pam_unix(sshd:session): session closed for user core Mar 6 00:56:41.653149 systemd[1]: sshd@5-172.31.24.181:22-68.220.241.50:37910.service: Deactivated successfully. Mar 6 00:56:41.656744 systemd[1]: session-6.scope: Deactivated successfully. Mar 6 00:56:41.659118 systemd-logind[1977]: Session 6 logged out. 
Waiting for processes to exit. Mar 6 00:56:41.661838 systemd-logind[1977]: Removed session 6. Mar 6 00:56:41.736268 systemd[1]: Started sshd@6-172.31.24.181:22-68.220.241.50:37922.service - OpenSSH per-connection server daemon (68.220.241.50:37922). Mar 6 00:56:42.198142 sshd[2360]: Accepted publickey for core from 68.220.241.50 port 37922 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:56:42.200509 sshd-session[2360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:56:42.209457 systemd-logind[1977]: New session 7 of user core. Mar 6 00:56:42.217526 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 6 00:56:42.364601 sudo[2364]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 6 00:56:42.365204 sudo[2364]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 6 00:56:43.283515 systemd-resolved[1856]: Clock change detected. Flushing caches. Mar 6 00:56:43.387277 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 6 00:56:43.402255 (dockerd)[2382]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 6 00:56:43.948390 dockerd[2382]: time="2026-03-06T00:56:43.947811727Z" level=info msg="Starting up" Mar 6 00:56:43.949300 dockerd[2382]: time="2026-03-06T00:56:43.949250623Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 6 00:56:43.970064 dockerd[2382]: time="2026-03-06T00:56:43.969984319Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 6 00:56:44.105363 systemd[1]: var-lib-docker-metacopy\x2dcheck3581591368-merged.mount: Deactivated successfully. Mar 6 00:56:44.137551 dockerd[2382]: time="2026-03-06T00:56:44.137481664Z" level=info msg="Loading containers: start." 
Mar 6 00:56:44.150496 kernel: Initializing XFRM netlink socket Mar 6 00:56:44.519730 (udev-worker)[2406]: Network interface NamePolicy= disabled on kernel command line. Mar 6 00:56:44.596328 systemd-networkd[1849]: docker0: Link UP Mar 6 00:56:44.602077 dockerd[2382]: time="2026-03-06T00:56:44.602004366Z" level=info msg="Loading containers: done." Mar 6 00:56:44.629658 dockerd[2382]: time="2026-03-06T00:56:44.629593614Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 6 00:56:44.630140 dockerd[2382]: time="2026-03-06T00:56:44.629980782Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 6 00:56:44.630304 dockerd[2382]: time="2026-03-06T00:56:44.630266670Z" level=info msg="Initializing buildkit" Mar 6 00:56:44.668008 dockerd[2382]: time="2026-03-06T00:56:44.667745130Z" level=info msg="Completed buildkit initialization" Mar 6 00:56:44.683597 dockerd[2382]: time="2026-03-06T00:56:44.683542914Z" level=info msg="Daemon has completed initialization" Mar 6 00:56:44.683812 dockerd[2382]: time="2026-03-06T00:56:44.683767038Z" level=info msg="API listen on /run/docker.sock" Mar 6 00:56:44.684116 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 6 00:56:45.716304 containerd[2011]: time="2026-03-06T00:56:45.715673264Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 6 00:56:46.344401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2435022866.mount: Deactivated successfully. 
Mar 6 00:56:47.738330 containerd[2011]: time="2026-03-06T00:56:47.738267922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:47.741192 containerd[2011]: time="2026-03-06T00:56:47.741127510Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174"
Mar 6 00:56:47.743582 containerd[2011]: time="2026-03-06T00:56:47.743499094Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:47.748388 containerd[2011]: time="2026-03-06T00:56:47.748312006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:47.752473 containerd[2011]: time="2026-03-06T00:56:47.752225158Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 2.036494858s"
Mar 6 00:56:47.752473 containerd[2011]: time="2026-03-06T00:56:47.752280574Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\""
Mar 6 00:56:47.753167 containerd[2011]: time="2026-03-06T00:56:47.753124258Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 6 00:56:49.141662 containerd[2011]: time="2026-03-06T00:56:49.141225873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:49.142847 containerd[2011]: time="2026-03-06T00:56:49.142772469Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106"
Mar 6 00:56:49.144365 containerd[2011]: time="2026-03-06T00:56:49.144282813Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:49.150175 containerd[2011]: time="2026-03-06T00:56:49.150108333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:49.152480 containerd[2011]: time="2026-03-06T00:56:49.152354349Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.399174195s"
Mar 6 00:56:49.152480 containerd[2011]: time="2026-03-06T00:56:49.152445849Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\""
Mar 6 00:56:49.153622 containerd[2011]: time="2026-03-06T00:56:49.153213141Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 6 00:56:49.746763 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 6 00:56:49.752916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 00:56:50.199758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 00:56:50.215022 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 00:56:50.309421 kubelet[2669]: E0306 00:56:50.309337 2669 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 00:56:50.321048 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 00:56:50.321351 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 00:56:50.323391 systemd[1]: kubelet.service: Consumed 383ms CPU time, 104.4M memory peak.
Mar 6 00:56:50.624108 containerd[2011]: time="2026-03-06T00:56:50.623143752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:50.625391 containerd[2011]: time="2026-03-06T00:56:50.625317312Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305"
Mar 6 00:56:50.627439 containerd[2011]: time="2026-03-06T00:56:50.627347628Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:50.635683 containerd[2011]: time="2026-03-06T00:56:50.635614260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:50.638018 containerd[2011]: time="2026-03-06T00:56:50.637952016Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.484683303s"
Mar 6 00:56:50.638018 containerd[2011]: time="2026-03-06T00:56:50.638014152Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\""
Mar 6 00:56:50.639501 containerd[2011]: time="2026-03-06T00:56:50.638706516Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 6 00:56:51.935128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount637880033.mount: Deactivated successfully.
Mar 6 00:56:52.548715 containerd[2011]: time="2026-03-06T00:56:52.548660162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:52.549730 containerd[2011]: time="2026-03-06T00:56:52.549671282Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870"
Mar 6 00:56:52.550928 containerd[2011]: time="2026-03-06T00:56:52.550848158Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:52.553930 containerd[2011]: time="2026-03-06T00:56:52.553851470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:52.556018 containerd[2011]: time="2026-03-06T00:56:52.555632042Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.916864242s"
Mar 6 00:56:52.556018 containerd[2011]: time="2026-03-06T00:56:52.555693050Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 6 00:56:52.556708 containerd[2011]: time="2026-03-06T00:56:52.556652210Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 6 00:56:53.040599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580364665.mount: Deactivated successfully.
Mar 6 00:56:54.393789 containerd[2011]: time="2026-03-06T00:56:54.392786511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:54.416238 containerd[2011]: time="2026-03-06T00:56:54.416133687Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Mar 6 00:56:54.451483 containerd[2011]: time="2026-03-06T00:56:54.450822711Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:54.459484 containerd[2011]: time="2026-03-06T00:56:54.459380055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:54.462018 containerd[2011]: time="2026-03-06T00:56:54.461955339Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.905240549s"
Mar 6 00:56:54.462235 containerd[2011]: time="2026-03-06T00:56:54.462201807Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 6 00:56:54.463034 containerd[2011]: time="2026-03-06T00:56:54.462952227Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 6 00:56:54.923868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2953861180.mount: Deactivated successfully.
Mar 6 00:56:54.932963 containerd[2011]: time="2026-03-06T00:56:54.932878745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 00:56:54.934046 containerd[2011]: time="2026-03-06T00:56:54.933874121Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 6 00:56:54.935520 containerd[2011]: time="2026-03-06T00:56:54.935332781Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 00:56:54.940507 containerd[2011]: time="2026-03-06T00:56:54.939488513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 6 00:56:54.941807 containerd[2011]: time="2026-03-06T00:56:54.941108681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 478.078346ms"
Mar 6 00:56:54.941807 containerd[2011]: time="2026-03-06T00:56:54.941179085Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 6 00:56:54.941986 containerd[2011]: time="2026-03-06T00:56:54.941805233Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 6 00:56:55.461834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2950862152.mount: Deactivated successfully.
Mar 6 00:56:56.818063 containerd[2011]: time="2026-03-06T00:56:56.817948555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:56.819982 containerd[2011]: time="2026-03-06T00:56:56.819905155Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780"
Mar 6 00:56:56.821478 containerd[2011]: time="2026-03-06T00:56:56.821241307Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:56.826934 containerd[2011]: time="2026-03-06T00:56:56.826852807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:56:56.830985 containerd[2011]: time="2026-03-06T00:56:56.830776183Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.888920358s"
Mar 6 00:56:56.830985 containerd[2011]: time="2026-03-06T00:56:56.830838847Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Mar 6 00:57:00.495425 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 6 00:57:00.499849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 00:57:00.875748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 00:57:00.892153 (kubelet)[2833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 6 00:57:00.975484 kubelet[2833]: E0306 00:57:00.975332 2833 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 6 00:57:00.979876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 6 00:57:00.980209 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 6 00:57:00.981719 systemd[1]: kubelet.service: Consumed 346ms CPU time, 107M memory peak.
Mar 6 00:57:04.624605 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 00:57:04.625541 systemd[1]: kubelet.service: Consumed 346ms CPU time, 107M memory peak.
Mar 6 00:57:04.630635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 00:57:04.684205 systemd[1]: Reload requested from client PID 2847 ('systemctl') (unit session-7.scope)...
Mar 6 00:57:04.684240 systemd[1]: Reloading...
Mar 6 00:57:04.959494 zram_generator::config[2895]: No configuration found.
Mar 6 00:57:05.439906 systemd[1]: Reloading finished in 754 ms.
Mar 6 00:57:05.521055 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 6 00:57:05.521238 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 6 00:57:05.522552 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 00:57:05.522641 systemd[1]: kubelet.service: Consumed 239ms CPU time, 94.9M memory peak.
Mar 6 00:57:05.525674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 6 00:57:05.880250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 6 00:57:05.901363 (kubelet)[2955]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 6 00:57:05.984241 kubelet[2955]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 6 00:57:05.984241 kubelet[2955]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 6 00:57:05.984241 kubelet[2955]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 6 00:57:05.984950 kubelet[2955]: I0306 00:57:05.984324 2955 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 6 00:57:06.022187 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 6 00:57:08.569172 kubelet[2955]: I0306 00:57:08.569118 2955 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 6 00:57:08.571525 kubelet[2955]: I0306 00:57:08.569827 2955 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 6 00:57:08.571525 kubelet[2955]: I0306 00:57:08.570231 2955 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 6 00:57:08.612037 kubelet[2955]: E0306 00:57:08.611964 2955 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.181:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.181:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 6 00:57:08.613848 kubelet[2955]: I0306 00:57:08.613786 2955 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 6 00:57:08.631158 kubelet[2955]: I0306 00:57:08.631112 2955 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 6 00:57:08.638156 kubelet[2955]: I0306 00:57:08.638091 2955 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 6 00:57:08.640092 kubelet[2955]: I0306 00:57:08.640013 2955 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 6 00:57:08.640444 kubelet[2955]: I0306 00:57:08.640075 2955 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-181","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 6 00:57:08.640444 kubelet[2955]: I0306 00:57:08.640442 2955 topology_manager.go:138] "Creating topology manager with none policy"
Mar 6 00:57:08.640721 kubelet[2955]: I0306 00:57:08.640484 2955 container_manager_linux.go:303] "Creating device plugin manager"
Mar 6 00:57:08.640891 kubelet[2955]: I0306 00:57:08.640844 2955 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 00:57:08.647149 kubelet[2955]: I0306 00:57:08.647082 2955 kubelet.go:480] "Attempting to sync node with API server"
Mar 6 00:57:08.647149 kubelet[2955]: I0306 00:57:08.647130 2955 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 6 00:57:08.648614 kubelet[2955]: I0306 00:57:08.647175 2955 kubelet.go:386] "Adding apiserver pod source"
Mar 6 00:57:08.648614 kubelet[2955]: I0306 00:57:08.647204 2955 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 6 00:57:08.659262 kubelet[2955]: I0306 00:57:08.659226 2955 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 6 00:57:08.660612 kubelet[2955]: I0306 00:57:08.660574 2955 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 6 00:57:08.661015 kubelet[2955]: W0306 00:57:08.660993 2955 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 6 00:57:08.664275 kubelet[2955]: E0306 00:57:08.664191 2955 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-181&limit=500&resourceVersion=0\": dial tcp 172.31.24.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 6 00:57:08.664525 kubelet[2955]: E0306 00:57:08.664478 2955 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.181:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 6 00:57:08.667501 kubelet[2955]: I0306 00:57:08.667436 2955 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 6 00:57:08.667724 kubelet[2955]: I0306 00:57:08.667705 2955 server.go:1289] "Started kubelet"
Mar 6 00:57:08.673583 kubelet[2955]: I0306 00:57:08.673536 2955 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 6 00:57:08.676572 kubelet[2955]: E0306 00:57:08.674253 2955 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.181:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.181:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-181.189a1a948e3e765a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-181,UID:ip-172-31-24-181,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-181,},FirstTimestamp:2026-03-06 00:57:08.667651674 +0000 UTC m=+2.757267207,LastTimestamp:2026-03-06 00:57:08.667651674 +0000 UTC m=+2.757267207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-181,}"
Mar 6 00:57:08.678014 kubelet[2955]: I0306 00:57:08.677800 2955 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 6 00:57:08.680701 kubelet[2955]: I0306 00:57:08.680600 2955 server.go:317] "Adding debug handlers to kubelet server"
Mar 6 00:57:08.684568 kubelet[2955]: I0306 00:57:08.684486 2955 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 6 00:57:08.686484 kubelet[2955]: E0306 00:57:08.685252 2955 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-181\" not found"
Mar 6 00:57:08.686484 kubelet[2955]: I0306 00:57:08.686045 2955 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 6 00:57:08.686484 kubelet[2955]: I0306 00:57:08.686164 2955 reconciler.go:26] "Reconciler: start to sync state"
Mar 6 00:57:08.689554 kubelet[2955]: E0306 00:57:08.689399 2955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-181?timeout=10s\": dial tcp 172.31.24.181:6443: connect: connection refused" interval="200ms"
Mar 6 00:57:08.690300 kubelet[2955]: E0306 00:57:08.690249 2955 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 6 00:57:08.690887 kubelet[2955]: I0306 00:57:08.690851 2955 factory.go:223] Registration of the systemd container factory successfully
Mar 6 00:57:08.691179 kubelet[2955]: I0306 00:57:08.691146 2955 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 6 00:57:08.692072 kubelet[2955]: I0306 00:57:08.691788 2955 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 6 00:57:08.692285 kubelet[2955]: I0306 00:57:08.692184 2955 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 6 00:57:08.693030 kubelet[2955]: I0306 00:57:08.692948 2955 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 6 00:57:08.695089 kubelet[2955]: E0306 00:57:08.695033 2955 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 6 00:57:08.700548 kubelet[2955]: I0306 00:57:08.700010 2955 factory.go:223] Registration of the containerd container factory successfully
Mar 6 00:57:08.749188 kubelet[2955]: I0306 00:57:08.749133 2955 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 6 00:57:08.750327 kubelet[2955]: I0306 00:57:08.750279 2955 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 6 00:57:08.750327 kubelet[2955]: I0306 00:57:08.750314 2955 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 6 00:57:08.750585 kubelet[2955]: I0306 00:57:08.750344 2955 state_mem.go:36] "Initialized new in-memory state store"
Mar 6 00:57:08.753497 kubelet[2955]: I0306 00:57:08.753408 2955 policy_none.go:49] "None policy: Start"
Mar 6 00:57:08.753497 kubelet[2955]: I0306 00:57:08.753480 2955 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 6 00:57:08.753497 kubelet[2955]: I0306 00:57:08.753508 2955 state_mem.go:35] "Initializing new in-memory state store"
Mar 6 00:57:08.755009 kubelet[2955]: I0306 00:57:08.754422 2955 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 6 00:57:08.755009 kubelet[2955]: I0306 00:57:08.754529 2955 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 6 00:57:08.755009 kubelet[2955]: I0306 00:57:08.754563 2955 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 6 00:57:08.755009 kubelet[2955]: I0306 00:57:08.754578 2955 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 6 00:57:08.755009 kubelet[2955]: E0306 00:57:08.754648 2955 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 6 00:57:08.758423 kubelet[2955]: E0306 00:57:08.758346 2955 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 6 00:57:08.769610 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 6 00:57:08.785580 kubelet[2955]: E0306 00:57:08.785506 2955 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-181\" not found"
Mar 6 00:57:08.790252 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 6 00:57:08.798201 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 6 00:57:08.818816 kubelet[2955]: E0306 00:57:08.818552 2955 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 6 00:57:08.819846 kubelet[2955]: I0306 00:57:08.819724 2955 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 6 00:57:08.821660 kubelet[2955]: I0306 00:57:08.819758 2955 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 6 00:57:08.821660 kubelet[2955]: I0306 00:57:08.821229 2955 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 6 00:57:08.822528 kubelet[2955]: E0306 00:57:08.822480 2955 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 6 00:57:08.822732 kubelet[2955]: E0306 00:57:08.822710 2955 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-181\" not found"
Mar 6 00:57:08.877619 systemd[1]: Created slice kubepods-burstable-podef63c3957b873175ef82c33635dcd13a.slice - libcontainer container kubepods-burstable-podef63c3957b873175ef82c33635dcd13a.slice.
Mar 6 00:57:08.890839 kubelet[2955]: E0306 00:57:08.890722 2955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-181?timeout=10s\": dial tcp 172.31.24.181:6443: connect: connection refused" interval="400ms"
Mar 6 00:57:08.910923 kubelet[2955]: E0306 00:57:08.910801 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-181\" not found" node="ip-172-31-24-181"
Mar 6 00:57:08.919194 systemd[1]: Created slice kubepods-burstable-pod83cb99b342e5a5068569985c70d58960.slice - libcontainer container kubepods-burstable-pod83cb99b342e5a5068569985c70d58960.slice.
Mar 6 00:57:08.925151 kubelet[2955]: E0306 00:57:08.925114 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-181\" not found" node="ip-172-31-24-181"
Mar 6 00:57:08.926591 kubelet[2955]: I0306 00:57:08.926186 2955 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-181"
Mar 6 00:57:08.927386 kubelet[2955]: E0306 00:57:08.927294 2955 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.181:6443/api/v1/nodes\": dial tcp 172.31.24.181:6443: connect: connection refused" node="ip-172-31-24-181"
Mar 6 00:57:08.930433 systemd[1]: Created slice kubepods-burstable-podb85f8b4467fb727b5bd9c7fbafc06333.slice - libcontainer container kubepods-burstable-podb85f8b4467fb727b5bd9c7fbafc06333.slice.
Mar 6 00:57:08.935590 kubelet[2955]: E0306 00:57:08.935531 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-181\" not found" node="ip-172-31-24-181"
Mar 6 00:57:08.987491 kubelet[2955]: I0306 00:57:08.987327 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef63c3957b873175ef82c33635dcd13a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-181\" (UID: \"ef63c3957b873175ef82c33635dcd13a\") " pod="kube-system/kube-apiserver-ip-172-31-24-181"
Mar 6 00:57:08.987491 kubelet[2955]: I0306 00:57:08.987413 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83cb99b342e5a5068569985c70d58960-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-181\" (UID: \"83cb99b342e5a5068569985c70d58960\") " pod="kube-system/kube-controller-manager-ip-172-31-24-181"
Mar 6 00:57:08.987743 kubelet[2955]: I0306 00:57:08.987717 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef63c3957b873175ef82c33635dcd13a-ca-certs\") pod \"kube-apiserver-ip-172-31-24-181\" (UID: \"ef63c3957b873175ef82c33635dcd13a\") " pod="kube-system/kube-apiserver-ip-172-31-24-181"
Mar 6 00:57:08.987880 kubelet[2955]: I0306 00:57:08.987857 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83cb99b342e5a5068569985c70d58960-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-181\" (UID: \"83cb99b342e5a5068569985c70d58960\") " pod="kube-system/kube-controller-manager-ip-172-31-24-181"
Mar 6 00:57:08.988008 kubelet[2955]: I0306 00:57:08.987985 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/83cb99b342e5a5068569985c70d58960-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-181\" (UID: \"83cb99b342e5a5068569985c70d58960\") " pod="kube-system/kube-controller-manager-ip-172-31-24-181"
Mar 6 00:57:08.988176 kubelet[2955]: I0306 00:57:08.988151 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83cb99b342e5a5068569985c70d58960-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-181\" (UID: \"83cb99b342e5a5068569985c70d58960\") " pod="kube-system/kube-controller-manager-ip-172-31-24-181"
Mar 6 00:57:08.988315 kubelet[2955]: I0306 00:57:08.988291 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/83cb99b342e5a5068569985c70d58960-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-181\" (UID: \"83cb99b342e5a5068569985c70d58960\") " pod="kube-system/kube-controller-manager-ip-172-31-24-181"
Mar 6 00:57:08.988484 kubelet[2955]: I0306 00:57:08.988445 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b85f8b4467fb727b5bd9c7fbafc06333-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-181\" (UID: \"b85f8b4467fb727b5bd9c7fbafc06333\") " pod="kube-system/kube-scheduler-ip-172-31-24-181"
Mar 6 00:57:08.988613 kubelet[2955]: I0306 00:57:08.988590 2955 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef63c3957b873175ef82c33635dcd13a-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-181\" (UID: \"ef63c3957b873175ef82c33635dcd13a\") " pod="kube-system/kube-apiserver-ip-172-31-24-181"
Mar 6 00:57:09.061553 kubelet[2955]: E0306 00:57:09.061352 2955 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.181:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.181:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-181.189a1a948e3e765a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-181,UID:ip-172-31-24-181,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-181,},FirstTimestamp:2026-03-06 00:57:08.667651674 +0000 UTC m=+2.757267207,LastTimestamp:2026-03-06 00:57:08.667651674 +0000 UTC m=+2.757267207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-181,}"
Mar 6 00:57:09.131124 kubelet[2955]: I0306 00:57:09.130803 2955 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-181"
Mar 6 00:57:09.131621 kubelet[2955]: E0306 00:57:09.131515 2955 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.181:6443/api/v1/nodes\": dial tcp 172.31.24.181:6443: connect: connection refused" node="ip-172-31-24-181"
Mar 6 00:57:09.213786 containerd[2011]: time="2026-03-06T00:57:09.213665620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-181,Uid:ef63c3957b873175ef82c33635dcd13a,Namespace:kube-system,Attempt:0,}"
Mar 6 00:57:09.229073 containerd[2011]: time="2026-03-06T00:57:09.227356336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-181,Uid:83cb99b342e5a5068569985c70d58960,Namespace:kube-system,Attempt:0,}"
Mar 6 00:57:09.243787 containerd[2011]: time="2026-03-06T00:57:09.243436408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-181,Uid:b85f8b4467fb727b5bd9c7fbafc06333,Namespace:kube-system,Attempt:0,}"
Mar 6 00:57:09.250925 containerd[2011]: time="2026-03-06T00:57:09.250867348Z" level=info msg="connecting to shim 829355da540fe02201298fbdb90fa23c4aee47c52236c9fbb4c2f48740d768dd" address="unix:///run/containerd/s/48b7fdb246efa10b56c16bb9704ea5774ad192d864c4805bb82d4baf377d0544" namespace=k8s.io protocol=ttrpc version=3
Mar 6 00:57:09.295792 kubelet[2955]: E0306 00:57:09.295738 2955 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-181?timeout=10s\": dial tcp 172.31.24.181:6443: connect: connection refused" interval="800ms"
Mar 6 00:57:09.340115 containerd[2011]: time="2026-03-06T00:57:09.340029473Z" level=info msg="connecting to shim dd0a7b863ccf95c72bc534b6a6c4c65c3d9f6e52c5441f6df59ade4df92620d1" address="unix:///run/containerd/s/037c29725dc1f0b55b57545e2ac9d1cc39e5fb8038f30f795f66e130a26a8f85" namespace=k8s.io protocol=ttrpc version=3
Mar 6 00:57:09.342829 systemd[1]: Started cri-containerd-829355da540fe02201298fbdb90fa23c4aee47c52236c9fbb4c2f48740d768dd.scope - libcontainer container 
829355da540fe02201298fbdb90fa23c4aee47c52236c9fbb4c2f48740d768dd. Mar 6 00:57:09.343322 containerd[2011]: time="2026-03-06T00:57:09.343182929Z" level=info msg="connecting to shim a4b57e8ffcf4bdbc667b6b44ad82ff1df92c2b9fd0623518ee90afcebeea0327" address="unix:///run/containerd/s/8c306c8b206022687567a36c67dd04113a9051c187733079737007eecb8ebb48" namespace=k8s.io protocol=ttrpc version=3 Mar 6 00:57:09.434774 systemd[1]: Started cri-containerd-a4b57e8ffcf4bdbc667b6b44ad82ff1df92c2b9fd0623518ee90afcebeea0327.scope - libcontainer container a4b57e8ffcf4bdbc667b6b44ad82ff1df92c2b9fd0623518ee90afcebeea0327. Mar 6 00:57:09.444153 systemd[1]: Started cri-containerd-dd0a7b863ccf95c72bc534b6a6c4c65c3d9f6e52c5441f6df59ade4df92620d1.scope - libcontainer container dd0a7b863ccf95c72bc534b6a6c4c65c3d9f6e52c5441f6df59ade4df92620d1. Mar 6 00:57:09.472121 kubelet[2955]: E0306 00:57:09.471978 2955 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.181:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 6 00:57:09.478506 containerd[2011]: time="2026-03-06T00:57:09.478324302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-181,Uid:ef63c3957b873175ef82c33635dcd13a,Namespace:kube-system,Attempt:0,} returns sandbox id \"829355da540fe02201298fbdb90fa23c4aee47c52236c9fbb4c2f48740d768dd\"" Mar 6 00:57:09.498508 containerd[2011]: time="2026-03-06T00:57:09.494947170Z" level=info msg="CreateContainer within sandbox \"829355da540fe02201298fbdb90fa23c4aee47c52236c9fbb4c2f48740d768dd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 6 00:57:09.514431 containerd[2011]: time="2026-03-06T00:57:09.514270554Z" level=info msg="Container baa59875b635ed851b22f7a9cd3c4deaca1ecef6c9854e74691b8126214972b6: CDI 
devices from CRI Config.CDIDevices: []" Mar 6 00:57:09.530399 containerd[2011]: time="2026-03-06T00:57:09.530339310Z" level=info msg="CreateContainer within sandbox \"829355da540fe02201298fbdb90fa23c4aee47c52236c9fbb4c2f48740d768dd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"baa59875b635ed851b22f7a9cd3c4deaca1ecef6c9854e74691b8126214972b6\"" Mar 6 00:57:09.534487 containerd[2011]: time="2026-03-06T00:57:09.534031470Z" level=info msg="StartContainer for \"baa59875b635ed851b22f7a9cd3c4deaca1ecef6c9854e74691b8126214972b6\"" Mar 6 00:57:09.538622 containerd[2011]: time="2026-03-06T00:57:09.538547658Z" level=info msg="connecting to shim baa59875b635ed851b22f7a9cd3c4deaca1ecef6c9854e74691b8126214972b6" address="unix:///run/containerd/s/48b7fdb246efa10b56c16bb9704ea5774ad192d864c4805bb82d4baf377d0544" protocol=ttrpc version=3 Mar 6 00:57:09.539428 kubelet[2955]: I0306 00:57:09.539374 2955 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-181" Mar 6 00:57:09.540323 kubelet[2955]: E0306 00:57:09.540269 2955 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.181:6443/api/v1/nodes\": dial tcp 172.31.24.181:6443: connect: connection refused" node="ip-172-31-24-181" Mar 6 00:57:09.588620 kubelet[2955]: E0306 00:57:09.588383 2955 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-181&limit=500&resourceVersion=0\": dial tcp 172.31.24.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 6 00:57:09.590921 containerd[2011]: time="2026-03-06T00:57:09.590848530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-181,Uid:83cb99b342e5a5068569985c70d58960,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"dd0a7b863ccf95c72bc534b6a6c4c65c3d9f6e52c5441f6df59ade4df92620d1\"" Mar 6 00:57:09.603098 systemd[1]: Started cri-containerd-baa59875b635ed851b22f7a9cd3c4deaca1ecef6c9854e74691b8126214972b6.scope - libcontainer container baa59875b635ed851b22f7a9cd3c4deaca1ecef6c9854e74691b8126214972b6. Mar 6 00:57:09.605972 containerd[2011]: time="2026-03-06T00:57:09.605919234Z" level=info msg="CreateContainer within sandbox \"dd0a7b863ccf95c72bc534b6a6c4c65c3d9f6e52c5441f6df59ade4df92620d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 6 00:57:09.629542 containerd[2011]: time="2026-03-06T00:57:09.629164506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-181,Uid:b85f8b4467fb727b5bd9c7fbafc06333,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4b57e8ffcf4bdbc667b6b44ad82ff1df92c2b9fd0623518ee90afcebeea0327\"" Mar 6 00:57:09.632364 containerd[2011]: time="2026-03-06T00:57:09.632287134Z" level=info msg="Container 5051f2c2186922242b72147943513317808fa3a65b40d2a74ebf36b722685b9d: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:57:09.636319 containerd[2011]: time="2026-03-06T00:57:09.636178650Z" level=info msg="CreateContainer within sandbox \"a4b57e8ffcf4bdbc667b6b44ad82ff1df92c2b9fd0623518ee90afcebeea0327\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 6 00:57:09.652745 containerd[2011]: time="2026-03-06T00:57:09.652681506Z" level=info msg="CreateContainer within sandbox \"dd0a7b863ccf95c72bc534b6a6c4c65c3d9f6e52c5441f6df59ade4df92620d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5051f2c2186922242b72147943513317808fa3a65b40d2a74ebf36b722685b9d\"" Mar 6 00:57:09.656830 containerd[2011]: time="2026-03-06T00:57:09.656748439Z" level=info msg="StartContainer for \"5051f2c2186922242b72147943513317808fa3a65b40d2a74ebf36b722685b9d\"" Mar 6 00:57:09.657944 containerd[2011]: time="2026-03-06T00:57:09.657852163Z" level=info msg="Container 
5dbec2e9285ed282c9f502d836db5445942e901f2bfebf6d59cd3b7f7ba251a9: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:57:09.664907 containerd[2011]: time="2026-03-06T00:57:09.664806511Z" level=info msg="connecting to shim 5051f2c2186922242b72147943513317808fa3a65b40d2a74ebf36b722685b9d" address="unix:///run/containerd/s/037c29725dc1f0b55b57545e2ac9d1cc39e5fb8038f30f795f66e130a26a8f85" protocol=ttrpc version=3 Mar 6 00:57:09.679935 containerd[2011]: time="2026-03-06T00:57:09.679861399Z" level=info msg="CreateContainer within sandbox \"a4b57e8ffcf4bdbc667b6b44ad82ff1df92c2b9fd0623518ee90afcebeea0327\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5dbec2e9285ed282c9f502d836db5445942e901f2bfebf6d59cd3b7f7ba251a9\"" Mar 6 00:57:09.680981 containerd[2011]: time="2026-03-06T00:57:09.680625583Z" level=info msg="StartContainer for \"5dbec2e9285ed282c9f502d836db5445942e901f2bfebf6d59cd3b7f7ba251a9\"" Mar 6 00:57:09.686887 containerd[2011]: time="2026-03-06T00:57:09.683156311Z" level=info msg="connecting to shim 5dbec2e9285ed282c9f502d836db5445942e901f2bfebf6d59cd3b7f7ba251a9" address="unix:///run/containerd/s/8c306c8b206022687567a36c67dd04113a9051c187733079737007eecb8ebb48" protocol=ttrpc version=3 Mar 6 00:57:09.718927 systemd[1]: Started cri-containerd-5051f2c2186922242b72147943513317808fa3a65b40d2a74ebf36b722685b9d.scope - libcontainer container 5051f2c2186922242b72147943513317808fa3a65b40d2a74ebf36b722685b9d. Mar 6 00:57:09.750110 systemd[1]: Started cri-containerd-5dbec2e9285ed282c9f502d836db5445942e901f2bfebf6d59cd3b7f7ba251a9.scope - libcontainer container 5dbec2e9285ed282c9f502d836db5445942e901f2bfebf6d59cd3b7f7ba251a9. 
Mar 6 00:57:09.771833 containerd[2011]: time="2026-03-06T00:57:09.771747775Z" level=info msg="StartContainer for \"baa59875b635ed851b22f7a9cd3c4deaca1ecef6c9854e74691b8126214972b6\" returns successfully" Mar 6 00:57:09.793721 kubelet[2955]: E0306 00:57:09.793672 2955 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.181:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 6 00:57:09.820916 kubelet[2955]: E0306 00:57:09.820237 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-181\" not found" node="ip-172-31-24-181" Mar 6 00:57:09.946665 containerd[2011]: time="2026-03-06T00:57:09.946509656Z" level=info msg="StartContainer for \"5051f2c2186922242b72147943513317808fa3a65b40d2a74ebf36b722685b9d\" returns successfully" Mar 6 00:57:09.952575 containerd[2011]: time="2026-03-06T00:57:09.952509788Z" level=info msg="StartContainer for \"5dbec2e9285ed282c9f502d836db5445942e901f2bfebf6d59cd3b7f7ba251a9\" returns successfully" Mar 6 00:57:10.342844 kubelet[2955]: I0306 00:57:10.342795 2955 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-181" Mar 6 00:57:10.824441 kubelet[2955]: E0306 00:57:10.824380 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-181\" not found" node="ip-172-31-24-181" Mar 6 00:57:10.828430 kubelet[2955]: E0306 00:57:10.828379 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-181\" not found" node="ip-172-31-24-181" Mar 6 00:57:10.829830 kubelet[2955]: E0306 00:57:10.829781 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"ip-172-31-24-181\" not found" node="ip-172-31-24-181" Mar 6 00:57:11.831437 kubelet[2955]: E0306 00:57:11.831380 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-181\" not found" node="ip-172-31-24-181" Mar 6 00:57:11.833546 kubelet[2955]: E0306 00:57:11.833495 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-181\" not found" node="ip-172-31-24-181" Mar 6 00:57:12.777549 kubelet[2955]: E0306 00:57:12.776650 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-181\" not found" node="ip-172-31-24-181" Mar 6 00:57:12.835721 kubelet[2955]: E0306 00:57:12.835665 2955 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-181\" not found" node="ip-172-31-24-181" Mar 6 00:57:13.572578 kubelet[2955]: E0306 00:57:13.572509 2955 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-181\" not found" node="ip-172-31-24-181" Mar 6 00:57:13.653729 kubelet[2955]: I0306 00:57:13.653640 2955 apiserver.go:52] "Watching apiserver" Mar 6 00:57:13.686374 kubelet[2955]: I0306 00:57:13.686204 2955 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 00:57:13.708367 kubelet[2955]: I0306 00:57:13.708290 2955 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-181" Mar 6 00:57:13.708367 kubelet[2955]: E0306 00:57:13.708367 2955 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-24-181\": node \"ip-172-31-24-181\" not found" Mar 6 00:57:13.786483 kubelet[2955]: I0306 00:57:13.786388 2955 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-181" Mar 6 
00:57:13.834212 kubelet[2955]: I0306 00:57:13.834048 2955 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-181" Mar 6 00:57:13.851224 kubelet[2955]: E0306 00:57:13.851105 2955 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-181\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-181" Mar 6 00:57:13.851224 kubelet[2955]: I0306 00:57:13.851206 2955 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-181" Mar 6 00:57:13.853574 kubelet[2955]: E0306 00:57:13.852366 2955 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-181\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-181" Mar 6 00:57:13.860860 kubelet[2955]: E0306 00:57:13.860772 2955 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-181\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-181" Mar 6 00:57:13.860860 kubelet[2955]: I0306 00:57:13.860861 2955 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-181" Mar 6 00:57:13.867767 kubelet[2955]: E0306 00:57:13.867697 2955 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-181\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-181" Mar 6 00:57:15.914024 kubelet[2955]: I0306 00:57:15.913707 2955 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-181" Mar 6 00:57:17.130612 systemd[1]: Reload requested from client PID 3242 ('systemctl') (unit session-7.scope)... Mar 6 00:57:17.130645 systemd[1]: Reloading... 
Mar 6 00:57:17.460506 zram_generator::config[3288]: No configuration found. Mar 6 00:57:18.101181 systemd[1]: Reloading finished in 969 ms. Mar 6 00:57:18.174952 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 00:57:18.193092 systemd[1]: kubelet.service: Deactivated successfully. Mar 6 00:57:18.193682 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 00:57:18.193791 systemd[1]: kubelet.service: Consumed 3.566s CPU time, 128M memory peak. Mar 6 00:57:18.201879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 6 00:57:18.610712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 6 00:57:18.629093 (kubelet)[3346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 6 00:57:18.715975 kubelet[3346]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 6 00:57:18.718630 kubelet[3346]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 6 00:57:18.720488 kubelet[3346]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 6 00:57:18.720488 kubelet[3346]: I0306 00:57:18.718994 3346 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 6 00:57:18.737326 kubelet[3346]: I0306 00:57:18.737265 3346 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 6 00:57:18.737582 kubelet[3346]: I0306 00:57:18.737560 3346 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 6 00:57:18.738180 kubelet[3346]: I0306 00:57:18.738143 3346 server.go:956] "Client rotation is on, will bootstrap in background" Mar 6 00:57:18.741211 kubelet[3346]: I0306 00:57:18.741159 3346 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 6 00:57:18.748936 kubelet[3346]: I0306 00:57:18.748845 3346 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 6 00:57:18.764138 kubelet[3346]: I0306 00:57:18.763878 3346 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 6 00:57:18.772567 kubelet[3346]: I0306 00:57:18.772502 3346 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 6 00:57:18.772998 kubelet[3346]: I0306 00:57:18.772931 3346 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 6 00:57:18.773426 kubelet[3346]: I0306 00:57:18.772986 3346 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-181","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 6 00:57:18.773426 kubelet[3346]: I0306 00:57:18.773326 3346 topology_manager.go:138] "Creating topology manager with none policy" Mar 6 
00:57:18.773426 kubelet[3346]: I0306 00:57:18.773349 3346 container_manager_linux.go:303] "Creating device plugin manager" Mar 6 00:57:18.774975 kubelet[3346]: I0306 00:57:18.774538 3346 state_mem.go:36] "Initialized new in-memory state store" Mar 6 00:57:18.774975 kubelet[3346]: I0306 00:57:18.774862 3346 kubelet.go:480] "Attempting to sync node with API server" Mar 6 00:57:18.774975 kubelet[3346]: I0306 00:57:18.774893 3346 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 6 00:57:18.775252 kubelet[3346]: I0306 00:57:18.775231 3346 kubelet.go:386] "Adding apiserver pod source" Mar 6 00:57:18.775385 kubelet[3346]: I0306 00:57:18.775364 3346 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 6 00:57:18.792174 kubelet[3346]: I0306 00:57:18.791834 3346 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 6 00:57:18.799363 kubelet[3346]: I0306 00:57:18.795756 3346 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 6 00:57:18.820382 kubelet[3346]: I0306 00:57:18.820333 3346 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 6 00:57:18.820559 kubelet[3346]: I0306 00:57:18.820403 3346 server.go:1289] "Started kubelet" Mar 6 00:57:18.821551 kubelet[3346]: I0306 00:57:18.820621 3346 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 6 00:57:18.821551 kubelet[3346]: I0306 00:57:18.821082 3346 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 6 00:57:18.832607 kubelet[3346]: I0306 00:57:18.832552 3346 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 6 00:57:18.837907 kubelet[3346]: I0306 00:57:18.837824 3346 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 6 00:57:18.840991 
kubelet[3346]: I0306 00:57:18.840915 3346 server.go:317] "Adding debug handlers to kubelet server" Mar 6 00:57:18.852066 kubelet[3346]: I0306 00:57:18.851059 3346 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 6 00:57:18.855605 kubelet[3346]: I0306 00:57:18.855121 3346 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 6 00:57:18.855605 kubelet[3346]: E0306 00:57:18.855322 3346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-181\" not found" Mar 6 00:57:18.857945 kubelet[3346]: I0306 00:57:18.857820 3346 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 6 00:57:18.858107 kubelet[3346]: I0306 00:57:18.858074 3346 reconciler.go:26] "Reconciler: start to sync state" Mar 6 00:57:18.871662 kubelet[3346]: I0306 00:57:18.871514 3346 factory.go:223] Registration of the systemd container factory successfully Mar 6 00:57:18.875646 kubelet[3346]: I0306 00:57:18.873818 3346 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 6 00:57:18.881510 kubelet[3346]: I0306 00:57:18.881334 3346 factory.go:223] Registration of the containerd container factory successfully Mar 6 00:57:18.893857 kubelet[3346]: E0306 00:57:18.893798 3346 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 6 00:57:18.930603 kubelet[3346]: I0306 00:57:18.930513 3346 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 6 00:57:18.935553 kubelet[3346]: I0306 00:57:18.934702 3346 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 6 00:57:18.935553 kubelet[3346]: I0306 00:57:18.934795 3346 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 6 00:57:18.935553 kubelet[3346]: I0306 00:57:18.934856 3346 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 6 00:57:18.935553 kubelet[3346]: I0306 00:57:18.934873 3346 kubelet.go:2436] "Starting kubelet main sync loop" Mar 6 00:57:18.935553 kubelet[3346]: E0306 00:57:18.935043 3346 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 6 00:57:19.018796 kubelet[3346]: I0306 00:57:19.018678 3346 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 6 00:57:19.018796 kubelet[3346]: I0306 00:57:19.018715 3346 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 6 00:57:19.018796 kubelet[3346]: I0306 00:57:19.018752 3346 state_mem.go:36] "Initialized new in-memory state store" Mar 6 00:57:19.018796 kubelet[3346]: I0306 00:57:19.019031 3346 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 6 00:57:19.018796 kubelet[3346]: I0306 00:57:19.019052 3346 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 6 00:57:19.018796 kubelet[3346]: I0306 00:57:19.019084 3346 policy_none.go:49] "None policy: Start" Mar 6 00:57:19.018796 kubelet[3346]: I0306 00:57:19.019103 3346 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 6 00:57:19.018796 kubelet[3346]: I0306 00:57:19.019123 3346 state_mem.go:35] "Initializing new in-memory state store" Mar 6 00:57:19.018796 kubelet[3346]: I0306 00:57:19.019285 3346 state_mem.go:75] "Updated machine memory state" Mar 6 00:57:19.030175 kubelet[3346]: E0306 00:57:19.030124 3346 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 6 00:57:19.030574 kubelet[3346]: I0306 00:57:19.030446 
3346 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 6 00:57:19.030765 kubelet[3346]: I0306 00:57:19.030576 3346 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 6 00:57:19.033120 kubelet[3346]: I0306 00:57:19.033081 3346 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 6 00:57:19.039954 kubelet[3346]: E0306 00:57:19.039596 3346 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 6 00:57:19.050276 kubelet[3346]: I0306 00:57:19.050193 3346 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-181" Mar 6 00:57:19.064105 kubelet[3346]: I0306 00:57:19.064017 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b85f8b4467fb727b5bd9c7fbafc06333-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-181\" (UID: \"b85f8b4467fb727b5bd9c7fbafc06333\") " pod="kube-system/kube-scheduler-ip-172-31-24-181" Mar 6 00:57:19.064738 kubelet[3346]: I0306 00:57:19.064681 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef63c3957b873175ef82c33635dcd13a-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-181\" (UID: \"ef63c3957b873175ef82c33635dcd13a\") " pod="kube-system/kube-apiserver-ip-172-31-24-181" Mar 6 00:57:19.064902 kubelet[3346]: I0306 00:57:19.051359 3346 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-181" Mar 6 00:57:19.065017 kubelet[3346]: I0306 00:57:19.053557 3346 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-181" Mar 6 00:57:19.068497 kubelet[3346]: I0306 00:57:19.066245 3346 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef63c3957b873175ef82c33635dcd13a-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-181\" (UID: \"ef63c3957b873175ef82c33635dcd13a\") " pod="kube-system/kube-apiserver-ip-172-31-24-181" Mar 6 00:57:19.068497 kubelet[3346]: I0306 00:57:19.067047 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/83cb99b342e5a5068569985c70d58960-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-181\" (UID: \"83cb99b342e5a5068569985c70d58960\") " pod="kube-system/kube-controller-manager-ip-172-31-24-181" Mar 6 00:57:19.068497 kubelet[3346]: I0306 00:57:19.067136 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83cb99b342e5a5068569985c70d58960-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-181\" (UID: \"83cb99b342e5a5068569985c70d58960\") " pod="kube-system/kube-controller-manager-ip-172-31-24-181" Mar 6 00:57:19.069919 kubelet[3346]: I0306 00:57:19.068832 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef63c3957b873175ef82c33635dcd13a-ca-certs\") pod \"kube-apiserver-ip-172-31-24-181\" (UID: \"ef63c3957b873175ef82c33635dcd13a\") " pod="kube-system/kube-apiserver-ip-172-31-24-181" Mar 6 00:57:19.069919 kubelet[3346]: I0306 00:57:19.069899 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83cb99b342e5a5068569985c70d58960-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-181\" (UID: \"83cb99b342e5a5068569985c70d58960\") " 
pod="kube-system/kube-controller-manager-ip-172-31-24-181" Mar 6 00:57:19.070167 kubelet[3346]: I0306 00:57:19.069941 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83cb99b342e5a5068569985c70d58960-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-181\" (UID: \"83cb99b342e5a5068569985c70d58960\") " pod="kube-system/kube-controller-manager-ip-172-31-24-181" Mar 6 00:57:19.070167 kubelet[3346]: I0306 00:57:19.069979 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/83cb99b342e5a5068569985c70d58960-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-181\" (UID: \"83cb99b342e5a5068569985c70d58960\") " pod="kube-system/kube-controller-manager-ip-172-31-24-181" Mar 6 00:57:19.090029 kubelet[3346]: E0306 00:57:19.089972 3346 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-181\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-24-181" Mar 6 00:57:19.105971 update_engine[1978]: I20260306 00:57:19.105686 1978 update_attempter.cc:509] Updating boot flags... 
Mar 6 00:57:19.169153 kubelet[3346]: I0306 00:57:19.168092 3346 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-181" Mar 6 00:57:19.189892 kubelet[3346]: I0306 00:57:19.189838 3346 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-181" Mar 6 00:57:19.190024 kubelet[3346]: I0306 00:57:19.189959 3346 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-181" Mar 6 00:57:19.783196 kubelet[3346]: I0306 00:57:19.783124 3346 apiserver.go:52] "Watching apiserver" Mar 6 00:57:19.861084 kubelet[3346]: I0306 00:57:19.859818 3346 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 6 00:57:19.985368 kubelet[3346]: I0306 00:57:19.985306 3346 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-181" Mar 6 00:57:19.992500 kubelet[3346]: I0306 00:57:19.988260 3346 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-181" Mar 6 00:57:19.993016 kubelet[3346]: I0306 00:57:19.992938 3346 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-181" Mar 6 00:57:20.065310 kubelet[3346]: E0306 00:57:20.064865 3346 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-181\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-24-181" Mar 6 00:57:20.072503 kubelet[3346]: E0306 00:57:20.071381 3346 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-181\" already exists" pod="kube-system/kube-scheduler-ip-172-31-24-181" Mar 6 00:57:20.089759 kubelet[3346]: E0306 00:57:20.071734 3346 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-181\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-181" Mar 6 00:57:20.236098 kubelet[3346]: I0306 00:57:20.235995 3346 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-181" podStartSLOduration=1.235971471 podStartE2EDuration="1.235971471s" podCreationTimestamp="2026-03-06 00:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 00:57:20.194788539 +0000 UTC m=+1.555110213" watchObservedRunningTime="2026-03-06 00:57:20.235971471 +0000 UTC m=+1.596293049" Mar 6 00:57:20.306045 kubelet[3346]: I0306 00:57:20.305932 3346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-181" podStartSLOduration=1.305910819 podStartE2EDuration="1.305910819s" podCreationTimestamp="2026-03-06 00:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 00:57:20.239961783 +0000 UTC m=+1.600283433" watchObservedRunningTime="2026-03-06 00:57:20.305910819 +0000 UTC m=+1.666232397" Mar 6 00:57:20.579074 kubelet[3346]: I0306 00:57:20.578589 3346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-181" podStartSLOduration=5.578562725 podStartE2EDuration="5.578562725s" podCreationTimestamp="2026-03-06 00:57:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 00:57:20.310758903 +0000 UTC m=+1.671080481" watchObservedRunningTime="2026-03-06 00:57:20.578562725 +0000 UTC m=+1.938884303" Mar 6 00:57:22.031380 kubelet[3346]: I0306 00:57:22.031329 3346 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 6 00:57:22.032667 containerd[2011]: time="2026-03-06T00:57:22.031879300Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 6 00:57:22.033842 kubelet[3346]: I0306 00:57:22.033329 3346 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 6 00:57:22.919766 systemd[1]: Created slice kubepods-besteffort-pod984971f3_00e0_49c0_b802_b3488a5087ef.slice - libcontainer container kubepods-besteffort-pod984971f3_00e0_49c0_b802_b3488a5087ef.slice. Mar 6 00:57:23.013756 kubelet[3346]: I0306 00:57:23.013503 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/984971f3-00e0-49c0-b802-b3488a5087ef-lib-modules\") pod \"kube-proxy-qvf45\" (UID: \"984971f3-00e0-49c0-b802-b3488a5087ef\") " pod="kube-system/kube-proxy-qvf45" Mar 6 00:57:23.013756 kubelet[3346]: I0306 00:57:23.013586 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7584\" (UniqueName: \"kubernetes.io/projected/984971f3-00e0-49c0-b802-b3488a5087ef-kube-api-access-m7584\") pod \"kube-proxy-qvf45\" (UID: \"984971f3-00e0-49c0-b802-b3488a5087ef\") " pod="kube-system/kube-proxy-qvf45" Mar 6 00:57:23.013756 kubelet[3346]: I0306 00:57:23.013644 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/984971f3-00e0-49c0-b802-b3488a5087ef-kube-proxy\") pod \"kube-proxy-qvf45\" (UID: \"984971f3-00e0-49c0-b802-b3488a5087ef\") " pod="kube-system/kube-proxy-qvf45" Mar 6 00:57:23.013756 kubelet[3346]: I0306 00:57:23.013684 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/984971f3-00e0-49c0-b802-b3488a5087ef-xtables-lock\") pod \"kube-proxy-qvf45\" (UID: \"984971f3-00e0-49c0-b802-b3488a5087ef\") " pod="kube-system/kube-proxy-qvf45" Mar 6 00:57:23.235891 containerd[2011]: time="2026-03-06T00:57:23.235704714Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-qvf45,Uid:984971f3-00e0-49c0-b802-b3488a5087ef,Namespace:kube-system,Attempt:0,}" Mar 6 00:57:23.291925 containerd[2011]: time="2026-03-06T00:57:23.290879226Z" level=info msg="connecting to shim 2295f9f97740d6e44f9d81e5a058607feff2f6ddc83b531bcccd5cb14dccc4da" address="unix:///run/containerd/s/8146fbc0ef3b7fd918d65057f5e7010d772fc285b3d7974efa138d6094722b62" namespace=k8s.io protocol=ttrpc version=3 Mar 6 00:57:23.296554 systemd[1]: Created slice kubepods-besteffort-podd782ff88_4444_46ee_aba6_b22f365d1f22.slice - libcontainer container kubepods-besteffort-podd782ff88_4444_46ee_aba6_b22f365d1f22.slice. Mar 6 00:57:23.316973 kubelet[3346]: I0306 00:57:23.316889 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk97z\" (UniqueName: \"kubernetes.io/projected/d782ff88-4444-46ee-aba6-b22f365d1f22-kube-api-access-wk97z\") pod \"tigera-operator-6bf85f8dd-szc59\" (UID: \"d782ff88-4444-46ee-aba6-b22f365d1f22\") " pod="tigera-operator/tigera-operator-6bf85f8dd-szc59" Mar 6 00:57:23.316973 kubelet[3346]: I0306 00:57:23.316978 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d782ff88-4444-46ee-aba6-b22f365d1f22-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-szc59\" (UID: \"d782ff88-4444-46ee-aba6-b22f365d1f22\") " pod="tigera-operator/tigera-operator-6bf85f8dd-szc59" Mar 6 00:57:23.369805 systemd[1]: Started cri-containerd-2295f9f97740d6e44f9d81e5a058607feff2f6ddc83b531bcccd5cb14dccc4da.scope - libcontainer container 2295f9f97740d6e44f9d81e5a058607feff2f6ddc83b531bcccd5cb14dccc4da. 
Mar 6 00:57:23.431245 containerd[2011]: time="2026-03-06T00:57:23.430891471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qvf45,Uid:984971f3-00e0-49c0-b802-b3488a5087ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"2295f9f97740d6e44f9d81e5a058607feff2f6ddc83b531bcccd5cb14dccc4da\"" Mar 6 00:57:23.448525 containerd[2011]: time="2026-03-06T00:57:23.448360063Z" level=info msg="CreateContainer within sandbox \"2295f9f97740d6e44f9d81e5a058607feff2f6ddc83b531bcccd5cb14dccc4da\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 6 00:57:23.478561 containerd[2011]: time="2026-03-06T00:57:23.477833887Z" level=info msg="Container 4597867b45276acab53ecb3b15919ddc472640c93f368b7a8c415782be996bba: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:57:23.498799 containerd[2011]: time="2026-03-06T00:57:23.498675835Z" level=info msg="CreateContainer within sandbox \"2295f9f97740d6e44f9d81e5a058607feff2f6ddc83b531bcccd5cb14dccc4da\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4597867b45276acab53ecb3b15919ddc472640c93f368b7a8c415782be996bba\"" Mar 6 00:57:23.499943 containerd[2011]: time="2026-03-06T00:57:23.499897951Z" level=info msg="StartContainer for \"4597867b45276acab53ecb3b15919ddc472640c93f368b7a8c415782be996bba\"" Mar 6 00:57:23.504498 containerd[2011]: time="2026-03-06T00:57:23.504375715Z" level=info msg="connecting to shim 4597867b45276acab53ecb3b15919ddc472640c93f368b7a8c415782be996bba" address="unix:///run/containerd/s/8146fbc0ef3b7fd918d65057f5e7010d772fc285b3d7974efa138d6094722b62" protocol=ttrpc version=3 Mar 6 00:57:23.558861 systemd[1]: Started cri-containerd-4597867b45276acab53ecb3b15919ddc472640c93f368b7a8c415782be996bba.scope - libcontainer container 4597867b45276acab53ecb3b15919ddc472640c93f368b7a8c415782be996bba. 
Mar 6 00:57:23.610108 containerd[2011]: time="2026-03-06T00:57:23.609718412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-szc59,Uid:d782ff88-4444-46ee-aba6-b22f365d1f22,Namespace:tigera-operator,Attempt:0,}" Mar 6 00:57:23.655807 containerd[2011]: time="2026-03-06T00:57:23.655743188Z" level=info msg="connecting to shim 74c3355d897f259a7583ae9cff87e995ac1df2bea3d6e5131ba913cb064e6ac2" address="unix:///run/containerd/s/4423a0485b583b8200de28707f46e3ff5f8de4129a3c510c12f09a902a985e2c" namespace=k8s.io protocol=ttrpc version=3 Mar 6 00:57:23.721667 containerd[2011]: time="2026-03-06T00:57:23.721584896Z" level=info msg="StartContainer for \"4597867b45276acab53ecb3b15919ddc472640c93f368b7a8c415782be996bba\" returns successfully" Mar 6 00:57:23.743816 systemd[1]: Started cri-containerd-74c3355d897f259a7583ae9cff87e995ac1df2bea3d6e5131ba913cb064e6ac2.scope - libcontainer container 74c3355d897f259a7583ae9cff87e995ac1df2bea3d6e5131ba913cb064e6ac2. Mar 6 00:57:23.846423 containerd[2011]: time="2026-03-06T00:57:23.845192601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-szc59,Uid:d782ff88-4444-46ee-aba6-b22f365d1f22,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"74c3355d897f259a7583ae9cff87e995ac1df2bea3d6e5131ba913cb064e6ac2\"" Mar 6 00:57:23.851491 containerd[2011]: time="2026-03-06T00:57:23.851394009Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 6 00:57:24.038816 kubelet[3346]: I0306 00:57:24.038140 3346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qvf45" podStartSLOduration=2.038119362 podStartE2EDuration="2.038119362s" podCreationTimestamp="2026-03-06 00:57:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 00:57:24.038075046 +0000 UTC m=+5.398396636" watchObservedRunningTime="2026-03-06 00:57:24.038119362 
+0000 UTC m=+5.398440940" Mar 6 00:57:24.147668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount285464197.mount: Deactivated successfully. Mar 6 00:57:25.166803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3323663833.mount: Deactivated successfully. Mar 6 00:57:26.497393 containerd[2011]: time="2026-03-06T00:57:26.497314918Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:57:26.498657 containerd[2011]: time="2026-03-06T00:57:26.498259246Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 6 00:57:26.500656 containerd[2011]: time="2026-03-06T00:57:26.500567230Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:57:26.504440 containerd[2011]: time="2026-03-06T00:57:26.504282274Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:57:26.508491 containerd[2011]: time="2026-03-06T00:57:26.507926530Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.656457449s" Mar 6 00:57:26.508491 containerd[2011]: time="2026-03-06T00:57:26.508030342Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 6 00:57:26.524172 containerd[2011]: time="2026-03-06T00:57:26.522936418Z" level=info msg="CreateContainer within sandbox 
\"74c3355d897f259a7583ae9cff87e995ac1df2bea3d6e5131ba913cb064e6ac2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 6 00:57:26.551902 containerd[2011]: time="2026-03-06T00:57:26.551828770Z" level=info msg="Container 24a049a2bead0439dfa4f9bfa04715b8f8bca1c4e857b418a63fc1c7314a619d: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:57:26.553723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1412085223.mount: Deactivated successfully. Mar 6 00:57:26.564009 containerd[2011]: time="2026-03-06T00:57:26.563928082Z" level=info msg="CreateContainer within sandbox \"74c3355d897f259a7583ae9cff87e995ac1df2bea3d6e5131ba913cb064e6ac2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"24a049a2bead0439dfa4f9bfa04715b8f8bca1c4e857b418a63fc1c7314a619d\"" Mar 6 00:57:26.565372 containerd[2011]: time="2026-03-06T00:57:26.565159570Z" level=info msg="StartContainer for \"24a049a2bead0439dfa4f9bfa04715b8f8bca1c4e857b418a63fc1c7314a619d\"" Mar 6 00:57:26.569284 containerd[2011]: time="2026-03-06T00:57:26.569222987Z" level=info msg="connecting to shim 24a049a2bead0439dfa4f9bfa04715b8f8bca1c4e857b418a63fc1c7314a619d" address="unix:///run/containerd/s/4423a0485b583b8200de28707f46e3ff5f8de4129a3c510c12f09a902a985e2c" protocol=ttrpc version=3 Mar 6 00:57:26.618936 systemd[1]: Started cri-containerd-24a049a2bead0439dfa4f9bfa04715b8f8bca1c4e857b418a63fc1c7314a619d.scope - libcontainer container 24a049a2bead0439dfa4f9bfa04715b8f8bca1c4e857b418a63fc1c7314a619d. 
Mar 6 00:57:26.686495 containerd[2011]: time="2026-03-06T00:57:26.685159547Z" level=info msg="StartContainer for \"24a049a2bead0439dfa4f9bfa04715b8f8bca1c4e857b418a63fc1c7314a619d\" returns successfully" Mar 6 00:57:27.052204 kubelet[3346]: I0306 00:57:27.052087 3346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-szc59" podStartSLOduration=1.3901322839999999 podStartE2EDuration="4.052064817s" podCreationTimestamp="2026-03-06 00:57:23 +0000 UTC" firstStartedPulling="2026-03-06 00:57:23.850622949 +0000 UTC m=+5.210944503" lastFinishedPulling="2026-03-06 00:57:26.512555482 +0000 UTC m=+7.872877036" observedRunningTime="2026-03-06 00:57:27.051896685 +0000 UTC m=+8.412218263" watchObservedRunningTime="2026-03-06 00:57:27.052064817 +0000 UTC m=+8.412386395" Mar 6 00:57:33.983196 sudo[2364]: pam_unix(sudo:session): session closed for user root Mar 6 00:57:34.063510 sshd[2363]: Connection closed by 68.220.241.50 port 37922 Mar 6 00:57:34.063718 sshd-session[2360]: pam_unix(sshd:session): session closed for user core Mar 6 00:57:34.075293 systemd-logind[1977]: Session 7 logged out. Waiting for processes to exit. Mar 6 00:57:34.077178 systemd[1]: sshd@6-172.31.24.181:22-68.220.241.50:37922.service: Deactivated successfully. Mar 6 00:57:34.082978 systemd[1]: session-7.scope: Deactivated successfully. Mar 6 00:57:34.086874 systemd[1]: session-7.scope: Consumed 11.855s CPU time, 224.6M memory peak. Mar 6 00:57:34.098280 systemd-logind[1977]: Removed session 7. Mar 6 00:57:47.454724 systemd[1]: Created slice kubepods-besteffort-pod83c5cc7b_5792_4243_b510_471cdfe8983b.slice - libcontainer container kubepods-besteffort-pod83c5cc7b_5792_4243_b510_471cdfe8983b.slice. 
Mar 6 00:57:47.495024 kubelet[3346]: I0306 00:57:47.494969 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c5cc7b-5792-4243-b510-471cdfe8983b-tigera-ca-bundle\") pod \"calico-typha-56c4587657-ckvvf\" (UID: \"83c5cc7b-5792-4243-b510-471cdfe8983b\") " pod="calico-system/calico-typha-56c4587657-ckvvf" Mar 6 00:57:47.496215 kubelet[3346]: I0306 00:57:47.495799 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/83c5cc7b-5792-4243-b510-471cdfe8983b-typha-certs\") pod \"calico-typha-56c4587657-ckvvf\" (UID: \"83c5cc7b-5792-4243-b510-471cdfe8983b\") " pod="calico-system/calico-typha-56c4587657-ckvvf" Mar 6 00:57:47.496215 kubelet[3346]: I0306 00:57:47.495876 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6p6x\" (UniqueName: \"kubernetes.io/projected/83c5cc7b-5792-4243-b510-471cdfe8983b-kube-api-access-q6p6x\") pod \"calico-typha-56c4587657-ckvvf\" (UID: \"83c5cc7b-5792-4243-b510-471cdfe8983b\") " pod="calico-system/calico-typha-56c4587657-ckvvf" Mar 6 00:57:47.748830 systemd[1]: Created slice kubepods-besteffort-podb972a4d5_0823_43d2_916d_a7d1ac6a9616.slice - libcontainer container kubepods-besteffort-podb972a4d5_0823_43d2_916d_a7d1ac6a9616.slice. 
Mar 6 00:57:47.771286 containerd[2011]: time="2026-03-06T00:57:47.771210440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56c4587657-ckvvf,Uid:83c5cc7b-5792-4243-b510-471cdfe8983b,Namespace:calico-system,Attempt:0,}" Mar 6 00:57:47.798890 kubelet[3346]: I0306 00:57:47.798811 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/b972a4d5-0823-43d2-916d-a7d1ac6a9616-bpffs\") pod \"calico-node-g2qw9\" (UID: \"b972a4d5-0823-43d2-916d-a7d1ac6a9616\") " pod="calico-system/calico-node-g2qw9" Mar 6 00:57:47.799061 kubelet[3346]: I0306 00:57:47.798998 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/b972a4d5-0823-43d2-916d-a7d1ac6a9616-nodeproc\") pod \"calico-node-g2qw9\" (UID: \"b972a4d5-0823-43d2-916d-a7d1ac6a9616\") " pod="calico-system/calico-node-g2qw9" Mar 6 00:57:47.799143 kubelet[3346]: I0306 00:57:47.799102 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47pvq\" (UniqueName: \"kubernetes.io/projected/b972a4d5-0823-43d2-916d-a7d1ac6a9616-kube-api-access-47pvq\") pod \"calico-node-g2qw9\" (UID: \"b972a4d5-0823-43d2-916d-a7d1ac6a9616\") " pod="calico-system/calico-node-g2qw9" Mar 6 00:57:47.799326 kubelet[3346]: I0306 00:57:47.799284 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b972a4d5-0823-43d2-916d-a7d1ac6a9616-xtables-lock\") pod \"calico-node-g2qw9\" (UID: \"b972a4d5-0823-43d2-916d-a7d1ac6a9616\") " pod="calico-system/calico-node-g2qw9" Mar 6 00:57:47.800255 kubelet[3346]: I0306 00:57:47.799391 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/b972a4d5-0823-43d2-916d-a7d1ac6a9616-cni-bin-dir\") pod \"calico-node-g2qw9\" (UID: \"b972a4d5-0823-43d2-916d-a7d1ac6a9616\") " pod="calico-system/calico-node-g2qw9" Mar 6 00:57:47.800461 kubelet[3346]: I0306 00:57:47.800404 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b972a4d5-0823-43d2-916d-a7d1ac6a9616-node-certs\") pod \"calico-node-g2qw9\" (UID: \"b972a4d5-0823-43d2-916d-a7d1ac6a9616\") " pod="calico-system/calico-node-g2qw9" Mar 6 00:57:47.800723 kubelet[3346]: I0306 00:57:47.800671 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b972a4d5-0823-43d2-916d-a7d1ac6a9616-policysync\") pod \"calico-node-g2qw9\" (UID: \"b972a4d5-0823-43d2-916d-a7d1ac6a9616\") " pod="calico-system/calico-node-g2qw9" Mar 6 00:57:47.800850 kubelet[3346]: I0306 00:57:47.800772 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b972a4d5-0823-43d2-916d-a7d1ac6a9616-flexvol-driver-host\") pod \"calico-node-g2qw9\" (UID: \"b972a4d5-0823-43d2-916d-a7d1ac6a9616\") " pod="calico-system/calico-node-g2qw9" Mar 6 00:57:47.800912 kubelet[3346]: I0306 00:57:47.800850 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b972a4d5-0823-43d2-916d-a7d1ac6a9616-tigera-ca-bundle\") pod \"calico-node-g2qw9\" (UID: \"b972a4d5-0823-43d2-916d-a7d1ac6a9616\") " pod="calico-system/calico-node-g2qw9" Mar 6 00:57:47.800912 kubelet[3346]: I0306 00:57:47.800894 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/b972a4d5-0823-43d2-916d-a7d1ac6a9616-sys-fs\") pod 
\"calico-node-g2qw9\" (UID: \"b972a4d5-0823-43d2-916d-a7d1ac6a9616\") " pod="calico-system/calico-node-g2qw9" Mar 6 00:57:47.801761 kubelet[3346]: I0306 00:57:47.801626 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b972a4d5-0823-43d2-916d-a7d1ac6a9616-cni-net-dir\") pod \"calico-node-g2qw9\" (UID: \"b972a4d5-0823-43d2-916d-a7d1ac6a9616\") " pod="calico-system/calico-node-g2qw9" Mar 6 00:57:47.801761 kubelet[3346]: I0306 00:57:47.801719 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b972a4d5-0823-43d2-916d-a7d1ac6a9616-lib-modules\") pod \"calico-node-g2qw9\" (UID: \"b972a4d5-0823-43d2-916d-a7d1ac6a9616\") " pod="calico-system/calico-node-g2qw9" Mar 6 00:57:47.803642 kubelet[3346]: I0306 00:57:47.801871 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b972a4d5-0823-43d2-916d-a7d1ac6a9616-cni-log-dir\") pod \"calico-node-g2qw9\" (UID: \"b972a4d5-0823-43d2-916d-a7d1ac6a9616\") " pod="calico-system/calico-node-g2qw9" Mar 6 00:57:47.803642 kubelet[3346]: I0306 00:57:47.801919 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b972a4d5-0823-43d2-916d-a7d1ac6a9616-var-lib-calico\") pod \"calico-node-g2qw9\" (UID: \"b972a4d5-0823-43d2-916d-a7d1ac6a9616\") " pod="calico-system/calico-node-g2qw9" Mar 6 00:57:47.803642 kubelet[3346]: I0306 00:57:47.801992 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b972a4d5-0823-43d2-916d-a7d1ac6a9616-var-run-calico\") pod \"calico-node-g2qw9\" (UID: \"b972a4d5-0823-43d2-916d-a7d1ac6a9616\") " 
pod="calico-system/calico-node-g2qw9" Mar 6 00:57:47.819654 containerd[2011]: time="2026-03-06T00:57:47.819595628Z" level=info msg="connecting to shim 5c98a699be9ad7a5538091e06c12c53daeb243b01e187472a8fac28aba6fff2f" address="unix:///run/containerd/s/8a8f5d18e1adb841eaa96720b8575517d0bb3b250461b91dd78b45d094c0dc3e" namespace=k8s.io protocol=ttrpc version=3 Mar 6 00:57:47.920801 kubelet[3346]: E0306 00:57:47.920758 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:47.921406 kubelet[3346]: W0306 00:57:47.921365 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:47.921926 kubelet[3346]: E0306 00:57:47.921893 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 00:57:47.924908 kubelet[3346]: E0306 00:57:47.924869 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:47.926493 kubelet[3346]: W0306 00:57:47.925157 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:47.926493 kubelet[3346]: E0306 00:57:47.925262 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 00:57:47.948782 kubelet[3346]: E0306 00:57:47.948656 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg2pp" podUID="608e60b2-3521-42e2-85c9-1a14ee77e2b1" Mar 6 00:57:47.964194 systemd[1]: Started cri-containerd-5c98a699be9ad7a5538091e06c12c53daeb243b01e187472a8fac28aba6fff2f.scope - libcontainer container 5c98a699be9ad7a5538091e06c12c53daeb243b01e187472a8fac28aba6fff2f. Mar 6 00:57:47.964723 kubelet[3346]: E0306 00:57:47.964666 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:47.964723 kubelet[3346]: W0306 00:57:47.964713 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:47.965859 kubelet[3346]: E0306 00:57:47.964752 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 00:57:47.967130 kubelet[3346]: E0306 00:57:47.967063 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:47.968498 kubelet[3346]: W0306 00:57:47.967101 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:47.968498 kubelet[3346]: E0306 00:57:47.967730 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 00:57:47.968686 kubelet[3346]: E0306 00:57:47.968597 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:47.968686 kubelet[3346]: W0306 00:57:47.968652 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:47.968807 kubelet[3346]: E0306 00:57:47.968684 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 00:57:47.970211 kubelet[3346]: E0306 00:57:47.969219 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:47.970211 kubelet[3346]: W0306 00:57:47.969282 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:47.970211 kubelet[3346]: E0306 00:57:47.969312 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 00:57:47.970211 kubelet[3346]: E0306 00:57:47.969954 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:47.970211 kubelet[3346]: W0306 00:57:47.969979 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:47.970211 kubelet[3346]: E0306 00:57:47.970010 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 00:57:47.970673 kubelet[3346]: E0306 00:57:47.970371 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:47.970673 kubelet[3346]: W0306 00:57:47.970392 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:47.970673 kubelet[3346]: E0306 00:57:47.970417 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 00:57:48.009438 kubelet[3346]: I0306 00:57:48.006694 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/608e60b2-3521-42e2-85c9-1a14ee77e2b1-registration-dir\") pod \"csi-node-driver-qg2pp\" (UID: \"608e60b2-3521-42e2-85c9-1a14ee77e2b1\") " pod="calico-system/csi-node-driver-qg2pp" 
Mar 6 00:57:48.010378 kubelet[3346]: I0306 00:57:48.010235 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/608e60b2-3521-42e2-85c9-1a14ee77e2b1-kubelet-dir\") pod \"csi-node-driver-qg2pp\" (UID: \"608e60b2-3521-42e2-85c9-1a14ee77e2b1\") " pod="calico-system/csi-node-driver-qg2pp" 
Mar 6 00:57:48.014927 kubelet[3346]: I0306 00:57:48.014745 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/608e60b2-3521-42e2-85c9-1a14ee77e2b1-varrun\") pod \"csi-node-driver-qg2pp\" (UID: \"608e60b2-3521-42e2-85c9-1a14ee77e2b1\") " pod="calico-system/csi-node-driver-qg2pp" 
Mar 6 00:57:48.015781 kubelet[3346]: I0306 00:57:48.015356 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpp84\" (UniqueName: \"kubernetes.io/projected/608e60b2-3521-42e2-85c9-1a14ee77e2b1-kube-api-access-xpp84\") pod \"csi-node-driver-qg2pp\" (UID: \"608e60b2-3521-42e2-85c9-1a14ee77e2b1\") " pod="calico-system/csi-node-driver-qg2pp" 
Mar 6 00:57:48.016905 kubelet[3346]: I0306 00:57:48.016806 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/608e60b2-3521-42e2-85c9-1a14ee77e2b1-socket-dir\") pod \"csi-node-driver-qg2pp\" (UID: \"608e60b2-3521-42e2-85c9-1a14ee77e2b1\") " pod="calico-system/csi-node-driver-qg2pp" 
Mar 6 00:57:48.062628 containerd[2011]: time="2026-03-06T00:57:48.061789961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g2qw9,Uid:b972a4d5-0823-43d2-916d-a7d1ac6a9616,Namespace:calico-system,Attempt:0,}" 
Error: unexpected end of JSON input" Mar 6 00:57:48.153975 kubelet[3346]: E0306 00:57:48.153869 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:48.153975 kubelet[3346]: W0306 00:57:48.153905 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:48.153975 kubelet[3346]: E0306 00:57:48.153940 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 00:57:48.155844 containerd[2011]: time="2026-03-06T00:57:48.155532582Z" level=info msg="connecting to shim 2b4e607639fe2d2e95af89b4263050a3ba89afb5efd9ae97e7e193ad6adfb5cf" address="unix:///run/containerd/s/d287ac09b723d9698d789661bae09c627247c83d1a4f3b6b8de2a2d17d924559" namespace=k8s.io protocol=ttrpc version=3 Mar 6 00:57:48.158057 kubelet[3346]: E0306 00:57:48.158005 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:48.158612 kubelet[3346]: W0306 00:57:48.158386 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:48.158612 kubelet[3346]: E0306 00:57:48.158430 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 00:57:48.160759 kubelet[3346]: E0306 00:57:48.160557 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:48.161953 kubelet[3346]: W0306 00:57:48.161154 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:48.161953 kubelet[3346]: E0306 00:57:48.161302 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 00:57:48.164211 kubelet[3346]: E0306 00:57:48.163739 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:48.164211 kubelet[3346]: W0306 00:57:48.163779 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:48.164211 kubelet[3346]: E0306 00:57:48.163816 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 00:57:48.165812 kubelet[3346]: E0306 00:57:48.165744 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:48.165812 kubelet[3346]: W0306 00:57:48.165788 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:48.166261 kubelet[3346]: E0306 00:57:48.165826 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 00:57:48.167703 kubelet[3346]: E0306 00:57:48.167606 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:48.167703 kubelet[3346]: W0306 00:57:48.167649 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:48.167703 kubelet[3346]: E0306 00:57:48.167685 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 00:57:48.169179 kubelet[3346]: E0306 00:57:48.168658 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:48.169179 kubelet[3346]: W0306 00:57:48.168688 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:48.169179 kubelet[3346]: E0306 00:57:48.168718 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 00:57:48.170920 kubelet[3346]: E0306 00:57:48.170817 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:48.170920 kubelet[3346]: W0306 00:57:48.170867 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:48.170920 kubelet[3346]: E0306 00:57:48.170904 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 00:57:48.256840 systemd[1]: Started cri-containerd-2b4e607639fe2d2e95af89b4263050a3ba89afb5efd9ae97e7e193ad6adfb5cf.scope - libcontainer container 2b4e607639fe2d2e95af89b4263050a3ba89afb5efd9ae97e7e193ad6adfb5cf. 
Mar 6 00:57:48.335404 containerd[2011]: time="2026-03-06T00:57:48.335325643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56c4587657-ckvvf,Uid:83c5cc7b-5792-4243-b510-471cdfe8983b,Namespace:calico-system,Attempt:0,} returns sandbox id \"5c98a699be9ad7a5538091e06c12c53daeb243b01e187472a8fac28aba6fff2f\"" Mar 6 00:57:48.342728 containerd[2011]: time="2026-03-06T00:57:48.342643927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 6 00:57:48.446467 containerd[2011]: time="2026-03-06T00:57:48.446299783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g2qw9,Uid:b972a4d5-0823-43d2-916d-a7d1ac6a9616,Namespace:calico-system,Attempt:0,} returns sandbox id \"2b4e607639fe2d2e95af89b4263050a3ba89afb5efd9ae97e7e193ad6adfb5cf\"" Mar 6 00:57:49.935653 kubelet[3346]: E0306 00:57:49.935578 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg2pp" podUID="608e60b2-3521-42e2-85c9-1a14ee77e2b1" Mar 6 00:57:50.221276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3900091833.mount: Deactivated successfully.
Mar 6 00:57:51.060516 containerd[2011]: time="2026-03-06T00:57:51.059955956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:57:51.061275 containerd[2011]: time="2026-03-06T00:57:51.061218296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 6 00:57:51.062331 containerd[2011]: time="2026-03-06T00:57:51.062224880Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:57:51.068709 containerd[2011]: time="2026-03-06T00:57:51.068652572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:57:51.070096 containerd[2011]: time="2026-03-06T00:57:51.070028480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.727305653s" Mar 6 00:57:51.070096 containerd[2011]: time="2026-03-06T00:57:51.070089140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 6 00:57:51.073853 containerd[2011]: time="2026-03-06T00:57:51.073801400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 6 00:57:51.104489 containerd[2011]: time="2026-03-06T00:57:51.104391920Z" level=info msg="CreateContainer within sandbox \"5c98a699be9ad7a5538091e06c12c53daeb243b01e187472a8fac28aba6fff2f\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 6 00:57:51.126494 containerd[2011]: time="2026-03-06T00:57:51.126281048Z" level=info msg="Container 6840e28f0c9c3e453d57b0dd72ee8592d1525f1ce69aa4c34311104acdaf835d: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:57:51.134994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3435000163.mount: Deactivated successfully. Mar 6 00:57:51.145957 containerd[2011]: time="2026-03-06T00:57:51.145877781Z" level=info msg="CreateContainer within sandbox \"5c98a699be9ad7a5538091e06c12c53daeb243b01e187472a8fac28aba6fff2f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6840e28f0c9c3e453d57b0dd72ee8592d1525f1ce69aa4c34311104acdaf835d\"" Mar 6 00:57:51.147384 containerd[2011]: time="2026-03-06T00:57:51.147291621Z" level=info msg="StartContainer for \"6840e28f0c9c3e453d57b0dd72ee8592d1525f1ce69aa4c34311104acdaf835d\"" Mar 6 00:57:51.150523 containerd[2011]: time="2026-03-06T00:57:51.150381945Z" level=info msg="connecting to shim 6840e28f0c9c3e453d57b0dd72ee8592d1525f1ce69aa4c34311104acdaf835d" address="unix:///run/containerd/s/8a8f5d18e1adb841eaa96720b8575517d0bb3b250461b91dd78b45d094c0dc3e" protocol=ttrpc version=3 Mar 6 00:57:51.195829 systemd[1]: Started cri-containerd-6840e28f0c9c3e453d57b0dd72ee8592d1525f1ce69aa4c34311104acdaf835d.scope - libcontainer container 6840e28f0c9c3e453d57b0dd72ee8592d1525f1ce69aa4c34311104acdaf835d. 
Mar 6 00:57:51.283588 containerd[2011]: time="2026-03-06T00:57:51.283022517Z" level=info msg="StartContainer for \"6840e28f0c9c3e453d57b0dd72ee8592d1525f1ce69aa4c34311104acdaf835d\" returns successfully" Mar 6 00:57:51.936570 kubelet[3346]: E0306 00:57:51.935996 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg2pp" podUID="608e60b2-3521-42e2-85c9-1a14ee77e2b1" Mar 6 00:57:52.166608 kubelet[3346]: I0306 00:57:52.166486 3346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56c4587657-ckvvf" podStartSLOduration=2.4345608690000002 podStartE2EDuration="5.166434382s" podCreationTimestamp="2026-03-06 00:57:47 +0000 UTC" firstStartedPulling="2026-03-06 00:57:48.340371847 +0000 UTC m=+29.700693413" lastFinishedPulling="2026-03-06 00:57:51.072245348 +0000 UTC m=+32.432566926" observedRunningTime="2026-03-06 00:57:52.165936226 +0000 UTC m=+33.526257852" watchObservedRunningTime="2026-03-06 00:57:52.166434382 +0000 UTC m=+33.526755936" Mar 6 00:57:52.229921 kubelet[3346]: E0306 00:57:52.229738 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:52.229921 kubelet[3346]: W0306 00:57:52.229803 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:52.229921 kubelet[3346]: E0306 00:57:52.229839 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 6 00:57:52.314925 kubelet[3346]: E0306 00:57:52.314766 3346 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 6 00:57:52.314925 kubelet[3346]: W0306 00:57:52.314806 3346 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 6 00:57:52.315121 kubelet[3346]: E0306 00:57:52.315054 3346 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 6 00:57:52.446498 containerd[2011]: time="2026-03-06T00:57:52.445854623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:57:52.448812 containerd[2011]: time="2026-03-06T00:57:52.448760903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 6 00:57:52.451374 containerd[2011]: time="2026-03-06T00:57:52.451231511Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:57:52.457964 containerd[2011]: time="2026-03-06T00:57:52.457876175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:57:52.460147 containerd[2011]: time="2026-03-06T00:57:52.460092923Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.385721355s" Mar 6 00:57:52.460595 containerd[2011]: time="2026-03-06T00:57:52.460306967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 6 00:57:52.470059 containerd[2011]: time="2026-03-06T00:57:52.469980899Z" level=info msg="CreateContainer within sandbox \"2b4e607639fe2d2e95af89b4263050a3ba89afb5efd9ae97e7e193ad6adfb5cf\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 6 00:57:52.487758 containerd[2011]: time="2026-03-06T00:57:52.485691191Z" level=info msg="Container 3e1a2124a79784978873b5b8b4d4e2bd71808c13b0ea214da04e30e8f8419479: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:57:52.514153 containerd[2011]: time="2026-03-06T00:57:52.514095659Z" level=info msg="CreateContainer within sandbox \"2b4e607639fe2d2e95af89b4263050a3ba89afb5efd9ae97e7e193ad6adfb5cf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3e1a2124a79784978873b5b8b4d4e2bd71808c13b0ea214da04e30e8f8419479\"" Mar 6 00:57:52.515038 containerd[2011]: time="2026-03-06T00:57:52.514962239Z" level=info msg="StartContainer for \"3e1a2124a79784978873b5b8b4d4e2bd71808c13b0ea214da04e30e8f8419479\"" Mar 6 00:57:52.518767 containerd[2011]: time="2026-03-06T00:57:52.518698547Z" level=info msg="connecting to shim 3e1a2124a79784978873b5b8b4d4e2bd71808c13b0ea214da04e30e8f8419479" address="unix:///run/containerd/s/d287ac09b723d9698d789661bae09c627247c83d1a4f3b6b8de2a2d17d924559" protocol=ttrpc version=3 Mar 6 00:57:52.572875 systemd[1]: Started cri-containerd-3e1a2124a79784978873b5b8b4d4e2bd71808c13b0ea214da04e30e8f8419479.scope - libcontainer container 3e1a2124a79784978873b5b8b4d4e2bd71808c13b0ea214da04e30e8f8419479. Mar 6 00:57:52.709275 containerd[2011]: time="2026-03-06T00:57:52.709204104Z" level=info msg="StartContainer for \"3e1a2124a79784978873b5b8b4d4e2bd71808c13b0ea214da04e30e8f8419479\" returns successfully" Mar 6 00:57:52.791564 systemd[1]: cri-containerd-3e1a2124a79784978873b5b8b4d4e2bd71808c13b0ea214da04e30e8f8419479.scope: Deactivated successfully. 
Mar 6 00:57:52.799889 containerd[2011]: time="2026-03-06T00:57:52.799435801Z" level=info msg="received container exit event container_id:\"3e1a2124a79784978873b5b8b4d4e2bd71808c13b0ea214da04e30e8f8419479\" id:\"3e1a2124a79784978873b5b8b4d4e2bd71808c13b0ea214da04e30e8f8419479\" pid:4210 exited_at:{seconds:1772758672 nanos:798746593}" Mar 6 00:57:52.874154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e1a2124a79784978873b5b8b4d4e2bd71808c13b0ea214da04e30e8f8419479-rootfs.mount: Deactivated successfully. Mar 6 00:57:53.155976 kubelet[3346]: I0306 00:57:53.155216 3346 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 00:57:53.936152 kubelet[3346]: E0306 00:57:53.935657 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg2pp" podUID="608e60b2-3521-42e2-85c9-1a14ee77e2b1" Mar 6 00:57:54.169983 containerd[2011]: time="2026-03-06T00:57:54.169735836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 6 00:57:55.935919 kubelet[3346]: E0306 00:57:55.935805 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg2pp" podUID="608e60b2-3521-42e2-85c9-1a14ee77e2b1" Mar 6 00:57:57.937089 kubelet[3346]: E0306 00:57:57.936962 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg2pp" podUID="608e60b2-3521-42e2-85c9-1a14ee77e2b1" Mar 6 00:57:59.937619 kubelet[3346]: E0306 
00:57:59.937130 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg2pp" podUID="608e60b2-3521-42e2-85c9-1a14ee77e2b1" Mar 6 00:58:00.776874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1509019417.mount: Deactivated successfully. Mar 6 00:58:00.840805 containerd[2011]: time="2026-03-06T00:58:00.840721617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:00.842312 containerd[2011]: time="2026-03-06T00:58:00.842220477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 6 00:58:00.844504 containerd[2011]: time="2026-03-06T00:58:00.843536409Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:00.857837 containerd[2011]: time="2026-03-06T00:58:00.857749857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:00.859564 containerd[2011]: time="2026-03-06T00:58:00.859477905Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.688713801s" Mar 6 00:58:00.859564 containerd[2011]: time="2026-03-06T00:58:00.859557297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns 
image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 6 00:58:00.869429 containerd[2011]: time="2026-03-06T00:58:00.869160957Z" level=info msg="CreateContainer within sandbox \"2b4e607639fe2d2e95af89b4263050a3ba89afb5efd9ae97e7e193ad6adfb5cf\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 6 00:58:00.886022 containerd[2011]: time="2026-03-06T00:58:00.885959061Z" level=info msg="Container 62eced4b25029698ffe8a569ca5f8e81bc83ebf527962c45df74a8ca458198a0: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:58:00.904531 containerd[2011]: time="2026-03-06T00:58:00.904447149Z" level=info msg="CreateContainer within sandbox \"2b4e607639fe2d2e95af89b4263050a3ba89afb5efd9ae97e7e193ad6adfb5cf\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"62eced4b25029698ffe8a569ca5f8e81bc83ebf527962c45df74a8ca458198a0\"" Mar 6 00:58:00.907760 containerd[2011]: time="2026-03-06T00:58:00.907693305Z" level=info msg="StartContainer for \"62eced4b25029698ffe8a569ca5f8e81bc83ebf527962c45df74a8ca458198a0\"" Mar 6 00:58:00.913479 containerd[2011]: time="2026-03-06T00:58:00.913355601Z" level=info msg="connecting to shim 62eced4b25029698ffe8a569ca5f8e81bc83ebf527962c45df74a8ca458198a0" address="unix:///run/containerd/s/d287ac09b723d9698d789661bae09c627247c83d1a4f3b6b8de2a2d17d924559" protocol=ttrpc version=3 Mar 6 00:58:00.966819 systemd[1]: Started cri-containerd-62eced4b25029698ffe8a569ca5f8e81bc83ebf527962c45df74a8ca458198a0.scope - libcontainer container 62eced4b25029698ffe8a569ca5f8e81bc83ebf527962c45df74a8ca458198a0. Mar 6 00:58:01.094039 containerd[2011]: time="2026-03-06T00:58:01.093853734Z" level=info msg="StartContainer for \"62eced4b25029698ffe8a569ca5f8e81bc83ebf527962c45df74a8ca458198a0\" returns successfully" Mar 6 00:58:01.308846 systemd[1]: cri-containerd-62eced4b25029698ffe8a569ca5f8e81bc83ebf527962c45df74a8ca458198a0.scope: Deactivated successfully. 
Mar 6 00:58:01.316322 containerd[2011]: time="2026-03-06T00:58:01.315968719Z" level=info msg="received container exit event container_id:\"62eced4b25029698ffe8a569ca5f8e81bc83ebf527962c45df74a8ca458198a0\" id:\"62eced4b25029698ffe8a569ca5f8e81bc83ebf527962c45df74a8ca458198a0\" pid:4264 exited_at:{seconds:1772758681 nanos:315604231}" Mar 6 00:58:01.777746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62eced4b25029698ffe8a569ca5f8e81bc83ebf527962c45df74a8ca458198a0-rootfs.mount: Deactivated successfully. Mar 6 00:58:01.935658 kubelet[3346]: E0306 00:58:01.935571 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg2pp" podUID="608e60b2-3521-42e2-85c9-1a14ee77e2b1" Mar 6 00:58:02.214037 containerd[2011]: time="2026-03-06T00:58:02.213771548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 6 00:58:02.286730 kubelet[3346]: I0306 00:58:02.286665 3346 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 00:58:03.936186 kubelet[3346]: E0306 00:58:03.936078 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg2pp" podUID="608e60b2-3521-42e2-85c9-1a14ee77e2b1" Mar 6 00:58:05.516935 containerd[2011]: time="2026-03-06T00:58:05.516856260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:05.518728 containerd[2011]: time="2026-03-06T00:58:05.518609580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 6 00:58:05.520436 
containerd[2011]: time="2026-03-06T00:58:05.520331376Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:05.533704 containerd[2011]: time="2026-03-06T00:58:05.533592288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:05.536490 containerd[2011]: time="2026-03-06T00:58:05.534776724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.320875336s" Mar 6 00:58:05.536490 containerd[2011]: time="2026-03-06T00:58:05.534851268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 6 00:58:05.549081 containerd[2011]: time="2026-03-06T00:58:05.549010932Z" level=info msg="CreateContainer within sandbox \"2b4e607639fe2d2e95af89b4263050a3ba89afb5efd9ae97e7e193ad6adfb5cf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 6 00:58:05.566951 containerd[2011]: time="2026-03-06T00:58:05.566849832Z" level=info msg="Container 36de8d8a6eb76e7ef64edfe8f8baa4ffdd11157af48f1cf282040eb33d0c0664: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:58:05.589077 containerd[2011]: time="2026-03-06T00:58:05.588958836Z" level=info msg="CreateContainer within sandbox \"2b4e607639fe2d2e95af89b4263050a3ba89afb5efd9ae97e7e193ad6adfb5cf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"36de8d8a6eb76e7ef64edfe8f8baa4ffdd11157af48f1cf282040eb33d0c0664\"" 
Mar 6 00:58:05.590485 containerd[2011]: time="2026-03-06T00:58:05.590384700Z" level=info msg="StartContainer for \"36de8d8a6eb76e7ef64edfe8f8baa4ffdd11157af48f1cf282040eb33d0c0664\"" Mar 6 00:58:05.594488 containerd[2011]: time="2026-03-06T00:58:05.594385140Z" level=info msg="connecting to shim 36de8d8a6eb76e7ef64edfe8f8baa4ffdd11157af48f1cf282040eb33d0c0664" address="unix:///run/containerd/s/d287ac09b723d9698d789661bae09c627247c83d1a4f3b6b8de2a2d17d924559" protocol=ttrpc version=3 Mar 6 00:58:05.658897 systemd[1]: Started cri-containerd-36de8d8a6eb76e7ef64edfe8f8baa4ffdd11157af48f1cf282040eb33d0c0664.scope - libcontainer container 36de8d8a6eb76e7ef64edfe8f8baa4ffdd11157af48f1cf282040eb33d0c0664. Mar 6 00:58:05.797860 containerd[2011]: time="2026-03-06T00:58:05.797533981Z" level=info msg="StartContainer for \"36de8d8a6eb76e7ef64edfe8f8baa4ffdd11157af48f1cf282040eb33d0c0664\" returns successfully" Mar 6 00:58:05.935613 kubelet[3346]: E0306 00:58:05.935387 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qg2pp" podUID="608e60b2-3521-42e2-85c9-1a14ee77e2b1" Mar 6 00:58:07.370764 containerd[2011]: time="2026-03-06T00:58:07.370672561Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 6 00:58:07.376347 systemd[1]: cri-containerd-36de8d8a6eb76e7ef64edfe8f8baa4ffdd11157af48f1cf282040eb33d0c0664.scope: Deactivated successfully. Mar 6 00:58:07.377897 systemd[1]: cri-containerd-36de8d8a6eb76e7ef64edfe8f8baa4ffdd11157af48f1cf282040eb33d0c0664.scope: Consumed 1.190s CPU time, 183.3M memory peak, 1.3M read from disk, 171.3M written to disk. 
Mar 6 00:58:07.383841 containerd[2011]: time="2026-03-06T00:58:07.383446861Z" level=info msg="received container exit event container_id:\"36de8d8a6eb76e7ef64edfe8f8baa4ffdd11157af48f1cf282040eb33d0c0664\" id:\"36de8d8a6eb76e7ef64edfe8f8baa4ffdd11157af48f1cf282040eb33d0c0664\" pid:4325 exited_at:{seconds:1772758687 nanos:382911973}" Mar 6 00:58:07.435899 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36de8d8a6eb76e7ef64edfe8f8baa4ffdd11157af48f1cf282040eb33d0c0664-rootfs.mount: Deactivated successfully. Mar 6 00:58:07.444402 kubelet[3346]: I0306 00:58:07.444359 3346 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 6 00:58:07.573413 systemd[1]: Created slice kubepods-burstable-podbce0721f_75f9_4109_b6db_0d0ca49740fe.slice - libcontainer container kubepods-burstable-podbce0721f_75f9_4109_b6db_0d0ca49740fe.slice. Mar 6 00:58:07.619500 kubelet[3346]: I0306 00:58:07.618775 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqr4p\" (UniqueName: \"kubernetes.io/projected/d19574a4-d3dc-47a1-b931-8728fdbc3418-kube-api-access-kqr4p\") pod \"whisker-7c59cf5869-vdvrm\" (UID: \"d19574a4-d3dc-47a1-b931-8728fdbc3418\") " pod="calico-system/whisker-7c59cf5869-vdvrm" Mar 6 00:58:07.622617 kubelet[3346]: I0306 00:58:07.621614 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d19574a4-d3dc-47a1-b931-8728fdbc3418-nginx-config\") pod \"whisker-7c59cf5869-vdvrm\" (UID: \"d19574a4-d3dc-47a1-b931-8728fdbc3418\") " pod="calico-system/whisker-7c59cf5869-vdvrm" Mar 6 00:58:07.622173 systemd[1]: Created slice kubepods-burstable-pod36d8fa48_f26e_449b_b7be_412a82e9724f.slice - libcontainer container kubepods-burstable-pod36d8fa48_f26e_449b_b7be_412a82e9724f.slice. 
Mar 6 00:58:07.625501 kubelet[3346]: I0306 00:58:07.624226 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp8pc\" (UniqueName: \"kubernetes.io/projected/bce0721f-75f9-4109-b6db-0d0ca49740fe-kube-api-access-wp8pc\") pod \"coredns-674b8bbfcf-7d46d\" (UID: \"bce0721f-75f9-4109-b6db-0d0ca49740fe\") " pod="kube-system/coredns-674b8bbfcf-7d46d" Mar 6 00:58:07.625501 kubelet[3346]: I0306 00:58:07.624302 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/36d8fa48-f26e-449b-b7be-412a82e9724f-config-volume\") pod \"coredns-674b8bbfcf-7p58v\" (UID: \"36d8fa48-f26e-449b-b7be-412a82e9724f\") " pod="kube-system/coredns-674b8bbfcf-7p58v" Mar 6 00:58:07.625501 kubelet[3346]: I0306 00:58:07.624341 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19574a4-d3dc-47a1-b931-8728fdbc3418-whisker-ca-bundle\") pod \"whisker-7c59cf5869-vdvrm\" (UID: \"d19574a4-d3dc-47a1-b931-8728fdbc3418\") " pod="calico-system/whisker-7c59cf5869-vdvrm" Mar 6 00:58:07.625501 kubelet[3346]: I0306 00:58:07.624387 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bce0721f-75f9-4109-b6db-0d0ca49740fe-config-volume\") pod \"coredns-674b8bbfcf-7d46d\" (UID: \"bce0721f-75f9-4109-b6db-0d0ca49740fe\") " pod="kube-system/coredns-674b8bbfcf-7d46d" Mar 6 00:58:07.625501 kubelet[3346]: I0306 00:58:07.624428 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8s5m\" (UniqueName: \"kubernetes.io/projected/36d8fa48-f26e-449b-b7be-412a82e9724f-kube-api-access-m8s5m\") pod \"coredns-674b8bbfcf-7p58v\" (UID: \"36d8fa48-f26e-449b-b7be-412a82e9724f\") " 
pod="kube-system/coredns-674b8bbfcf-7p58v" Mar 6 00:58:07.625886 kubelet[3346]: I0306 00:58:07.624710 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d19574a4-d3dc-47a1-b931-8728fdbc3418-whisker-backend-key-pair\") pod \"whisker-7c59cf5869-vdvrm\" (UID: \"d19574a4-d3dc-47a1-b931-8728fdbc3418\") " pod="calico-system/whisker-7c59cf5869-vdvrm" Mar 6 00:58:07.657627 systemd[1]: Created slice kubepods-besteffort-pod69311b7a_6d88_4316_8584_5f01cf4ac2b4.slice - libcontainer container kubepods-besteffort-pod69311b7a_6d88_4316_8584_5f01cf4ac2b4.slice. Mar 6 00:58:07.679632 systemd[1]: Created slice kubepods-besteffort-pod670e25a3_881a_4bd2_bf8f_9378935e262b.slice - libcontainer container kubepods-besteffort-pod670e25a3_881a_4bd2_bf8f_9378935e262b.slice. Mar 6 00:58:07.706549 systemd[1]: Created slice kubepods-besteffort-podd19574a4_d3dc_47a1_b931_8728fdbc3418.slice - libcontainer container kubepods-besteffort-podd19574a4_d3dc_47a1_b931_8728fdbc3418.slice. Mar 6 00:58:07.724829 systemd[1]: Created slice kubepods-besteffort-pod0a33d7ba_f1d0_47ea_a80d_d50ce24ae1a9.slice - libcontainer container kubepods-besteffort-pod0a33d7ba_f1d0_47ea_a80d_d50ce24ae1a9.slice. 
Mar 6 00:58:07.727294 kubelet[3346]: I0306 00:58:07.727244 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9-config\") pod \"goldmane-5b85766d88-qtd2z\" (UID: \"0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9\") " pod="calico-system/goldmane-5b85766d88-qtd2z" Mar 6 00:58:07.727996 kubelet[3346]: I0306 00:58:07.727900 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-qtd2z\" (UID: \"0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9\") " pod="calico-system/goldmane-5b85766d88-qtd2z" Mar 6 00:58:07.730406 kubelet[3346]: I0306 00:58:07.730239 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2b510939-855d-4b9d-85ce-7833d3ee5cac-calico-apiserver-certs\") pod \"calico-apiserver-5cdfc896d5-gvch9\" (UID: \"2b510939-855d-4b9d-85ce-7833d3ee5cac\") " pod="calico-system/calico-apiserver-5cdfc896d5-gvch9" Mar 6 00:58:07.731244 kubelet[3346]: I0306 00:58:07.730828 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69311b7a-6d88-4316-8584-5f01cf4ac2b4-tigera-ca-bundle\") pod \"calico-kube-controllers-7b5646b546-vb6xx\" (UID: \"69311b7a-6d88-4316-8584-5f01cf4ac2b4\") " pod="calico-system/calico-kube-controllers-7b5646b546-vb6xx" Mar 6 00:58:07.731671 kubelet[3346]: I0306 00:58:07.731542 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9-goldmane-key-pair\") pod \"goldmane-5b85766d88-qtd2z\" (UID: 
\"0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9\") " pod="calico-system/goldmane-5b85766d88-qtd2z" Mar 6 00:58:07.731860 kubelet[3346]: I0306 00:58:07.731824 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxv4f\" (UniqueName: \"kubernetes.io/projected/69311b7a-6d88-4316-8584-5f01cf4ac2b4-kube-api-access-zxv4f\") pod \"calico-kube-controllers-7b5646b546-vb6xx\" (UID: \"69311b7a-6d88-4316-8584-5f01cf4ac2b4\") " pod="calico-system/calico-kube-controllers-7b5646b546-vb6xx" Mar 6 00:58:07.732287 kubelet[3346]: I0306 00:58:07.732115 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp5h6\" (UniqueName: \"kubernetes.io/projected/2b510939-855d-4b9d-85ce-7833d3ee5cac-kube-api-access-tp5h6\") pod \"calico-apiserver-5cdfc896d5-gvch9\" (UID: \"2b510939-855d-4b9d-85ce-7833d3ee5cac\") " pod="calico-system/calico-apiserver-5cdfc896d5-gvch9" Mar 6 00:58:07.732583 kubelet[3346]: I0306 00:58:07.732254 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/670e25a3-881a-4bd2-bf8f-9378935e262b-calico-apiserver-certs\") pod \"calico-apiserver-5cdfc896d5-kqp6p\" (UID: \"670e25a3-881a-4bd2-bf8f-9378935e262b\") " pod="calico-system/calico-apiserver-5cdfc896d5-kqp6p" Mar 6 00:58:07.732910 kubelet[3346]: I0306 00:58:07.732844 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzflp\" (UniqueName: \"kubernetes.io/projected/0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9-kube-api-access-nzflp\") pod \"goldmane-5b85766d88-qtd2z\" (UID: \"0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9\") " pod="calico-system/goldmane-5b85766d88-qtd2z" Mar 6 00:58:07.733222 kubelet[3346]: I0306 00:58:07.733101 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzc58\" 
(UniqueName: \"kubernetes.io/projected/670e25a3-881a-4bd2-bf8f-9378935e262b-kube-api-access-kzc58\") pod \"calico-apiserver-5cdfc896d5-kqp6p\" (UID: \"670e25a3-881a-4bd2-bf8f-9378935e262b\") " pod="calico-system/calico-apiserver-5cdfc896d5-kqp6p" Mar 6 00:58:07.752646 systemd[1]: Created slice kubepods-besteffort-pod2b510939_855d_4b9d_85ce_7833d3ee5cac.slice - libcontainer container kubepods-besteffort-pod2b510939_855d_4b9d_85ce_7833d3ee5cac.slice. Mar 6 00:58:07.917378 containerd[2011]: time="2026-03-06T00:58:07.917078884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7d46d,Uid:bce0721f-75f9-4109-b6db-0d0ca49740fe,Namespace:kube-system,Attempt:0,}" Mar 6 00:58:07.944132 containerd[2011]: time="2026-03-06T00:58:07.944038588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7p58v,Uid:36d8fa48-f26e-449b-b7be-412a82e9724f,Namespace:kube-system,Attempt:0,}" Mar 6 00:58:07.964890 systemd[1]: Created slice kubepods-besteffort-pod608e60b2_3521_42e2_85c9_1a14ee77e2b1.slice - libcontainer container kubepods-besteffort-pod608e60b2_3521_42e2_85c9_1a14ee77e2b1.slice. 
Mar 6 00:58:07.979507 containerd[2011]: time="2026-03-06T00:58:07.979408192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qg2pp,Uid:608e60b2-3521-42e2-85c9-1a14ee77e2b1,Namespace:calico-system,Attempt:0,}" Mar 6 00:58:07.981627 containerd[2011]: time="2026-03-06T00:58:07.980079832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b5646b546-vb6xx,Uid:69311b7a-6d88-4316-8584-5f01cf4ac2b4,Namespace:calico-system,Attempt:0,}" Mar 6 00:58:08.000613 containerd[2011]: time="2026-03-06T00:58:07.999733024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cdfc896d5-kqp6p,Uid:670e25a3-881a-4bd2-bf8f-9378935e262b,Namespace:calico-system,Attempt:0,}" Mar 6 00:58:08.021736 containerd[2011]: time="2026-03-06T00:58:08.021547776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c59cf5869-vdvrm,Uid:d19574a4-d3dc-47a1-b931-8728fdbc3418,Namespace:calico-system,Attempt:0,}" Mar 6 00:58:08.063661 containerd[2011]: time="2026-03-06T00:58:08.063573289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-qtd2z,Uid:0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9,Namespace:calico-system,Attempt:0,}" Mar 6 00:58:08.081978 containerd[2011]: time="2026-03-06T00:58:08.081899593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cdfc896d5-gvch9,Uid:2b510939-855d-4b9d-85ce-7833d3ee5cac,Namespace:calico-system,Attempt:0,}" Mar 6 00:58:08.466497 containerd[2011]: time="2026-03-06T00:58:08.464076939Z" level=info msg="CreateContainer within sandbox \"2b4e607639fe2d2e95af89b4263050a3ba89afb5efd9ae97e7e193ad6adfb5cf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 6 00:58:08.662917 containerd[2011]: time="2026-03-06T00:58:08.662596516Z" level=error msg="Failed to destroy network for sandbox \"7fe9fa62b2df9e937f067fed78d19eaa35184e2b6a25203f273cf0db4fa2bebf\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.669104 systemd[1]: run-netns-cni\x2d2adaf5aa\x2d61d0\x2d9280\x2d6cb0\x2d9733a23aadd8.mount: Deactivated successfully. Mar 6 00:58:08.672006 containerd[2011]: time="2026-03-06T00:58:08.671829892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b5646b546-vb6xx,Uid:69311b7a-6d88-4316-8584-5f01cf4ac2b4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fe9fa62b2df9e937f067fed78d19eaa35184e2b6a25203f273cf0db4fa2bebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.682871 containerd[2011]: time="2026-03-06T00:58:08.682595992Z" level=error msg="Failed to destroy network for sandbox \"ecd978e00be7445933631adc046274fce79cb3103761b11e741485d5c102df1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.683978 containerd[2011]: time="2026-03-06T00:58:08.683763208Z" level=error msg="Failed to destroy network for sandbox \"7b4db09f08fa77ee4b2430d7be8e821556e538a7bbaae437c2f96fc9b73ce3e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.691033 kubelet[3346]: E0306 00:58:08.690787 3346 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fe9fa62b2df9e937f067fed78d19eaa35184e2b6a25203f273cf0db4fa2bebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.691033 kubelet[3346]: E0306 00:58:08.690896 3346 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fe9fa62b2df9e937f067fed78d19eaa35184e2b6a25203f273cf0db4fa2bebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b5646b546-vb6xx" Mar 6 00:58:08.691033 kubelet[3346]: E0306 00:58:08.690933 3346 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fe9fa62b2df9e937f067fed78d19eaa35184e2b6a25203f273cf0db4fa2bebf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b5646b546-vb6xx" Mar 6 00:58:08.697439 kubelet[3346]: E0306 00:58:08.691032 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b5646b546-vb6xx_calico-system(69311b7a-6d88-4316-8584-5f01cf4ac2b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b5646b546-vb6xx_calico-system(69311b7a-6d88-4316-8584-5f01cf4ac2b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fe9fa62b2df9e937f067fed78d19eaa35184e2b6a25203f273cf0db4fa2bebf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b5646b546-vb6xx" podUID="69311b7a-6d88-4316-8584-5f01cf4ac2b4" Mar 6 00:58:08.692012 systemd[1]: 
run-netns-cni\x2d41d7677d\x2d1680\x2dca02\x2d1c26\x2d4e7e4596ab14.mount: Deactivated successfully. Mar 6 00:58:08.704047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount51902490.mount: Deactivated successfully. Mar 6 00:58:08.704294 systemd[1]: run-netns-cni\x2dba87bfa8\x2db11c\x2db8ea\x2d6baf\x2d7d32ca6792d3.mount: Deactivated successfully. Mar 6 00:58:08.706590 containerd[2011]: time="2026-03-06T00:58:08.706132888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c59cf5869-vdvrm,Uid:d19574a4-d3dc-47a1-b931-8728fdbc3418,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecd978e00be7445933631adc046274fce79cb3103761b11e741485d5c102df1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.707996 kubelet[3346]: E0306 00:58:08.707818 3346 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecd978e00be7445933631adc046274fce79cb3103761b11e741485d5c102df1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.708966 kubelet[3346]: E0306 00:58:08.707954 3346 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecd978e00be7445933631adc046274fce79cb3103761b11e741485d5c102df1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c59cf5869-vdvrm" Mar 6 00:58:08.709649 kubelet[3346]: E0306 00:58:08.708217 3346 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecd978e00be7445933631adc046274fce79cb3103761b11e741485d5c102df1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c59cf5869-vdvrm" Mar 6 00:58:08.710343 kubelet[3346]: E0306 00:58:08.709965 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7c59cf5869-vdvrm_calico-system(d19574a4-d3dc-47a1-b931-8728fdbc3418)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7c59cf5869-vdvrm_calico-system(d19574a4-d3dc-47a1-b931-8728fdbc3418)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecd978e00be7445933631adc046274fce79cb3103761b11e741485d5c102df1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c59cf5869-vdvrm" podUID="d19574a4-d3dc-47a1-b931-8728fdbc3418" Mar 6 00:58:08.714130 containerd[2011]: time="2026-03-06T00:58:08.713714992Z" level=info msg="Container d261df5f01153373aba38d1f4817904210f5fff7b696a46e19cc00cf66beaf61: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:58:08.717196 containerd[2011]: time="2026-03-06T00:58:08.716805652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7p58v,Uid:36d8fa48-f26e-449b-b7be-412a82e9724f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b4db09f08fa77ee4b2430d7be8e821556e538a7bbaae437c2f96fc9b73ce3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.719701 kubelet[3346]: 
E0306 00:58:08.718101 3346 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b4db09f08fa77ee4b2430d7be8e821556e538a7bbaae437c2f96fc9b73ce3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.719701 kubelet[3346]: E0306 00:58:08.718638 3346 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b4db09f08fa77ee4b2430d7be8e821556e538a7bbaae437c2f96fc9b73ce3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7p58v" Mar 6 00:58:08.719701 kubelet[3346]: E0306 00:58:08.718789 3346 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b4db09f08fa77ee4b2430d7be8e821556e538a7bbaae437c2f96fc9b73ce3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7p58v" Mar 6 00:58:08.720260 kubelet[3346]: E0306 00:58:08.718884 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7p58v_kube-system(36d8fa48-f26e-449b-b7be-412a82e9724f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7p58v_kube-system(36d8fa48-f26e-449b-b7be-412a82e9724f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b4db09f08fa77ee4b2430d7be8e821556e538a7bbaae437c2f96fc9b73ce3e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7p58v" podUID="36d8fa48-f26e-449b-b7be-412a82e9724f" Mar 6 00:58:08.742873 containerd[2011]: time="2026-03-06T00:58:08.742501132Z" level=error msg="Failed to destroy network for sandbox \"28bddbaa80527ec6efb37028f5c5f87222b5fe68b469d775260f7496e3ac19aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.749051 containerd[2011]: time="2026-03-06T00:58:08.748952032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7d46d,Uid:bce0721f-75f9-4109-b6db-0d0ca49740fe,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"28bddbaa80527ec6efb37028f5c5f87222b5fe68b469d775260f7496e3ac19aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.750203 kubelet[3346]: E0306 00:58:08.749444 3346 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28bddbaa80527ec6efb37028f5c5f87222b5fe68b469d775260f7496e3ac19aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.750953 kubelet[3346]: E0306 00:58:08.750498 3346 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28bddbaa80527ec6efb37028f5c5f87222b5fe68b469d775260f7496e3ac19aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7d46d" Mar 6 00:58:08.751328 kubelet[3346]: E0306 00:58:08.750804 3346 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28bddbaa80527ec6efb37028f5c5f87222b5fe68b469d775260f7496e3ac19aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7d46d" Mar 6 00:58:08.752102 kubelet[3346]: E0306 00:58:08.751947 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7d46d_kube-system(bce0721f-75f9-4109-b6db-0d0ca49740fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7d46d_kube-system(bce0721f-75f9-4109-b6db-0d0ca49740fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28bddbaa80527ec6efb37028f5c5f87222b5fe68b469d775260f7496e3ac19aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7d46d" podUID="bce0721f-75f9-4109-b6db-0d0ca49740fe" Mar 6 00:58:08.756424 containerd[2011]: time="2026-03-06T00:58:08.756203548Z" level=error msg="Failed to destroy network for sandbox \"2bc1fefe6e4528533fc472c764a45db2f821036da08db49f8b197eba5836a932\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.762359 containerd[2011]: time="2026-03-06T00:58:08.762281692Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-5b85766d88-qtd2z,Uid:0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bc1fefe6e4528533fc472c764a45db2f821036da08db49f8b197eba5836a932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.763499 kubelet[3346]: E0306 00:58:08.762984 3346 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bc1fefe6e4528533fc472c764a45db2f821036da08db49f8b197eba5836a932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.763499 kubelet[3346]: E0306 00:58:08.763081 3346 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bc1fefe6e4528533fc472c764a45db2f821036da08db49f8b197eba5836a932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-qtd2z" Mar 6 00:58:08.763499 kubelet[3346]: E0306 00:58:08.763119 3346 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bc1fefe6e4528533fc472c764a45db2f821036da08db49f8b197eba5836a932\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-qtd2z" Mar 6 00:58:08.763797 kubelet[3346]: E0306 00:58:08.763211 3346 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-qtd2z_calico-system(0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-qtd2z_calico-system(0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bc1fefe6e4528533fc472c764a45db2f821036da08db49f8b197eba5836a932\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-qtd2z" podUID="0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9" Mar 6 00:58:08.778066 containerd[2011]: time="2026-03-06T00:58:08.777933532Z" level=error msg="Failed to destroy network for sandbox \"7f6ca06562baa6ca3cc766f9563645e7940913154180f4f1946d598d76c7bcd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.779315 containerd[2011]: time="2026-03-06T00:58:08.779239720Z" level=info msg="CreateContainer within sandbox \"2b4e607639fe2d2e95af89b4263050a3ba89afb5efd9ae97e7e193ad6adfb5cf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d261df5f01153373aba38d1f4817904210f5fff7b696a46e19cc00cf66beaf61\"" Mar 6 00:58:08.782537 containerd[2011]: time="2026-03-06T00:58:08.782063620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qg2pp,Uid:608e60b2-3521-42e2-85c9-1a14ee77e2b1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f6ca06562baa6ca3cc766f9563645e7940913154180f4f1946d598d76c7bcd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 6 00:58:08.783847 containerd[2011]: time="2026-03-06T00:58:08.783756868Z" level=info msg="StartContainer for \"d261df5f01153373aba38d1f4817904210f5fff7b696a46e19cc00cf66beaf61\"" Mar 6 00:58:08.784381 kubelet[3346]: E0306 00:58:08.784069 3346 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f6ca06562baa6ca3cc766f9563645e7940913154180f4f1946d598d76c7bcd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.784381 kubelet[3346]: E0306 00:58:08.784171 3346 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f6ca06562baa6ca3cc766f9563645e7940913154180f4f1946d598d76c7bcd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qg2pp" Mar 6 00:58:08.784381 kubelet[3346]: E0306 00:58:08.784219 3346 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f6ca06562baa6ca3cc766f9563645e7940913154180f4f1946d598d76c7bcd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qg2pp" Mar 6 00:58:08.785300 kubelet[3346]: E0306 00:58:08.784323 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qg2pp_calico-system(608e60b2-3521-42e2-85c9-1a14ee77e2b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qg2pp_calico-system(608e60b2-3521-42e2-85c9-1a14ee77e2b1)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f6ca06562baa6ca3cc766f9563645e7940913154180f4f1946d598d76c7bcd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qg2pp" podUID="608e60b2-3521-42e2-85c9-1a14ee77e2b1" Mar 6 00:58:08.786855 containerd[2011]: time="2026-03-06T00:58:08.786771856Z" level=error msg="Failed to destroy network for sandbox \"284cb3b40b13ba5e0910811147e0fbd21380fc56bb6642b9eb5aacb1b53bd315\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.790734 containerd[2011]: time="2026-03-06T00:58:08.790634980Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cdfc896d5-kqp6p,Uid:670e25a3-881a-4bd2-bf8f-9378935e262b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"284cb3b40b13ba5e0910811147e0fbd21380fc56bb6642b9eb5aacb1b53bd315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.791436 kubelet[3346]: E0306 00:58:08.791114 3346 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"284cb3b40b13ba5e0910811147e0fbd21380fc56bb6642b9eb5aacb1b53bd315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.791436 kubelet[3346]: E0306 00:58:08.791202 3346 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"284cb3b40b13ba5e0910811147e0fbd21380fc56bb6642b9eb5aacb1b53bd315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5cdfc896d5-kqp6p" Mar 6 00:58:08.791436 kubelet[3346]: E0306 00:58:08.791248 3346 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"284cb3b40b13ba5e0910811147e0fbd21380fc56bb6642b9eb5aacb1b53bd315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5cdfc896d5-kqp6p" Mar 6 00:58:08.791741 kubelet[3346]: E0306 00:58:08.791323 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cdfc896d5-kqp6p_calico-system(670e25a3-881a-4bd2-bf8f-9378935e262b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cdfc896d5-kqp6p_calico-system(670e25a3-881a-4bd2-bf8f-9378935e262b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"284cb3b40b13ba5e0910811147e0fbd21380fc56bb6642b9eb5aacb1b53bd315\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5cdfc896d5-kqp6p" podUID="670e25a3-881a-4bd2-bf8f-9378935e262b" Mar 6 00:58:08.794273 containerd[2011]: time="2026-03-06T00:58:08.793396324Z" level=info msg="connecting to shim d261df5f01153373aba38d1f4817904210f5fff7b696a46e19cc00cf66beaf61" address="unix:///run/containerd/s/d287ac09b723d9698d789661bae09c627247c83d1a4f3b6b8de2a2d17d924559" protocol=ttrpc version=3 Mar 6 00:58:08.828427 
containerd[2011]: time="2026-03-06T00:58:08.828364636Z" level=error msg="Failed to destroy network for sandbox \"bbcafed598ba9727e3ae833e574d09d0747e72a19e7dff06b98db58629a7579b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.832298 containerd[2011]: time="2026-03-06T00:58:08.832217728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cdfc896d5-gvch9,Uid:2b510939-855d-4b9d-85ce-7833d3ee5cac,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbcafed598ba9727e3ae833e574d09d0747e72a19e7dff06b98db58629a7579b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.834697 kubelet[3346]: E0306 00:58:08.834627 3346 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbcafed598ba9727e3ae833e574d09d0747e72a19e7dff06b98db58629a7579b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 6 00:58:08.835192 kubelet[3346]: E0306 00:58:08.835130 3346 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbcafed598ba9727e3ae833e574d09d0747e72a19e7dff06b98db58629a7579b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5cdfc896d5-gvch9" Mar 6 00:58:08.835437 kubelet[3346]: E0306 00:58:08.835376 3346 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbcafed598ba9727e3ae833e574d09d0747e72a19e7dff06b98db58629a7579b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5cdfc896d5-gvch9" Mar 6 00:58:08.836808 kubelet[3346]: E0306 00:58:08.836419 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cdfc896d5-gvch9_calico-system(2b510939-855d-4b9d-85ce-7833d3ee5cac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cdfc896d5-gvch9_calico-system(2b510939-855d-4b9d-85ce-7833d3ee5cac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbcafed598ba9727e3ae833e574d09d0747e72a19e7dff06b98db58629a7579b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5cdfc896d5-gvch9" podUID="2b510939-855d-4b9d-85ce-7833d3ee5cac" Mar 6 00:58:08.847799 systemd[1]: Started cri-containerd-d261df5f01153373aba38d1f4817904210f5fff7b696a46e19cc00cf66beaf61.scope - libcontainer container d261df5f01153373aba38d1f4817904210f5fff7b696a46e19cc00cf66beaf61. Mar 6 00:58:09.021175 containerd[2011]: time="2026-03-06T00:58:09.021086353Z" level=info msg="StartContainer for \"d261df5f01153373aba38d1f4817904210f5fff7b696a46e19cc00cf66beaf61\" returns successfully" Mar 6 00:58:09.450855 systemd[1]: run-netns-cni\x2d6e660115\x2d462e\x2dd973\x2dc077\x2de2570225a185.mount: Deactivated successfully. Mar 6 00:58:09.451382 systemd[1]: run-netns-cni\x2d5ad3abc9\x2da3d9\x2d27fe\x2d6d2c\x2db6c9357d8ded.mount: Deactivated successfully. 
Mar 6 00:58:09.452421 systemd[1]: run-netns-cni\x2d2ec599ee\x2d77fc\x2d9cba\x2d0fd2\x2db2dd6dd52082.mount: Deactivated successfully. Mar 6 00:58:09.452860 systemd[1]: run-netns-cni\x2df45a1bbf\x2d933d\x2d6813\x2d32fd\x2dfa650145f8ea.mount: Deactivated successfully. Mar 6 00:58:09.453392 systemd[1]: run-netns-cni\x2d69cd74de\x2dbca3\x2d24d4\x2d995a\x2d292a773b6470.mount: Deactivated successfully. Mar 6 00:58:09.477485 kubelet[3346]: I0306 00:58:09.477354 3346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g2qw9" podStartSLOduration=5.391933835 podStartE2EDuration="22.477326764s" podCreationTimestamp="2026-03-06 00:57:47 +0000 UTC" firstStartedPulling="2026-03-06 00:57:48.454209883 +0000 UTC m=+29.814531449" lastFinishedPulling="2026-03-06 00:58:05.539602824 +0000 UTC m=+46.899924378" observedRunningTime="2026-03-06 00:58:09.474861688 +0000 UTC m=+50.835183278" watchObservedRunningTime="2026-03-06 00:58:09.477326764 +0000 UTC m=+50.837648330" Mar 6 00:58:09.559583 kubelet[3346]: I0306 00:58:09.559107 3346 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d19574a4-d3dc-47a1-b931-8728fdbc3418-whisker-backend-key-pair\") pod \"d19574a4-d3dc-47a1-b931-8728fdbc3418\" (UID: \"d19574a4-d3dc-47a1-b931-8728fdbc3418\") " Mar 6 00:58:09.561499 kubelet[3346]: I0306 00:58:09.560948 3346 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d19574a4-d3dc-47a1-b931-8728fdbc3418-nginx-config\") pod \"d19574a4-d3dc-47a1-b931-8728fdbc3418\" (UID: \"d19574a4-d3dc-47a1-b931-8728fdbc3418\") " Mar 6 00:58:09.561499 kubelet[3346]: I0306 00:58:09.561077 3346 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqr4p\" (UniqueName: \"kubernetes.io/projected/d19574a4-d3dc-47a1-b931-8728fdbc3418-kube-api-access-kqr4p\") pod 
\"d19574a4-d3dc-47a1-b931-8728fdbc3418\" (UID: \"d19574a4-d3dc-47a1-b931-8728fdbc3418\") " Mar 6 00:58:09.561499 kubelet[3346]: I0306 00:58:09.561124 3346 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19574a4-d3dc-47a1-b931-8728fdbc3418-whisker-ca-bundle\") pod \"d19574a4-d3dc-47a1-b931-8728fdbc3418\" (UID: \"d19574a4-d3dc-47a1-b931-8728fdbc3418\") " Mar 6 00:58:09.563247 kubelet[3346]: I0306 00:58:09.563159 3346 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19574a4-d3dc-47a1-b931-8728fdbc3418-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "d19574a4-d3dc-47a1-b931-8728fdbc3418" (UID: "d19574a4-d3dc-47a1-b931-8728fdbc3418"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 00:58:09.568852 kubelet[3346]: I0306 00:58:09.568771 3346 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19574a4-d3dc-47a1-b931-8728fdbc3418-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d19574a4-d3dc-47a1-b931-8728fdbc3418" (UID: "d19574a4-d3dc-47a1-b931-8728fdbc3418"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 6 00:58:09.575886 kubelet[3346]: I0306 00:58:09.575701 3346 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19574a4-d3dc-47a1-b931-8728fdbc3418-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d19574a4-d3dc-47a1-b931-8728fdbc3418" (UID: "d19574a4-d3dc-47a1-b931-8728fdbc3418"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 6 00:58:09.576761 kubelet[3346]: I0306 00:58:09.576646 3346 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19574a4-d3dc-47a1-b931-8728fdbc3418-kube-api-access-kqr4p" (OuterVolumeSpecName: "kube-api-access-kqr4p") pod "d19574a4-d3dc-47a1-b931-8728fdbc3418" (UID: "d19574a4-d3dc-47a1-b931-8728fdbc3418"). InnerVolumeSpecName "kube-api-access-kqr4p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 6 00:58:09.583902 systemd[1]: var-lib-kubelet-pods-d19574a4\x2dd3dc\x2d47a1\x2db931\x2d8728fdbc3418-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkqr4p.mount: Deactivated successfully. Mar 6 00:58:09.584536 systemd[1]: var-lib-kubelet-pods-d19574a4\x2dd3dc\x2d47a1\x2db931\x2d8728fdbc3418-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 6 00:58:09.662741 kubelet[3346]: I0306 00:58:09.662662 3346 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d19574a4-d3dc-47a1-b931-8728fdbc3418-whisker-backend-key-pair\") on node \"ip-172-31-24-181\" DevicePath \"\"" Mar 6 00:58:09.662741 kubelet[3346]: I0306 00:58:09.662746 3346 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d19574a4-d3dc-47a1-b931-8728fdbc3418-nginx-config\") on node \"ip-172-31-24-181\" DevicePath \"\"" Mar 6 00:58:09.662959 kubelet[3346]: I0306 00:58:09.662775 3346 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kqr4p\" (UniqueName: \"kubernetes.io/projected/d19574a4-d3dc-47a1-b931-8728fdbc3418-kube-api-access-kqr4p\") on node \"ip-172-31-24-181\" DevicePath \"\"" Mar 6 00:58:09.662959 kubelet[3346]: I0306 00:58:09.662798 3346 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d19574a4-d3dc-47a1-b931-8728fdbc3418-whisker-ca-bundle\") on node \"ip-172-31-24-181\" DevicePath \"\"" Mar 6 00:58:10.442678 systemd[1]: Removed slice kubepods-besteffort-podd19574a4_d3dc_47a1_b931_8728fdbc3418.slice - libcontainer container kubepods-besteffort-podd19574a4_d3dc_47a1_b931_8728fdbc3418.slice. Mar 6 00:58:10.575506 systemd[1]: Created slice kubepods-besteffort-podf5caaafd_1559_4c3c_aca2_85734efe5ccd.slice - libcontainer container kubepods-besteffort-podf5caaafd_1559_4c3c_aca2_85734efe5ccd.slice. Mar 6 00:58:10.677346 kubelet[3346]: I0306 00:58:10.677157 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f5caaafd-1559-4c3c-aca2-85734efe5ccd-nginx-config\") pod \"whisker-8484fb69f4-l2rh4\" (UID: \"f5caaafd-1559-4c3c-aca2-85734efe5ccd\") " pod="calico-system/whisker-8484fb69f4-l2rh4" Mar 6 00:58:10.677346 kubelet[3346]: I0306 00:58:10.677222 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5caaafd-1559-4c3c-aca2-85734efe5ccd-whisker-ca-bundle\") pod \"whisker-8484fb69f4-l2rh4\" (UID: \"f5caaafd-1559-4c3c-aca2-85734efe5ccd\") " pod="calico-system/whisker-8484fb69f4-l2rh4" Mar 6 00:58:10.677346 kubelet[3346]: I0306 00:58:10.677292 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7hfc\" (UniqueName: \"kubernetes.io/projected/f5caaafd-1559-4c3c-aca2-85734efe5ccd-kube-api-access-l7hfc\") pod \"whisker-8484fb69f4-l2rh4\" (UID: \"f5caaafd-1559-4c3c-aca2-85734efe5ccd\") " pod="calico-system/whisker-8484fb69f4-l2rh4" Mar 6 00:58:10.678176 kubelet[3346]: I0306 00:58:10.678120 3346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/f5caaafd-1559-4c3c-aca2-85734efe5ccd-whisker-backend-key-pair\") pod \"whisker-8484fb69f4-l2rh4\" (UID: \"f5caaafd-1559-4c3c-aca2-85734efe5ccd\") " pod="calico-system/whisker-8484fb69f4-l2rh4" Mar 6 00:58:10.890417 containerd[2011]: time="2026-03-06T00:58:10.888579523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8484fb69f4-l2rh4,Uid:f5caaafd-1559-4c3c-aca2-85734efe5ccd,Namespace:calico-system,Attempt:0,}" Mar 6 00:58:10.951494 kubelet[3346]: I0306 00:58:10.950492 3346 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19574a4-d3dc-47a1-b931-8728fdbc3418" path="/var/lib/kubelet/pods/d19574a4-d3dc-47a1-b931-8728fdbc3418/volumes" Mar 6 00:58:11.297767 systemd-networkd[1849]: cali91ef9311448: Link UP Mar 6 00:58:11.302749 systemd-networkd[1849]: cali91ef9311448: Gained carrier Mar 6 00:58:11.313747 (udev-worker)[4749]: Network interface NamePolicy= disabled on kernel command line. Mar 6 00:58:11.355529 containerd[2011]: 2026-03-06 00:58:10.966 [ERROR][4650] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 6 00:58:11.355529 containerd[2011]: 2026-03-06 00:58:11.013 [INFO][4650] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--181-k8s-whisker--8484fb69f4--l2rh4-eth0 whisker-8484fb69f4- calico-system f5caaafd-1559-4c3c-aca2-85734efe5ccd 924 0 2026-03-06 00:58:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8484fb69f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-24-181 whisker-8484fb69f4-l2rh4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali91ef9311448 [] [] }} ContainerID="0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" 
Namespace="calico-system" Pod="whisker-8484fb69f4-l2rh4" WorkloadEndpoint="ip--172--31--24--181-k8s-whisker--8484fb69f4--l2rh4-" Mar 6 00:58:11.355529 containerd[2011]: 2026-03-06 00:58:11.013 [INFO][4650] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" Namespace="calico-system" Pod="whisker-8484fb69f4-l2rh4" WorkloadEndpoint="ip--172--31--24--181-k8s-whisker--8484fb69f4--l2rh4-eth0" Mar 6 00:58:11.355529 containerd[2011]: 2026-03-06 00:58:11.153 [INFO][4694] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" HandleID="k8s-pod-network.0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" Workload="ip--172--31--24--181-k8s-whisker--8484fb69f4--l2rh4-eth0" Mar 6 00:58:11.355986 containerd[2011]: 2026-03-06 00:58:11.172 [INFO][4694] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" HandleID="k8s-pod-network.0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" Workload="ip--172--31--24--181-k8s-whisker--8484fb69f4--l2rh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000378530), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-181", "pod":"whisker-8484fb69f4-l2rh4", "timestamp":"2026-03-06 00:58:11.153838636 +0000 UTC"}, Hostname:"ip-172-31-24-181", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000cc840)} Mar 6 00:58:11.355986 containerd[2011]: 2026-03-06 00:58:11.172 [INFO][4694] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 6 00:58:11.355986 containerd[2011]: 2026-03-06 00:58:11.173 [INFO][4694] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 00:58:11.355986 containerd[2011]: 2026-03-06 00:58:11.173 [INFO][4694] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-181' Mar 6 00:58:11.355986 containerd[2011]: 2026-03-06 00:58:11.177 [INFO][4694] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" host="ip-172-31-24-181" Mar 6 00:58:11.355986 containerd[2011]: 2026-03-06 00:58:11.185 [INFO][4694] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-181" Mar 6 00:58:11.355986 containerd[2011]: 2026-03-06 00:58:11.195 [INFO][4694] ipam/ipam.go 526: Trying affinity for 192.168.62.192/26 host="ip-172-31-24-181" Mar 6 00:58:11.355986 containerd[2011]: 2026-03-06 00:58:11.201 [INFO][4694] ipam/ipam.go 160: Attempting to load block cidr=192.168.62.192/26 host="ip-172-31-24-181" Mar 6 00:58:11.355986 containerd[2011]: 2026-03-06 00:58:11.206 [INFO][4694] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ip-172-31-24-181" Mar 6 00:58:11.356946 containerd[2011]: 2026-03-06 00:58:11.206 [INFO][4694] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" host="ip-172-31-24-181" Mar 6 00:58:11.356946 containerd[2011]: 2026-03-06 00:58:11.209 [INFO][4694] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb Mar 6 00:58:11.356946 containerd[2011]: 2026-03-06 00:58:11.216 [INFO][4694] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" host="ip-172-31-24-181" Mar 6 00:58:11.356946 containerd[2011]: 
2026-03-06 00:58:11.231 [INFO][4694] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.62.193/26] block=192.168.62.192/26 handle="k8s-pod-network.0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" host="ip-172-31-24-181" Mar 6 00:58:11.356946 containerd[2011]: 2026-03-06 00:58:11.231 [INFO][4694] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.62.193/26] handle="k8s-pod-network.0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" host="ip-172-31-24-181" Mar 6 00:58:11.356946 containerd[2011]: 2026-03-06 00:58:11.231 [INFO][4694] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 00:58:11.356946 containerd[2011]: 2026-03-06 00:58:11.231 [INFO][4694] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.62.193/26] IPv6=[] ContainerID="0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" HandleID="k8s-pod-network.0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" Workload="ip--172--31--24--181-k8s-whisker--8484fb69f4--l2rh4-eth0" Mar 6 00:58:11.357254 containerd[2011]: 2026-03-06 00:58:11.241 [INFO][4650] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" Namespace="calico-system" Pod="whisker-8484fb69f4-l2rh4" WorkloadEndpoint="ip--172--31--24--181-k8s-whisker--8484fb69f4--l2rh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--181-k8s-whisker--8484fb69f4--l2rh4-eth0", GenerateName:"whisker-8484fb69f4-", Namespace:"calico-system", SelfLink:"", UID:"f5caaafd-1559-4c3c-aca2-85734efe5ccd", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 0, 58, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8484fb69f4", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-181", ContainerID:"", Pod:"whisker-8484fb69f4-l2rh4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.62.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali91ef9311448", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 00:58:11.357254 containerd[2011]: 2026-03-06 00:58:11.241 [INFO][4650] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.193/32] ContainerID="0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" Namespace="calico-system" Pod="whisker-8484fb69f4-l2rh4" WorkloadEndpoint="ip--172--31--24--181-k8s-whisker--8484fb69f4--l2rh4-eth0" Mar 6 00:58:11.357422 containerd[2011]: 2026-03-06 00:58:11.242 [INFO][4650] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali91ef9311448 ContainerID="0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" Namespace="calico-system" Pod="whisker-8484fb69f4-l2rh4" WorkloadEndpoint="ip--172--31--24--181-k8s-whisker--8484fb69f4--l2rh4-eth0" Mar 6 00:58:11.357422 containerd[2011]: 2026-03-06 00:58:11.303 [INFO][4650] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" Namespace="calico-system" Pod="whisker-8484fb69f4-l2rh4" WorkloadEndpoint="ip--172--31--24--181-k8s-whisker--8484fb69f4--l2rh4-eth0" Mar 6 00:58:11.360892 containerd[2011]: 2026-03-06 00:58:11.305 [INFO][4650] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" Namespace="calico-system" Pod="whisker-8484fb69f4-l2rh4" WorkloadEndpoint="ip--172--31--24--181-k8s-whisker--8484fb69f4--l2rh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--181-k8s-whisker--8484fb69f4--l2rh4-eth0", GenerateName:"whisker-8484fb69f4-", Namespace:"calico-system", SelfLink:"", UID:"f5caaafd-1559-4c3c-aca2-85734efe5ccd", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 0, 58, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8484fb69f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-181", ContainerID:"0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb", Pod:"whisker-8484fb69f4-l2rh4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.62.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali91ef9311448", MAC:"72:24:38:2c:71:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 00:58:11.361047 containerd[2011]: 2026-03-06 00:58:11.339 [INFO][4650] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" Namespace="calico-system" Pod="whisker-8484fb69f4-l2rh4" 
WorkloadEndpoint="ip--172--31--24--181-k8s-whisker--8484fb69f4--l2rh4-eth0" Mar 6 00:58:11.411828 containerd[2011]: time="2026-03-06T00:58:11.411752165Z" level=info msg="connecting to shim 0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb" address="unix:///run/containerd/s/c3b646fcd56bd01777d89ebf27175f195979535668edbc04c2a193244dbe14ea" namespace=k8s.io protocol=ttrpc version=3 Mar 6 00:58:11.493087 systemd[1]: Started cri-containerd-0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb.scope - libcontainer container 0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb. Mar 6 00:58:11.648558 containerd[2011]: time="2026-03-06T00:58:11.647909586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8484fb69f4-l2rh4,Uid:f5caaafd-1559-4c3c-aca2-85734efe5ccd,Namespace:calico-system,Attempt:0,} returns sandbox id \"0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb\"" Mar 6 00:58:11.655511 containerd[2011]: time="2026-03-06T00:58:11.655406610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 6 00:58:12.996855 systemd-networkd[1849]: vxlan.calico: Link UP Mar 6 00:58:12.996873 systemd-networkd[1849]: vxlan.calico: Gained carrier Mar 6 00:58:12.998009 (udev-worker)[4748]: Network interface NamePolicy= disabled on kernel command line. 
Mar 6 00:58:13.179239 kubelet[3346]: I0306 00:58:13.179178 3346 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 00:58:13.281881 containerd[2011]: time="2026-03-06T00:58:13.280146967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:13.283750 containerd[2011]: time="2026-03-06T00:58:13.283069183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 6 00:58:13.289780 containerd[2011]: time="2026-03-06T00:58:13.289581175Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:13.307130 systemd-networkd[1849]: cali91ef9311448: Gained IPv6LL Mar 6 00:58:13.327760 containerd[2011]: time="2026-03-06T00:58:13.327610111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:13.342595 containerd[2011]: time="2026-03-06T00:58:13.342447787Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.686800949s" Mar 6 00:58:13.344679 containerd[2011]: time="2026-03-06T00:58:13.344620807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 6 00:58:13.355152 containerd[2011]: time="2026-03-06T00:58:13.355075459Z" level=info msg="CreateContainer within sandbox 
\"0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 6 00:58:13.367570 containerd[2011]: time="2026-03-06T00:58:13.367494355Z" level=info msg="Container c3a0a1af99cfe9723921f3e863627fe5129f08b7a2b77d90a4f13a0e654cbbfa: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:58:13.387758 containerd[2011]: time="2026-03-06T00:58:13.387443239Z" level=info msg="CreateContainer within sandbox \"0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c3a0a1af99cfe9723921f3e863627fe5129f08b7a2b77d90a4f13a0e654cbbfa\"" Mar 6 00:58:13.392540 containerd[2011]: time="2026-03-06T00:58:13.391530775Z" level=info msg="StartContainer for \"c3a0a1af99cfe9723921f3e863627fe5129f08b7a2b77d90a4f13a0e654cbbfa\"" Mar 6 00:58:13.396316 containerd[2011]: time="2026-03-06T00:58:13.396258427Z" level=info msg="connecting to shim c3a0a1af99cfe9723921f3e863627fe5129f08b7a2b77d90a4f13a0e654cbbfa" address="unix:///run/containerd/s/c3b646fcd56bd01777d89ebf27175f195979535668edbc04c2a193244dbe14ea" protocol=ttrpc version=3 Mar 6 00:58:13.459158 systemd[1]: Started cri-containerd-c3a0a1af99cfe9723921f3e863627fe5129f08b7a2b77d90a4f13a0e654cbbfa.scope - libcontainer container c3a0a1af99cfe9723921f3e863627fe5129f08b7a2b77d90a4f13a0e654cbbfa. Mar 6 00:58:13.678277 containerd[2011]: time="2026-03-06T00:58:13.678100748Z" level=info msg="StartContainer for \"c3a0a1af99cfe9723921f3e863627fe5129f08b7a2b77d90a4f13a0e654cbbfa\" returns successfully" Mar 6 00:58:13.685883 containerd[2011]: time="2026-03-06T00:58:13.685819509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 6 00:58:14.137653 systemd-networkd[1849]: vxlan.calico: Gained IPv6LL Mar 6 00:58:14.793840 systemd[1]: Started sshd@7-172.31.24.181:22-68.220.241.50:39854.service - OpenSSH per-connection server daemon (68.220.241.50:39854). 
Mar 6 00:58:15.280695 sshd[5020]: Accepted publickey for core from 68.220.241.50 port 39854 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:58:15.289032 sshd-session[5020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:58:15.318041 systemd-logind[1977]: New session 8 of user core. Mar 6 00:58:15.339834 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 6 00:58:15.919403 sshd[5027]: Connection closed by 68.220.241.50 port 39854 Mar 6 00:58:15.919902 sshd-session[5020]: pam_unix(sshd:session): session closed for user core Mar 6 00:58:15.937436 systemd[1]: sshd@7-172.31.24.181:22-68.220.241.50:39854.service: Deactivated successfully. Mar 6 00:58:15.953356 systemd[1]: session-8.scope: Deactivated successfully. Mar 6 00:58:15.959913 systemd-logind[1977]: Session 8 logged out. Waiting for processes to exit. Mar 6 00:58:15.969186 systemd-logind[1977]: Removed session 8. Mar 6 00:58:16.179603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount656279254.mount: Deactivated successfully. 
Mar 6 00:58:16.204584 containerd[2011]: time="2026-03-06T00:58:16.203516385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:16.205863 containerd[2011]: time="2026-03-06T00:58:16.205601613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 6 00:58:16.206445 containerd[2011]: time="2026-03-06T00:58:16.206381697Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:16.219070 containerd[2011]: time="2026-03-06T00:58:16.218988357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:16.221593 containerd[2011]: time="2026-03-06T00:58:16.221517969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.535627084s" Mar 6 00:58:16.221904 containerd[2011]: time="2026-03-06T00:58:16.221864793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 6 00:58:16.232014 containerd[2011]: time="2026-03-06T00:58:16.230981241Z" level=info msg="CreateContainer within sandbox \"0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 6 00:58:16.245602 
containerd[2011]: time="2026-03-06T00:58:16.245530173Z" level=info msg="Container e62deba4d14a0fff2ffe8df7676240cd360c2447afb6ae6653ffc549214a09f2: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:58:16.265162 containerd[2011]: time="2026-03-06T00:58:16.265082997Z" level=info msg="CreateContainer within sandbox \"0f74eeb3d9e372d29ab820cbc49b1edfa578252233bc49aa7254eb8e083c79cb\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"e62deba4d14a0fff2ffe8df7676240cd360c2447afb6ae6653ffc549214a09f2\"" Mar 6 00:58:16.266505 containerd[2011]: time="2026-03-06T00:58:16.266400225Z" level=info msg="StartContainer for \"e62deba4d14a0fff2ffe8df7676240cd360c2447afb6ae6653ffc549214a09f2\"" Mar 6 00:58:16.271043 containerd[2011]: time="2026-03-06T00:58:16.270868401Z" level=info msg="connecting to shim e62deba4d14a0fff2ffe8df7676240cd360c2447afb6ae6653ffc549214a09f2" address="unix:///run/containerd/s/c3b646fcd56bd01777d89ebf27175f195979535668edbc04c2a193244dbe14ea" protocol=ttrpc version=3 Mar 6 00:58:16.284868 ntpd[2213]: Listen normally on 6 vxlan.calico 192.168.62.192:123 Mar 6 00:58:16.284980 ntpd[2213]: Listen normally on 7 cali91ef9311448 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 6 00:58:16.286729 ntpd[2213]: 6 Mar 00:58:16 ntpd[2213]: Listen normally on 6 vxlan.calico 192.168.62.192:123 Mar 6 00:58:16.286729 ntpd[2213]: 6 Mar 00:58:16 ntpd[2213]: Listen normally on 7 cali91ef9311448 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 6 00:58:16.286729 ntpd[2213]: 6 Mar 00:58:16 ntpd[2213]: Listen normally on 8 vxlan.calico [fe80::64e5:44ff:fe78:c944%5]:123 Mar 6 00:58:16.285033 ntpd[2213]: Listen normally on 8 vxlan.calico [fe80::64e5:44ff:fe78:c944%5]:123 Mar 6 00:58:16.332083 systemd[1]: Started cri-containerd-e62deba4d14a0fff2ffe8df7676240cd360c2447afb6ae6653ffc549214a09f2.scope - libcontainer container e62deba4d14a0fff2ffe8df7676240cd360c2447afb6ae6653ffc549214a09f2. 
Mar 6 00:58:16.451127 containerd[2011]: time="2026-03-06T00:58:16.450921730Z" level=info msg="StartContainer for \"e62deba4d14a0fff2ffe8df7676240cd360c2447afb6ae6653ffc549214a09f2\" returns successfully" Mar 6 00:58:16.523175 kubelet[3346]: I0306 00:58:16.522734 3346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8484fb69f4-l2rh4" podStartSLOduration=1.951883604 podStartE2EDuration="6.522707927s" podCreationTimestamp="2026-03-06 00:58:10 +0000 UTC" firstStartedPulling="2026-03-06 00:58:11.653090838 +0000 UTC m=+53.013412404" lastFinishedPulling="2026-03-06 00:58:16.223915149 +0000 UTC m=+57.584236727" observedRunningTime="2026-03-06 00:58:16.521935847 +0000 UTC m=+57.882257425" watchObservedRunningTime="2026-03-06 00:58:16.522707927 +0000 UTC m=+57.883029493" Mar 6 00:58:19.937491 containerd[2011]: time="2026-03-06T00:58:19.937024912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cdfc896d5-kqp6p,Uid:670e25a3-881a-4bd2-bf8f-9378935e262b,Namespace:calico-system,Attempt:0,}" Mar 6 00:58:20.168703 systemd-networkd[1849]: cali2087e620c5e: Link UP Mar 6 00:58:20.169770 systemd-networkd[1849]: cali2087e620c5e: Gained carrier Mar 6 00:58:20.180864 (udev-worker)[5112]: Network interface NamePolicy= disabled on kernel command line. 
Mar 6 00:58:20.201382 containerd[2011]: 2026-03-06 00:58:20.021 [INFO][5092] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--kqp6p-eth0 calico-apiserver-5cdfc896d5- calico-system 670e25a3-881a-4bd2-bf8f-9378935e262b 872 0 2026-03-06 00:57:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cdfc896d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-181 calico-apiserver-5cdfc896d5-kqp6p eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali2087e620c5e [] [] }} ContainerID="6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" Namespace="calico-system" Pod="calico-apiserver-5cdfc896d5-kqp6p" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--kqp6p-" Mar 6 00:58:20.201382 containerd[2011]: 2026-03-06 00:58:20.022 [INFO][5092] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" Namespace="calico-system" Pod="calico-apiserver-5cdfc896d5-kqp6p" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--kqp6p-eth0" Mar 6 00:58:20.201382 containerd[2011]: 2026-03-06 00:58:20.076 [INFO][5104] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" HandleID="k8s-pod-network.6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" Workload="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--kqp6p-eth0" Mar 6 00:58:20.201806 containerd[2011]: 2026-03-06 00:58:20.093 [INFO][5104] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" 
HandleID="k8s-pod-network.6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" Workload="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--kqp6p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e27c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-181", "pod":"calico-apiserver-5cdfc896d5-kqp6p", "timestamp":"2026-03-06 00:58:20.076904136 +0000 UTC"}, Hostname:"ip-172-31-24-181", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002d0420)} Mar 6 00:58:20.201806 containerd[2011]: 2026-03-06 00:58:20.093 [INFO][5104] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 00:58:20.201806 containerd[2011]: 2026-03-06 00:58:20.093 [INFO][5104] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 00:58:20.201806 containerd[2011]: 2026-03-06 00:58:20.093 [INFO][5104] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-181' Mar 6 00:58:20.201806 containerd[2011]: 2026-03-06 00:58:20.097 [INFO][5104] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" host="ip-172-31-24-181" Mar 6 00:58:20.201806 containerd[2011]: 2026-03-06 00:58:20.114 [INFO][5104] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-181" Mar 6 00:58:20.201806 containerd[2011]: 2026-03-06 00:58:20.125 [INFO][5104] ipam/ipam.go 526: Trying affinity for 192.168.62.192/26 host="ip-172-31-24-181" Mar 6 00:58:20.201806 containerd[2011]: 2026-03-06 00:58:20.129 [INFO][5104] ipam/ipam.go 160: Attempting to load block cidr=192.168.62.192/26 host="ip-172-31-24-181" Mar 6 00:58:20.201806 containerd[2011]: 2026-03-06 00:58:20.133 [INFO][5104] ipam/ipam.go 237: Affinity is confirmed and block has been 
loaded cidr=192.168.62.192/26 host="ip-172-31-24-181" Mar 6 00:58:20.202362 containerd[2011]: 2026-03-06 00:58:20.133 [INFO][5104] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" host="ip-172-31-24-181" Mar 6 00:58:20.202362 containerd[2011]: 2026-03-06 00:58:20.136 [INFO][5104] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4 Mar 6 00:58:20.202362 containerd[2011]: 2026-03-06 00:58:20.144 [INFO][5104] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" host="ip-172-31-24-181" Mar 6 00:58:20.202362 containerd[2011]: 2026-03-06 00:58:20.156 [INFO][5104] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.62.194/26] block=192.168.62.192/26 handle="k8s-pod-network.6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" host="ip-172-31-24-181" Mar 6 00:58:20.202362 containerd[2011]: 2026-03-06 00:58:20.156 [INFO][5104] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.62.194/26] handle="k8s-pod-network.6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" host="ip-172-31-24-181" Mar 6 00:58:20.202362 containerd[2011]: 2026-03-06 00:58:20.157 [INFO][5104] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 6 00:58:20.202362 containerd[2011]: 2026-03-06 00:58:20.157 [INFO][5104] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.62.194/26] IPv6=[] ContainerID="6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" HandleID="k8s-pod-network.6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" Workload="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--kqp6p-eth0"
Mar 6 00:58:20.206639 containerd[2011]: 2026-03-06 00:58:20.162 [INFO][5092] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" Namespace="calico-system" Pod="calico-apiserver-5cdfc896d5-kqp6p" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--kqp6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--kqp6p-eth0", GenerateName:"calico-apiserver-5cdfc896d5-", Namespace:"calico-system", SelfLink:"", UID:"670e25a3-881a-4bd2-bf8f-9378935e262b", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 0, 57, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cdfc896d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-181", ContainerID:"", Pod:"calico-apiserver-5cdfc896d5-kqp6p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2087e620c5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 00:58:20.206795 containerd[2011]: 2026-03-06 00:58:20.162 [INFO][5092] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.194/32] ContainerID="6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" Namespace="calico-system" Pod="calico-apiserver-5cdfc896d5-kqp6p" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--kqp6p-eth0"
Mar 6 00:58:20.206795 containerd[2011]: 2026-03-06 00:58:20.162 [INFO][5092] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2087e620c5e ContainerID="6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" Namespace="calico-system" Pod="calico-apiserver-5cdfc896d5-kqp6p" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--kqp6p-eth0"
Mar 6 00:58:20.206795 containerd[2011]: 2026-03-06 00:58:20.167 [INFO][5092] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" Namespace="calico-system" Pod="calico-apiserver-5cdfc896d5-kqp6p" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--kqp6p-eth0"
Mar 6 00:58:20.206953 containerd[2011]: 2026-03-06 00:58:20.168 [INFO][5092] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" Namespace="calico-system" Pod="calico-apiserver-5cdfc896d5-kqp6p" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--kqp6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--kqp6p-eth0", GenerateName:"calico-apiserver-5cdfc896d5-", Namespace:"calico-system", SelfLink:"", UID:"670e25a3-881a-4bd2-bf8f-9378935e262b", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 0, 57, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cdfc896d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-181", ContainerID:"6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4", Pod:"calico-apiserver-5cdfc896d5-kqp6p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2087e620c5e", MAC:"12:70:f0:e8:57:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 00:58:20.207079 containerd[2011]: 2026-03-06 00:58:20.187 [INFO][5092] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" Namespace="calico-system" Pod="calico-apiserver-5cdfc896d5-kqp6p" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--kqp6p-eth0"
Mar 6 00:58:20.259907 containerd[2011]: time="2026-03-06T00:58:20.259682485Z" level=info msg="connecting to shim 6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4" address="unix:///run/containerd/s/0c595d0d3449965c7f668ca7d69931f3c78bd8314af93df0f21866ff3100efef" namespace=k8s.io protocol=ttrpc version=3
Mar 6 00:58:20.340865 systemd[1]: Started cri-containerd-6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4.scope - libcontainer container 6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4.
Mar 6 00:58:20.466930 containerd[2011]: time="2026-03-06T00:58:20.466541810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cdfc896d5-kqp6p,Uid:670e25a3-881a-4bd2-bf8f-9378935e262b,Namespace:calico-system,Attempt:0,} returns sandbox id \"6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4\""
Mar 6 00:58:20.473902 containerd[2011]: time="2026-03-06T00:58:20.473514170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 6 00:58:20.938368 containerd[2011]: time="2026-03-06T00:58:20.937762193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-qtd2z,Uid:0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9,Namespace:calico-system,Attempt:0,}"
Mar 6 00:58:21.020328 systemd[1]: Started sshd@8-172.31.24.181:22-68.220.241.50:39866.service - OpenSSH per-connection server daemon (68.220.241.50:39866).
Mar 6 00:58:21.211106 systemd-networkd[1849]: calieef0e81c6a4: Link UP
Mar 6 00:58:21.213288 systemd-networkd[1849]: calieef0e81c6a4: Gained carrier
Mar 6 00:58:21.253821 containerd[2011]: 2026-03-06 00:58:21.039 [INFO][5179] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--181-k8s-goldmane--5b85766d88--qtd2z-eth0 goldmane-5b85766d88- calico-system 0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9 873 0 2026-03-06 00:57:45 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-24-181 goldmane-5b85766d88-qtd2z eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calieef0e81c6a4 [] [] }} ContainerID="ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" Namespace="calico-system" Pod="goldmane-5b85766d88-qtd2z" WorkloadEndpoint="ip--172--31--24--181-k8s-goldmane--5b85766d88--qtd2z-"
Mar 6 00:58:21.253821 containerd[2011]: 2026-03-06 00:58:21.040 [INFO][5179] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" Namespace="calico-system" Pod="goldmane-5b85766d88-qtd2z" WorkloadEndpoint="ip--172--31--24--181-k8s-goldmane--5b85766d88--qtd2z-eth0"
Mar 6 00:58:21.253821 containerd[2011]: 2026-03-06 00:58:21.109 [INFO][5193] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" HandleID="k8s-pod-network.ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" Workload="ip--172--31--24--181-k8s-goldmane--5b85766d88--qtd2z-eth0"
Mar 6 00:58:21.254180 containerd[2011]: 2026-03-06 00:58:21.129 [INFO][5193] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" HandleID="k8s-pod-network.ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" Workload="ip--172--31--24--181-k8s-goldmane--5b85766d88--qtd2z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000606350), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-181", "pod":"goldmane-5b85766d88-qtd2z", "timestamp":"2026-03-06 00:58:21.109356949 +0000 UTC"}, Hostname:"ip-172-31-24-181", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400053a580)}
Mar 6 00:58:21.254180 containerd[2011]: 2026-03-06 00:58:21.129 [INFO][5193] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 6 00:58:21.254180 containerd[2011]: 2026-03-06 00:58:21.130 [INFO][5193] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 6 00:58:21.254180 containerd[2011]: 2026-03-06 00:58:21.130 [INFO][5193] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-181'
Mar 6 00:58:21.254180 containerd[2011]: 2026-03-06 00:58:21.136 [INFO][5193] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" host="ip-172-31-24-181"
Mar 6 00:58:21.254180 containerd[2011]: 2026-03-06 00:58:21.146 [INFO][5193] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-181"
Mar 6 00:58:21.254180 containerd[2011]: 2026-03-06 00:58:21.155 [INFO][5193] ipam/ipam.go 526: Trying affinity for 192.168.62.192/26 host="ip-172-31-24-181"
Mar 6 00:58:21.254180 containerd[2011]: 2026-03-06 00:58:21.160 [INFO][5193] ipam/ipam.go 160: Attempting to load block cidr=192.168.62.192/26 host="ip-172-31-24-181"
Mar 6 00:58:21.254180 containerd[2011]: 2026-03-06 00:58:21.168 [INFO][5193] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ip-172-31-24-181"
Mar 6 00:58:21.254752 containerd[2011]: 2026-03-06 00:58:21.168 [INFO][5193] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" host="ip-172-31-24-181"
Mar 6 00:58:21.254752 containerd[2011]: 2026-03-06 00:58:21.171 [INFO][5193] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18
Mar 6 00:58:21.254752 containerd[2011]: 2026-03-06 00:58:21.180 [INFO][5193] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" host="ip-172-31-24-181"
Mar 6 00:58:21.254752 containerd[2011]: 2026-03-06 00:58:21.197 [INFO][5193] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.62.195/26] block=192.168.62.192/26 handle="k8s-pod-network.ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" host="ip-172-31-24-181"
Mar 6 00:58:21.254752 containerd[2011]: 2026-03-06 00:58:21.197 [INFO][5193] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.62.195/26] handle="k8s-pod-network.ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" host="ip-172-31-24-181"
Mar 6 00:58:21.254752 containerd[2011]: 2026-03-06 00:58:21.197 [INFO][5193] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 6 00:58:21.254752 containerd[2011]: 2026-03-06 00:58:21.197 [INFO][5193] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.62.195/26] IPv6=[] ContainerID="ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" HandleID="k8s-pod-network.ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" Workload="ip--172--31--24--181-k8s-goldmane--5b85766d88--qtd2z-eth0"
Mar 6 00:58:21.255108 containerd[2011]: 2026-03-06 00:58:21.203 [INFO][5179] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" Namespace="calico-system" Pod="goldmane-5b85766d88-qtd2z" WorkloadEndpoint="ip--172--31--24--181-k8s-goldmane--5b85766d88--qtd2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--181-k8s-goldmane--5b85766d88--qtd2z-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 0, 57, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-181", ContainerID:"", Pod:"goldmane-5b85766d88-qtd2z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calieef0e81c6a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 00:58:21.255108 containerd[2011]: 2026-03-06 00:58:21.203 [INFO][5179] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.195/32] ContainerID="ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" Namespace="calico-system" Pod="goldmane-5b85766d88-qtd2z" WorkloadEndpoint="ip--172--31--24--181-k8s-goldmane--5b85766d88--qtd2z-eth0"
Mar 6 00:58:21.255351 containerd[2011]: 2026-03-06 00:58:21.203 [INFO][5179] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieef0e81c6a4 ContainerID="ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" Namespace="calico-system" Pod="goldmane-5b85766d88-qtd2z" WorkloadEndpoint="ip--172--31--24--181-k8s-goldmane--5b85766d88--qtd2z-eth0"
Mar 6 00:58:21.255351 containerd[2011]: 2026-03-06 00:58:21.212 [INFO][5179] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" Namespace="calico-system" Pod="goldmane-5b85766d88-qtd2z" WorkloadEndpoint="ip--172--31--24--181-k8s-goldmane--5b85766d88--qtd2z-eth0"
Mar 6 00:58:21.257534 containerd[2011]: 2026-03-06 00:58:21.219 [INFO][5179] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" Namespace="calico-system" Pod="goldmane-5b85766d88-qtd2z" WorkloadEndpoint="ip--172--31--24--181-k8s-goldmane--5b85766d88--qtd2z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--181-k8s-goldmane--5b85766d88--qtd2z-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 0, 57, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-181", ContainerID:"ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18", Pod:"goldmane-5b85766d88-qtd2z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.62.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calieef0e81c6a4", MAC:"22:1e:d2:1a:5a:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 00:58:21.257808 containerd[2011]: 2026-03-06 00:58:21.241 [INFO][5179] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" Namespace="calico-system" Pod="goldmane-5b85766d88-qtd2z" WorkloadEndpoint="ip--172--31--24--181-k8s-goldmane--5b85766d88--qtd2z-eth0"
Mar 6 00:58:21.322130 containerd[2011]: time="2026-03-06T00:58:21.321883682Z" level=info msg="connecting to shim ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18" address="unix:///run/containerd/s/d1bf536d07d074ea12bdb040f0a975cee767cfe991e29190e1b1a766ad50be9e" namespace=k8s.io protocol=ttrpc version=3
Mar 6 00:58:21.370593 systemd-networkd[1849]: cali2087e620c5e: Gained IPv6LL
Mar 6 00:58:21.425997 systemd[1]: Started cri-containerd-ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18.scope - libcontainer container ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18.
Mar 6 00:58:21.532176 sshd[5191]: Accepted publickey for core from 68.220.241.50 port 39866 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:21.539323 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:21.545842 containerd[2011]: time="2026-03-06T00:58:21.545756968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-qtd2z,Uid:0a33d7ba-f1d0-47ea-a80d-d50ce24ae1a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18\""
Mar 6 00:58:21.559417 systemd-logind[1977]: New session 9 of user core.
Mar 6 00:58:21.572855 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 6 00:58:21.937477 containerd[2011]: time="2026-03-06T00:58:21.936838710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7p58v,Uid:36d8fa48-f26e-449b-b7be-412a82e9724f,Namespace:kube-system,Attempt:0,}"
Mar 6 00:58:22.029850 sshd[5265]: Connection closed by 68.220.241.50 port 39866
Mar 6 00:58:22.030872 sshd-session[5191]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:22.043576 systemd[1]: session-9.scope: Deactivated successfully.
Mar 6 00:58:22.047489 systemd[1]: sshd@8-172.31.24.181:22-68.220.241.50:39866.service: Deactivated successfully.
Mar 6 00:58:22.063175 systemd-logind[1977]: Session 9 logged out. Waiting for processes to exit.
Mar 6 00:58:22.070521 systemd-logind[1977]: Removed session 9.
Mar 6 00:58:22.404735 systemd-networkd[1849]: calidfb80224adb: Link UP
Mar 6 00:58:22.414687 systemd-networkd[1849]: calidfb80224adb: Gained carrier
Mar 6 00:58:22.476492 containerd[2011]: 2026-03-06 00:58:22.128 [INFO][5275] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--181-k8s-coredns--674b8bbfcf--7p58v-eth0 coredns-674b8bbfcf- kube-system 36d8fa48-f26e-449b-b7be-412a82e9724f 871 0 2026-03-06 00:57:23 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-181 coredns-674b8bbfcf-7p58v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidfb80224adb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" Namespace="kube-system" Pod="coredns-674b8bbfcf-7p58v" WorkloadEndpoint="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7p58v-"
Mar 6 00:58:22.476492 containerd[2011]: 2026-03-06 00:58:22.129 [INFO][5275] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" Namespace="kube-system" Pod="coredns-674b8bbfcf-7p58v" WorkloadEndpoint="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7p58v-eth0"
Mar 6 00:58:22.476492 containerd[2011]: 2026-03-06 00:58:22.246 [INFO][5294] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" HandleID="k8s-pod-network.c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" Workload="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7p58v-eth0"
Mar 6 00:58:22.477233 containerd[2011]: 2026-03-06 00:58:22.269 [INFO][5294] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" HandleID="k8s-pod-network.c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" Workload="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7p58v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000374a10), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-181", "pod":"coredns-674b8bbfcf-7p58v", "timestamp":"2026-03-06 00:58:22.246559131 +0000 UTC"}, Hostname:"ip-172-31-24-181", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400046cb00)}
Mar 6 00:58:22.477233 containerd[2011]: 2026-03-06 00:58:22.269 [INFO][5294] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 6 00:58:22.477233 containerd[2011]: 2026-03-06 00:58:22.269 [INFO][5294] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 6 00:58:22.477233 containerd[2011]: 2026-03-06 00:58:22.270 [INFO][5294] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-181'
Mar 6 00:58:22.477233 containerd[2011]: 2026-03-06 00:58:22.275 [INFO][5294] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" host="ip-172-31-24-181"
Mar 6 00:58:22.477233 containerd[2011]: 2026-03-06 00:58:22.288 [INFO][5294] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-181"
Mar 6 00:58:22.477233 containerd[2011]: 2026-03-06 00:58:22.305 [INFO][5294] ipam/ipam.go 526: Trying affinity for 192.168.62.192/26 host="ip-172-31-24-181"
Mar 6 00:58:22.477233 containerd[2011]: 2026-03-06 00:58:22.315 [INFO][5294] ipam/ipam.go 160: Attempting to load block cidr=192.168.62.192/26 host="ip-172-31-24-181"
Mar 6 00:58:22.477233 containerd[2011]: 2026-03-06 00:58:22.330 [INFO][5294] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ip-172-31-24-181"
Mar 6 00:58:22.478429 containerd[2011]: 2026-03-06 00:58:22.331 [INFO][5294] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" host="ip-172-31-24-181"
Mar 6 00:58:22.478429 containerd[2011]: 2026-03-06 00:58:22.336 [INFO][5294] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed
Mar 6 00:58:22.478429 containerd[2011]: 2026-03-06 00:58:22.348 [INFO][5294] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" host="ip-172-31-24-181"
Mar 6 00:58:22.478429 containerd[2011]: 2026-03-06 00:58:22.366 [INFO][5294] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.62.196/26] block=192.168.62.192/26 handle="k8s-pod-network.c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" host="ip-172-31-24-181"
Mar 6 00:58:22.478429 containerd[2011]: 2026-03-06 00:58:22.367 [INFO][5294] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.62.196/26] handle="k8s-pod-network.c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" host="ip-172-31-24-181"
Mar 6 00:58:22.478429 containerd[2011]: 2026-03-06 00:58:22.367 [INFO][5294] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 6 00:58:22.478429 containerd[2011]: 2026-03-06 00:58:22.368 [INFO][5294] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.62.196/26] IPv6=[] ContainerID="c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" HandleID="k8s-pod-network.c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" Workload="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7p58v-eth0"
Mar 6 00:58:22.479314 containerd[2011]: 2026-03-06 00:58:22.388 [INFO][5275] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" Namespace="kube-system" Pod="coredns-674b8bbfcf-7p58v" WorkloadEndpoint="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7p58v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--181-k8s-coredns--674b8bbfcf--7p58v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"36d8fa48-f26e-449b-b7be-412a82e9724f", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 0, 57, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-181", ContainerID:"", Pod:"coredns-674b8bbfcf-7p58v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidfb80224adb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 00:58:22.479314 containerd[2011]: 2026-03-06 00:58:22.388 [INFO][5275] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.196/32] ContainerID="c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" Namespace="kube-system" Pod="coredns-674b8bbfcf-7p58v" WorkloadEndpoint="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7p58v-eth0"
Mar 6 00:58:22.479314 containerd[2011]: 2026-03-06 00:58:22.390 [INFO][5275] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidfb80224adb ContainerID="c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" Namespace="kube-system" Pod="coredns-674b8bbfcf-7p58v" WorkloadEndpoint="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7p58v-eth0"
Mar 6 00:58:22.479314 containerd[2011]: 2026-03-06 00:58:22.418 [INFO][5275] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" Namespace="kube-system" Pod="coredns-674b8bbfcf-7p58v" WorkloadEndpoint="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7p58v-eth0"
Mar 6 00:58:22.479314 containerd[2011]: 2026-03-06 00:58:22.422 [INFO][5275] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" Namespace="kube-system" Pod="coredns-674b8bbfcf-7p58v" WorkloadEndpoint="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7p58v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--181-k8s-coredns--674b8bbfcf--7p58v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"36d8fa48-f26e-449b-b7be-412a82e9724f", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 0, 57, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-181", ContainerID:"c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed", Pod:"coredns-674b8bbfcf-7p58v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidfb80224adb", MAC:"aa:1c:c0:ea:2f:9e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 00:58:22.479314 containerd[2011]: 2026-03-06 00:58:22.460 [INFO][5275] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" Namespace="kube-system" Pod="coredns-674b8bbfcf-7p58v" WorkloadEndpoint="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7p58v-eth0"
Mar 6 00:58:22.584811 containerd[2011]: time="2026-03-06T00:58:22.584733377Z" level=info msg="connecting to shim c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed" address="unix:///run/containerd/s/c2c980123497e7e966a64b8f823c83381a2ea4b8c60640aa7b669f3c786f9e4b" namespace=k8s.io protocol=ttrpc version=3
Mar 6 00:58:22.716632 systemd-networkd[1849]: calieef0e81c6a4: Gained IPv6LL
Mar 6 00:58:22.720310 systemd[1]: Started cri-containerd-c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed.scope - libcontainer container c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed.
Mar 6 00:58:22.889036 containerd[2011]: time="2026-03-06T00:58:22.888978774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7p58v,Uid:36d8fa48-f26e-449b-b7be-412a82e9724f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed\""
Mar 6 00:58:22.907354 containerd[2011]: time="2026-03-06T00:58:22.907273770Z" level=info msg="CreateContainer within sandbox \"c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 6 00:58:22.935195 containerd[2011]: time="2026-03-06T00:58:22.935100630Z" level=info msg="Container f50f3f460ddb6358b5633f2cf57c980f2eb893760de6b93c7a4b07196ea30af8: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:58:22.940131 containerd[2011]: time="2026-03-06T00:58:22.940034683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b5646b546-vb6xx,Uid:69311b7a-6d88-4316-8584-5f01cf4ac2b4,Namespace:calico-system,Attempt:0,}"
Mar 6 00:58:22.954585 containerd[2011]: time="2026-03-06T00:58:22.953828167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qg2pp,Uid:608e60b2-3521-42e2-85c9-1a14ee77e2b1,Namespace:calico-system,Attempt:0,}"
Mar 6 00:58:22.962351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount412532786.mount: Deactivated successfully.
Mar 6 00:58:22.995061 containerd[2011]: time="2026-03-06T00:58:22.993323275Z" level=info msg="CreateContainer within sandbox \"c0fdbed69c138dba6c74817c00a828846657b8575d2bee47d15483879c9cb1ed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f50f3f460ddb6358b5633f2cf57c980f2eb893760de6b93c7a4b07196ea30af8\""
Mar 6 00:58:23.000851 containerd[2011]: time="2026-03-06T00:58:23.000619791Z" level=info msg="StartContainer for \"f50f3f460ddb6358b5633f2cf57c980f2eb893760de6b93c7a4b07196ea30af8\""
Mar 6 00:58:23.005776 containerd[2011]: time="2026-03-06T00:58:23.005431731Z" level=info msg="connecting to shim f50f3f460ddb6358b5633f2cf57c980f2eb893760de6b93c7a4b07196ea30af8" address="unix:///run/containerd/s/c2c980123497e7e966a64b8f823c83381a2ea4b8c60640aa7b669f3c786f9e4b" protocol=ttrpc version=3
Mar 6 00:58:23.139035 systemd[1]: Started cri-containerd-f50f3f460ddb6358b5633f2cf57c980f2eb893760de6b93c7a4b07196ea30af8.scope - libcontainer container f50f3f460ddb6358b5633f2cf57c980f2eb893760de6b93c7a4b07196ea30af8.
Mar 6 00:58:23.441061 containerd[2011]: time="2026-03-06T00:58:23.440743589Z" level=info msg="StartContainer for \"f50f3f460ddb6358b5633f2cf57c980f2eb893760de6b93c7a4b07196ea30af8\" returns successfully"
Mar 6 00:58:23.944296 containerd[2011]: time="2026-03-06T00:58:23.944209987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cdfc896d5-gvch9,Uid:2b510939-855d-4b9d-85ce-7833d3ee5cac,Namespace:calico-system,Attempt:0,}"
Mar 6 00:58:23.945811 containerd[2011]: time="2026-03-06T00:58:23.945521659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7d46d,Uid:bce0721f-75f9-4109-b6db-0d0ca49740fe,Namespace:kube-system,Attempt:0,}"
Mar 6 00:58:23.992144 systemd-networkd[1849]: calid115d677af6: Link UP
Mar 6 00:58:24.006342 systemd-networkd[1849]: calid115d677af6: Gained carrier
Mar 6 00:58:24.082776 kubelet[3346]: I0306 00:58:24.081167 3346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7p58v" podStartSLOduration=61.081138856 podStartE2EDuration="1m1.081138856s" podCreationTimestamp="2026-03-06 00:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 00:58:23.637777662 +0000 UTC m=+64.998099264" watchObservedRunningTime="2026-03-06 00:58:24.081138856 +0000 UTC m=+65.441460422"
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.266 [INFO][5367] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--181-k8s-calico--kube--controllers--7b5646b546--vb6xx-eth0 calico-kube-controllers-7b5646b546- calico-system 69311b7a-6d88-4316-8584-5f01cf4ac2b4 869 0 2026-03-06 00:57:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b5646b546 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-24-181 calico-kube-controllers-7b5646b546-vb6xx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid115d677af6 [] [] }} ContainerID="b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" Namespace="calico-system" Pod="calico-kube-controllers-7b5646b546-vb6xx" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--kube--controllers--7b5646b546--vb6xx-"
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.267 [INFO][5367] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" Namespace="calico-system" Pod="calico-kube-controllers-7b5646b546-vb6xx" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--kube--controllers--7b5646b546--vb6xx-eth0"
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.602 [INFO][5414] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" HandleID="k8s-pod-network.b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" Workload="ip--172--31--24--181-k8s-calico--kube--controllers--7b5646b546--vb6xx-eth0"
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.689 [INFO][5414] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" HandleID="k8s-pod-network.b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" Workload="ip--172--31--24--181-k8s-calico--kube--controllers--7b5646b546--vb6xx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035f890), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-181", "pod":"calico-kube-controllers-7b5646b546-vb6xx", "timestamp":"2026-03-06 00:58:23.599642802 +0000 UTC"}, Hostname:"ip-172-31-24-181", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000fcdc0)}
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.689 [INFO][5414] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.690 [INFO][5414] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.690 [INFO][5414] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-181'
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.707 [INFO][5414] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" host="ip-172-31-24-181"
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.770 [INFO][5414] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-181"
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.802 [INFO][5414] ipam/ipam.go 526: Trying affinity for 192.168.62.192/26 host="ip-172-31-24-181"
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.817 [INFO][5414] ipam/ipam.go 160: Attempting to load block cidr=192.168.62.192/26 host="ip-172-31-24-181"
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.848 [INFO][5414] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ip-172-31-24-181"
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.849 [INFO][5414] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" host="ip-172-31-24-181"
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.867 [INFO][5414] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.891 [INFO][5414] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" host="ip-172-31-24-181"
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.925 [INFO][5414] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.62.197/26] block=192.168.62.192/26 handle="k8s-pod-network.b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" host="ip-172-31-24-181"
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.925 [INFO][5414] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.62.197/26] handle="k8s-pod-network.b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" host="ip-172-31-24-181"
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.926 [INFO][5414] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 6 00:58:24.102843 containerd[2011]: 2026-03-06 00:58:23.926 [INFO][5414] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.62.197/26] IPv6=[] ContainerID="b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" HandleID="k8s-pod-network.b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" Workload="ip--172--31--24--181-k8s-calico--kube--controllers--7b5646b546--vb6xx-eth0"
Mar 6 00:58:24.107393 containerd[2011]: 2026-03-06 00:58:23.948 [INFO][5367] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" Namespace="calico-system" Pod="calico-kube-controllers-7b5646b546-vb6xx" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--kube--controllers--7b5646b546--vb6xx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--181-k8s-calico--kube--controllers--7b5646b546--vb6xx-eth0", GenerateName:"calico-kube-controllers-7b5646b546-", Namespace:"calico-system", SelfLink:"", UID:"69311b7a-6d88-4316-8584-5f01cf4ac2b4", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 0, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b5646b546", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-181", ContainerID:"", Pod:"calico-kube-controllers-7b5646b546-vb6xx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid115d677af6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 00:58:24.107393 containerd[2011]: 2026-03-06 00:58:23.949 [INFO][5367] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.197/32] ContainerID="b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" Namespace="calico-system" Pod="calico-kube-controllers-7b5646b546-vb6xx" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--kube--controllers--7b5646b546--vb6xx-eth0"
Mar 6 00:58:24.107393 containerd[2011]: 2026-03-06 00:58:23.949 [INFO][5367] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid115d677af6 ContainerID="b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" Namespace="calico-system" Pod="calico-kube-controllers-7b5646b546-vb6xx" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--kube--controllers--7b5646b546--vb6xx-eth0"
Mar 6 00:58:24.107393 containerd[2011]: 2026-03-06 00:58:24.022 [INFO][5367] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" Namespace="calico-system" Pod="calico-kube-controllers-7b5646b546-vb6xx" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--kube--controllers--7b5646b546--vb6xx-eth0"
Mar 6 00:58:24.107393 containerd[2011]: 2026-03-06 00:58:24.035 [INFO][5367] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" Namespace="calico-system" Pod="calico-kube-controllers-7b5646b546-vb6xx" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--kube--controllers--7b5646b546--vb6xx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--181-k8s-calico--kube--controllers--7b5646b546--vb6xx-eth0", GenerateName:"calico-kube-controllers-7b5646b546-", Namespace:"calico-system", SelfLink:"", UID:"69311b7a-6d88-4316-8584-5f01cf4ac2b4", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 0, 57, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b5646b546", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-181", ContainerID:"b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb", Pod:"calico-kube-controllers-7b5646b546-vb6xx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.62.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid115d677af6", MAC:"62:90:b6:d5:94:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 00:58:24.107393 containerd[2011]: 2026-03-06 00:58:24.084 [INFO][5367] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" Namespace="calico-system" Pod="calico-kube-controllers-7b5646b546-vb6xx" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--kube--controllers--7b5646b546--vb6xx-eth0"
Mar 6 00:58:24.217193 systemd-networkd[1849]: cali5a87d1c8914: Link UP
Mar 6 00:58:24.225715 systemd-networkd[1849]: cali5a87d1c8914: Gained carrier
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:23.376 [INFO][5384] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--181-k8s-csi--node--driver--qg2pp-eth0 csi-node-driver- calico-system 608e60b2-3521-42e2-85c9-1a14ee77e2b1 729 0 2026-03-06 00:57:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-24-181 csi-node-driver-qg2pp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5a87d1c8914 [] [] }} ContainerID="6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" Namespace="calico-system" Pod="csi-node-driver-qg2pp" WorkloadEndpoint="ip--172--31--24--181-k8s-csi--node--driver--qg2pp-"
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:23.377 [INFO][5384] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" Namespace="calico-system" Pod="csi-node-driver-qg2pp" WorkloadEndpoint="ip--172--31--24--181-k8s-csi--node--driver--qg2pp-eth0"
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:23.698 [INFO][5428] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" HandleID="k8s-pod-network.6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" Workload="ip--172--31--24--181-k8s-csi--node--driver--qg2pp-eth0"
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:23.787 [INFO][5428] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" HandleID="k8s-pod-network.6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" Workload="ip--172--31--24--181-k8s-csi--node--driver--qg2pp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000373b30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-181", "pod":"csi-node-driver-qg2pp", "timestamp":"2026-03-06 00:58:23.698616786 +0000 UTC"}, Hostname:"ip-172-31-24-181", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002291e0)}
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:23.787 [INFO][5428] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:23.925 [INFO][5428] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:23.925 [INFO][5428] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-181'
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:23.937 [INFO][5428] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" host="ip-172-31-24-181"
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:23.965 [INFO][5428] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-181"
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:24.004 [INFO][5428] ipam/ipam.go 526: Trying affinity for 192.168.62.192/26 host="ip-172-31-24-181"
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:24.035 [INFO][5428] ipam/ipam.go 160: Attempting to load block cidr=192.168.62.192/26 host="ip-172-31-24-181"
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:24.055 [INFO][5428] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ip-172-31-24-181"
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:24.055 [INFO][5428] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" host="ip-172-31-24-181"
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:24.069 [INFO][5428] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:24.095 [INFO][5428] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" host="ip-172-31-24-181"
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:24.127 [INFO][5428] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.62.198/26] block=192.168.62.192/26 handle="k8s-pod-network.6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" host="ip-172-31-24-181"
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:24.129 [INFO][5428] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.62.198/26] handle="k8s-pod-network.6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" host="ip-172-31-24-181"
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:24.129 [INFO][5428] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 6 00:58:24.302945 containerd[2011]: 2026-03-06 00:58:24.129 [INFO][5428] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.62.198/26] IPv6=[] ContainerID="6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" HandleID="k8s-pod-network.6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" Workload="ip--172--31--24--181-k8s-csi--node--driver--qg2pp-eth0"
Mar 6 00:58:24.306537 containerd[2011]: 2026-03-06 00:58:24.156 [INFO][5384] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" Namespace="calico-system" Pod="csi-node-driver-qg2pp" WorkloadEndpoint="ip--172--31--24--181-k8s-csi--node--driver--qg2pp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--181-k8s-csi--node--driver--qg2pp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"608e60b2-3521-42e2-85c9-1a14ee77e2b1", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 0, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-181", ContainerID:"", Pod:"csi-node-driver-qg2pp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5a87d1c8914", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 00:58:24.306537 containerd[2011]: 2026-03-06 00:58:24.159 [INFO][5384] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.198/32] ContainerID="6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" Namespace="calico-system" Pod="csi-node-driver-qg2pp" WorkloadEndpoint="ip--172--31--24--181-k8s-csi--node--driver--qg2pp-eth0"
Mar 6 00:58:24.306537 containerd[2011]: 2026-03-06 00:58:24.159 [INFO][5384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a87d1c8914 ContainerID="6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" Namespace="calico-system" Pod="csi-node-driver-qg2pp" WorkloadEndpoint="ip--172--31--24--181-k8s-csi--node--driver--qg2pp-eth0"
Mar 6 00:58:24.306537 containerd[2011]: 2026-03-06 00:58:24.230 [INFO][5384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" Namespace="calico-system" Pod="csi-node-driver-qg2pp" WorkloadEndpoint="ip--172--31--24--181-k8s-csi--node--driver--qg2pp-eth0"
Mar 6 00:58:24.306537 containerd[2011]: 2026-03-06 00:58:24.236 [INFO][5384] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" Namespace="calico-system" Pod="csi-node-driver-qg2pp" WorkloadEndpoint="ip--172--31--24--181-k8s-csi--node--driver--qg2pp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--181-k8s-csi--node--driver--qg2pp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"608e60b2-3521-42e2-85c9-1a14ee77e2b1", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 0, 57, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-181", ContainerID:"6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932", Pod:"csi-node-driver-qg2pp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.62.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5a87d1c8914", MAC:"1e:32:88:7f:7f:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 00:58:24.306537 containerd[2011]: 2026-03-06 00:58:24.297 [INFO][5384] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" Namespace="calico-system" Pod="csi-node-driver-qg2pp" WorkloadEndpoint="ip--172--31--24--181-k8s-csi--node--driver--qg2pp-eth0"
Mar 6 00:58:24.308549 containerd[2011]: time="2026-03-06T00:58:24.308355401Z" level=info msg="connecting to shim b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb" address="unix:///run/containerd/s/fe6aa8fc26e0f22fb6c0f175293c7c890b80a03140cb81b50b3bde9964306bd2" namespace=k8s.io protocol=ttrpc version=3
Mar 6 00:58:24.441665 systemd-networkd[1849]: calidfb80224adb: Gained IPv6LL
Mar 6 00:58:24.510185 systemd[1]: Started cri-containerd-b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb.scope - libcontainer container b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb.
Mar 6 00:58:24.520086 containerd[2011]: time="2026-03-06T00:58:24.519716262Z" level=info msg="connecting to shim 6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932" address="unix:///run/containerd/s/5b072dbcb31dbae05b29147c10568a1eb54c3b98151aeaef1df62482eda3ccb3" namespace=k8s.io protocol=ttrpc version=3
Mar 6 00:58:24.700765 systemd[1]: Started cri-containerd-6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932.scope - libcontainer container 6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932.
Mar 6 00:58:24.835200 systemd-networkd[1849]: cali73505ef3c1b: Link UP
Mar 6 00:58:24.837747 systemd-networkd[1849]: cali73505ef3c1b: Gained carrier
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.239 [INFO][5448] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--gvch9-eth0 calico-apiserver-5cdfc896d5- calico-system 2b510939-855d-4b9d-85ce-7833d3ee5cac 875 0 2026-03-06 00:57:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cdfc896d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-181 calico-apiserver-5cdfc896d5-gvch9 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali73505ef3c1b [] [] }} ContainerID="a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" Namespace="calico-system" Pod="calico-apiserver-5cdfc896d5-gvch9" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--gvch9-"
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.240 [INFO][5448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" Namespace="calico-system" Pod="calico-apiserver-5cdfc896d5-gvch9" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--gvch9-eth0"
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.549 [INFO][5492] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" HandleID="k8s-pod-network.a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" Workload="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--gvch9-eth0"
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.618 [INFO][5492] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" HandleID="k8s-pod-network.a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" Workload="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--gvch9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004db70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-181", "pod":"calico-apiserver-5cdfc896d5-gvch9", "timestamp":"2026-03-06 00:58:24.549020742 +0000 UTC"}, Hostname:"ip-172-31-24-181", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002c6c60)}
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.618 [INFO][5492] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.621 [INFO][5492] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.621 [INFO][5492] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-181'
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.633 [INFO][5492] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" host="ip-172-31-24-181"
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.676 [INFO][5492] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-181"
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.711 [INFO][5492] ipam/ipam.go 526: Trying affinity for 192.168.62.192/26 host="ip-172-31-24-181"
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.728 [INFO][5492] ipam/ipam.go 160: Attempting to load block cidr=192.168.62.192/26 host="ip-172-31-24-181"
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.739 [INFO][5492] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ip-172-31-24-181"
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.743 [INFO][5492] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" host="ip-172-31-24-181"
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.754 [INFO][5492] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.772 [INFO][5492] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.62.192/26 handle="k8s-pod-network.a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" host="ip-172-31-24-181"
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.801 [INFO][5492] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.62.199/26] block=192.168.62.192/26 handle="k8s-pod-network.a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" host="ip-172-31-24-181"
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.802 [INFO][5492] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.62.199/26] handle="k8s-pod-network.a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" host="ip-172-31-24-181"
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.803 [INFO][5492] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 6 00:58:24.905493 containerd[2011]: 2026-03-06 00:58:24.803 [INFO][5492] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.62.199/26] IPv6=[] ContainerID="a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" HandleID="k8s-pod-network.a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" Workload="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--gvch9-eth0"
Mar 6 00:58:24.907208 containerd[2011]: 2026-03-06 00:58:24.815 [INFO][5448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" Namespace="calico-system" Pod="calico-apiserver-5cdfc896d5-gvch9" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--gvch9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--gvch9-eth0", GenerateName:"calico-apiserver-5cdfc896d5-", Namespace:"calico-system", SelfLink:"", UID:"2b510939-855d-4b9d-85ce-7833d3ee5cac", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 0, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cdfc896d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-181", ContainerID:"", Pod:"calico-apiserver-5cdfc896d5-gvch9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali73505ef3c1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 00:58:24.907208 containerd[2011]: 2026-03-06 00:58:24.817 [INFO][5448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.199/32] ContainerID="a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" Namespace="calico-system" Pod="calico-apiserver-5cdfc896d5-gvch9" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--gvch9-eth0"
Mar 6 00:58:24.907208 containerd[2011]: 2026-03-06 00:58:24.819 [INFO][5448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73505ef3c1b ContainerID="a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" Namespace="calico-system" Pod="calico-apiserver-5cdfc896d5-gvch9" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--gvch9-eth0"
Mar 6 00:58:24.907208 containerd[2011]: 2026-03-06 00:58:24.843 [INFO][5448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" Namespace="calico-system" Pod="calico-apiserver-5cdfc896d5-gvch9" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--gvch9-eth0"
Mar 6 00:58:24.907208 containerd[2011]: 2026-03-06 00:58:24.844 [INFO][5448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" Namespace="calico-system" Pod="calico-apiserver-5cdfc896d5-gvch9" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--gvch9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--gvch9-eth0", GenerateName:"calico-apiserver-5cdfc896d5-", Namespace:"calico-system", SelfLink:"", UID:"2b510939-855d-4b9d-85ce-7833d3ee5cac", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 0, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cdfc896d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-181", ContainerID:"a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7", Pod:"calico-apiserver-5cdfc896d5-gvch9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.62.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali73505ef3c1b", MAC:"26:0d:ef:95:06:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 6 00:58:24.907208 containerd[2011]: 2026-03-06 00:58:24.888 [INFO][5448] cni-plugin/k8s.go 532: Wrote
updated endpoint to datastore ContainerID="a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" Namespace="calico-system" Pod="calico-apiserver-5cdfc896d5-gvch9" WorkloadEndpoint="ip--172--31--24--181-k8s-calico--apiserver--5cdfc896d5--gvch9-eth0" Mar 6 00:58:25.145491 containerd[2011]: time="2026-03-06T00:58:25.145314125Z" level=info msg="connecting to shim a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7" address="unix:///run/containerd/s/fe435b8aee3992b5e4e20e22b88e9ff45aca630238aa2707a857804cc9d2a8d3" namespace=k8s.io protocol=ttrpc version=3 Mar 6 00:58:25.209878 systemd-networkd[1849]: calid115d677af6: Gained IPv6LL Mar 6 00:58:25.246390 systemd-networkd[1849]: calicc955892643: Link UP Mar 6 00:58:25.250819 systemd-networkd[1849]: calicc955892643: Gained carrier Mar 6 00:58:25.255650 containerd[2011]: time="2026-03-06T00:58:25.254079294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b5646b546-vb6xx,Uid:69311b7a-6d88-4316-8584-5f01cf4ac2b4,Namespace:calico-system,Attempt:0,} returns sandbox id \"b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb\"" Mar 6 00:58:25.292999 containerd[2011]: time="2026-03-06T00:58:25.292921650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qg2pp,Uid:608e60b2-3521-42e2-85c9-1a14ee77e2b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932\"" Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:24.537 [INFO][5451] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--181-k8s-coredns--674b8bbfcf--7d46d-eth0 coredns-674b8bbfcf- kube-system bce0721f-75f9-4109-b6db-0d0ca49740fe 865 0 2026-03-06 00:57:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] 
[]} {k8s ip-172-31-24-181 coredns-674b8bbfcf-7d46d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicc955892643 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" Namespace="kube-system" Pod="coredns-674b8bbfcf-7d46d" WorkloadEndpoint="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7d46d-" Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:24.545 [INFO][5451] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" Namespace="kube-system" Pod="coredns-674b8bbfcf-7d46d" WorkloadEndpoint="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7d46d-eth0" Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:24.896 [INFO][5571] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" HandleID="k8s-pod-network.848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" Workload="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7d46d-eth0" Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:24.958 [INFO][5571] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" HandleID="k8s-pod-network.848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" Workload="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7d46d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000345e80), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-181", "pod":"coredns-674b8bbfcf-7d46d", "timestamp":"2026-03-06 00:58:24.896279252 +0000 UTC"}, Hostname:"ip-172-31-24-181", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0x40001e6160)} Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:24.961 [INFO][5571] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:24.962 [INFO][5571] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:24.965 [INFO][5571] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-181' Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:24.985 [INFO][5571] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" host="ip-172-31-24-181" Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:25.023 [INFO][5571] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-181" Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:25.048 [INFO][5571] ipam/ipam.go 526: Trying affinity for 192.168.62.192/26 host="ip-172-31-24-181" Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:25.066 [INFO][5571] ipam/ipam.go 160: Attempting to load block cidr=192.168.62.192/26 host="ip-172-31-24-181" Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:25.084 [INFO][5571] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.62.192/26 host="ip-172-31-24-181" Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:25.084 [INFO][5571] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.62.192/26 handle="k8s-pod-network.848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" host="ip-172-31-24-181" Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:25.094 [INFO][5571] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:25.118 [INFO][5571] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.62.192/26 handle="k8s-pod-network.848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" host="ip-172-31-24-181" Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:25.163 [INFO][5571] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.62.200/26] block=192.168.62.192/26 handle="k8s-pod-network.848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" host="ip-172-31-24-181" Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:25.170 [INFO][5571] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.62.200/26] handle="k8s-pod-network.848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" host="ip-172-31-24-181" Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:25.175 [INFO][5571] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 6 00:58:25.339724 containerd[2011]: 2026-03-06 00:58:25.179 [INFO][5571] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.62.200/26] IPv6=[] ContainerID="848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" HandleID="k8s-pod-network.848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" Workload="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7d46d-eth0" Mar 6 00:58:25.340923 containerd[2011]: 2026-03-06 00:58:25.217 [INFO][5451] cni-plugin/k8s.go 418: Populated endpoint ContainerID="848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" Namespace="kube-system" Pod="coredns-674b8bbfcf-7d46d" WorkloadEndpoint="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7d46d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--181-k8s-coredns--674b8bbfcf--7d46d-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bce0721f-75f9-4109-b6db-0d0ca49740fe", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 0, 57, 23, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-181", ContainerID:"", Pod:"coredns-674b8bbfcf-7d46d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc955892643", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 00:58:25.340923 containerd[2011]: 2026-03-06 00:58:25.221 [INFO][5451] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.62.200/32] ContainerID="848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" Namespace="kube-system" Pod="coredns-674b8bbfcf-7d46d" WorkloadEndpoint="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7d46d-eth0" Mar 6 00:58:25.340923 containerd[2011]: 2026-03-06 00:58:25.222 [INFO][5451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc955892643 ContainerID="848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" Namespace="kube-system" Pod="coredns-674b8bbfcf-7d46d" 
WorkloadEndpoint="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7d46d-eth0" Mar 6 00:58:25.340923 containerd[2011]: 2026-03-06 00:58:25.259 [INFO][5451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" Namespace="kube-system" Pod="coredns-674b8bbfcf-7d46d" WorkloadEndpoint="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7d46d-eth0" Mar 6 00:58:25.340923 containerd[2011]: 2026-03-06 00:58:25.271 [INFO][5451] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" Namespace="kube-system" Pod="coredns-674b8bbfcf-7d46d" WorkloadEndpoint="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7d46d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--181-k8s-coredns--674b8bbfcf--7d46d-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bce0721f-75f9-4109-b6db-0d0ca49740fe", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.March, 6, 0, 57, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-181", ContainerID:"848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f", Pod:"coredns-674b8bbfcf-7d46d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.62.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc955892643", MAC:"0e:15:c9:95:cb:97", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 6 00:58:25.340923 containerd[2011]: 2026-03-06 00:58:25.304 [INFO][5451] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" Namespace="kube-system" Pod="coredns-674b8bbfcf-7d46d" WorkloadEndpoint="ip--172--31--24--181-k8s-coredns--674b8bbfcf--7d46d-eth0" Mar 6 00:58:25.370876 systemd[1]: Started cri-containerd-a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7.scope - libcontainer container a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7. Mar 6 00:58:25.496227 containerd[2011]: time="2026-03-06T00:58:25.496153891Z" level=info msg="connecting to shim 848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f" address="unix:///run/containerd/s/9cf343f4e839192d60c469181de3ba19bfd9b0efa94f86db1cd75b6110e2949a" namespace=k8s.io protocol=ttrpc version=3 Mar 6 00:58:25.718603 systemd[1]: Started cri-containerd-848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f.scope - libcontainer container 848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f. 
Mar 6 00:58:25.746545 containerd[2011]: time="2026-03-06T00:58:25.746341520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cdfc896d5-gvch9,Uid:2b510939-855d-4b9d-85ce-7833d3ee5cac,Namespace:calico-system,Attempt:0,} returns sandbox id \"a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7\"" Mar 6 00:58:25.923170 containerd[2011]: time="2026-03-06T00:58:25.923024817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7d46d,Uid:bce0721f-75f9-4109-b6db-0d0ca49740fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f\"" Mar 6 00:58:25.939715 containerd[2011]: time="2026-03-06T00:58:25.939560061Z" level=info msg="CreateContainer within sandbox \"848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 6 00:58:25.981266 containerd[2011]: time="2026-03-06T00:58:25.978434326Z" level=info msg="Container 155f7c668a4a2ff0c42d8472132f2b0176787b2944465793eb8caf7e08589ff2: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:58:25.985010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1740913117.mount: Deactivated successfully. 
Mar 6 00:58:26.010346 containerd[2011]: time="2026-03-06T00:58:26.010073166Z" level=info msg="CreateContainer within sandbox \"848ace5c7f51769cdc8780367bf55304f380f800526255e4db946b749d26510f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"155f7c668a4a2ff0c42d8472132f2b0176787b2944465793eb8caf7e08589ff2\"" Mar 6 00:58:26.015560 containerd[2011]: time="2026-03-06T00:58:26.014991174Z" level=info msg="StartContainer for \"155f7c668a4a2ff0c42d8472132f2b0176787b2944465793eb8caf7e08589ff2\"" Mar 6 00:58:26.022839 containerd[2011]: time="2026-03-06T00:58:26.022632810Z" level=info msg="connecting to shim 155f7c668a4a2ff0c42d8472132f2b0176787b2944465793eb8caf7e08589ff2" address="unix:///run/containerd/s/9cf343f4e839192d60c469181de3ba19bfd9b0efa94f86db1cd75b6110e2949a" protocol=ttrpc version=3 Mar 6 00:58:26.104313 systemd[1]: Started cri-containerd-155f7c668a4a2ff0c42d8472132f2b0176787b2944465793eb8caf7e08589ff2.scope - libcontainer container 155f7c668a4a2ff0c42d8472132f2b0176787b2944465793eb8caf7e08589ff2. 
Mar 6 00:58:26.215621 containerd[2011]: time="2026-03-06T00:58:26.215558407Z" level=info msg="StartContainer for \"155f7c668a4a2ff0c42d8472132f2b0176787b2944465793eb8caf7e08589ff2\" returns successfully" Mar 6 00:58:26.232991 systemd-networkd[1849]: cali5a87d1c8914: Gained IPv6LL Mar 6 00:58:26.258667 containerd[2011]: time="2026-03-06T00:58:26.258492403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:26.265196 containerd[2011]: time="2026-03-06T00:58:26.264998911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 6 00:58:26.274435 containerd[2011]: time="2026-03-06T00:58:26.274347955Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:26.295754 containerd[2011]: time="2026-03-06T00:58:26.295297615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:26.300966 containerd[2011]: time="2026-03-06T00:58:26.300900091Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 5.827309673s" Mar 6 00:58:26.301270 containerd[2011]: time="2026-03-06T00:58:26.301152823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 6 00:58:26.303867 containerd[2011]: 
time="2026-03-06T00:58:26.303803851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 6 00:58:26.313486 containerd[2011]: time="2026-03-06T00:58:26.312099427Z" level=info msg="CreateContainer within sandbox \"6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 6 00:58:26.339568 containerd[2011]: time="2026-03-06T00:58:26.339286135Z" level=info msg="Container 6d3ec03fabc1e9752acd738ec27697b13184db1ecf0dcab46f73bd9c51a27990: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:58:26.366737 containerd[2011]: time="2026-03-06T00:58:26.366648992Z" level=info msg="CreateContainer within sandbox \"6afafefa0fea4d3c64ba52396e3efe802f7a39c18f4d235b6a4b6e117d9cddc4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6d3ec03fabc1e9752acd738ec27697b13184db1ecf0dcab46f73bd9c51a27990\"" Mar 6 00:58:26.368781 containerd[2011]: time="2026-03-06T00:58:26.368678132Z" level=info msg="StartContainer for \"6d3ec03fabc1e9752acd738ec27697b13184db1ecf0dcab46f73bd9c51a27990\"" Mar 6 00:58:26.372661 containerd[2011]: time="2026-03-06T00:58:26.372585836Z" level=info msg="connecting to shim 6d3ec03fabc1e9752acd738ec27697b13184db1ecf0dcab46f73bd9c51a27990" address="unix:///run/containerd/s/0c595d0d3449965c7f668ca7d69931f3c78bd8314af93df0f21866ff3100efef" protocol=ttrpc version=3 Mar 6 00:58:26.416815 systemd[1]: Started cri-containerd-6d3ec03fabc1e9752acd738ec27697b13184db1ecf0dcab46f73bd9c51a27990.scope - libcontainer container 6d3ec03fabc1e9752acd738ec27697b13184db1ecf0dcab46f73bd9c51a27990. 
Mar 6 00:58:26.538724 containerd[2011]: time="2026-03-06T00:58:26.538527344Z" level=info msg="StartContainer for \"6d3ec03fabc1e9752acd738ec27697b13184db1ecf0dcab46f73bd9c51a27990\" returns successfully" Mar 6 00:58:26.691827 kubelet[3346]: I0306 00:58:26.691711 3346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5cdfc896d5-kqp6p" podStartSLOduration=35.861598492 podStartE2EDuration="41.691688541s" podCreationTimestamp="2026-03-06 00:57:45 +0000 UTC" firstStartedPulling="2026-03-06 00:58:20.472967174 +0000 UTC m=+61.833288728" lastFinishedPulling="2026-03-06 00:58:26.303057211 +0000 UTC m=+67.663378777" observedRunningTime="2026-03-06 00:58:26.659723013 +0000 UTC m=+68.020044603" watchObservedRunningTime="2026-03-06 00:58:26.691688541 +0000 UTC m=+68.052010095" Mar 6 00:58:26.873724 systemd-networkd[1849]: cali73505ef3c1b: Gained IPv6LL Mar 6 00:58:26.937243 systemd-networkd[1849]: calicc955892643: Gained IPv6LL Mar 6 00:58:26.974927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2441918192.mount: Deactivated successfully. Mar 6 00:58:27.126856 systemd[1]: Started sshd@9-172.31.24.181:22-68.220.241.50:35154.service - OpenSSH per-connection server daemon (68.220.241.50:35154). Mar 6 00:58:27.630252 sshd[5827]: Accepted publickey for core from 68.220.241.50 port 35154 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:58:27.636897 sshd-session[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:58:27.662055 kubelet[3346]: I0306 00:58:27.661327 3346 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 00:58:27.667571 systemd-logind[1977]: New session 10 of user core. Mar 6 00:58:27.671806 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 6 00:58:28.240160 sshd[5830]: Connection closed by 68.220.241.50 port 35154 Mar 6 00:58:28.240748 sshd-session[5827]: pam_unix(sshd:session): session closed for user core Mar 6 00:58:28.251879 systemd-logind[1977]: Session 10 logged out. Waiting for processes to exit. Mar 6 00:58:28.251996 systemd[1]: sshd@9-172.31.24.181:22-68.220.241.50:35154.service: Deactivated successfully. Mar 6 00:58:28.259751 systemd[1]: session-10.scope: Deactivated successfully. Mar 6 00:58:28.265732 systemd-logind[1977]: Removed session 10. Mar 6 00:58:28.670209 kubelet[3346]: I0306 00:58:28.670015 3346 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 6 00:58:29.014066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2815382017.mount: Deactivated successfully. Mar 6 00:58:29.283877 ntpd[2213]: Listen normally on 9 cali2087e620c5e [fe80::ecee:eeff:feee:eeee%8]:123 Mar 6 00:58:29.285363 ntpd[2213]: Listen normally on 10 calieef0e81c6a4 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 6 00:58:29.285415 ntpd[2213]: Listen normally on 11 
calidfb80224adb [fe80::ecee:eeff:feee:eeee%10]:123 Mar 6 00:58:29.285496 ntpd[2213]: Listen normally on 12 calid115d677af6 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 6 00:58:29.285552 ntpd[2213]: Listen normally on 13 cali5a87d1c8914 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 6 00:58:29.285615 ntpd[2213]: Listen normally on 14 cali73505ef3c1b [fe80::ecee:eeff:feee:eeee%13]:123 Mar 6 00:58:29.285663 ntpd[2213]: Listen normally on 15 calicc955892643 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 6 00:58:30.258495 containerd[2011]: time="2026-03-06T00:58:30.258208859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:30.264271 containerd[2011]: time="2026-03-06T00:58:30.263619647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 6 00:58:30.270959 containerd[2011]: time="2026-03-06T00:58:30.270220235Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:30.279249 containerd[2011]: time="2026-03-06T00:58:30.279134495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 6 00:58:30.282902 containerd[2011]: time="2026-03-06T00:58:30.282691331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.978822044s" Mar 6 00:58:30.282902 containerd[2011]: time="2026-03-06T00:58:30.282760715Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 6 00:58:30.287047 containerd[2011]: time="2026-03-06T00:58:30.286561511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 6 00:58:30.308718 containerd[2011]: time="2026-03-06T00:58:30.307875899Z" level=info msg="CreateContainer within sandbox \"ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 6 00:58:30.347157 containerd[2011]: time="2026-03-06T00:58:30.347000783Z" level=info msg="Container 65521402ad017299a6252c12c9e582a8d5c8ff4e3c3c10ae766b31094c6a0c5c: CDI devices from CRI Config.CDIDevices: []" Mar 6 00:58:30.394851 containerd[2011]: time="2026-03-06T00:58:30.394635276Z" level=info msg="CreateContainer within sandbox \"ef44cc78ae166b27a96758cecb5eb6a7d3c447e7e2ee01d950032249f4078f18\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"65521402ad017299a6252c12c9e582a8d5c8ff4e3c3c10ae766b31094c6a0c5c\"" Mar 6 00:58:30.398747 containerd[2011]: time="2026-03-06T00:58:30.398629740Z" level=info msg="StartContainer for \"65521402ad017299a6252c12c9e582a8d5c8ff4e3c3c10ae766b31094c6a0c5c\"" Mar 6 00:58:30.406894 containerd[2011]: time="2026-03-06T00:58:30.406788588Z" level=info msg="connecting to shim 65521402ad017299a6252c12c9e582a8d5c8ff4e3c3c10ae766b31094c6a0c5c" address="unix:///run/containerd/s/d1bf536d07d074ea12bdb040f0a975cee767cfe991e29190e1b1a766ad50be9e" protocol=ttrpc version=3 Mar 6 00:58:30.478759 systemd[1]: Started cri-containerd-65521402ad017299a6252c12c9e582a8d5c8ff4e3c3c10ae766b31094c6a0c5c.scope - libcontainer container 65521402ad017299a6252c12c9e582a8d5c8ff4e3c3c10ae766b31094c6a0c5c. 
Mar 6 00:58:30.617959 containerd[2011]: time="2026-03-06T00:58:30.617782873Z" level=info msg="StartContainer for \"65521402ad017299a6252c12c9e582a8d5c8ff4e3c3c10ae766b31094c6a0c5c\" returns successfully" Mar 6 00:58:30.737085 kubelet[3346]: I0306 00:58:30.736917 3346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-qtd2z" podStartSLOduration=37.005871722 podStartE2EDuration="45.736873081s" podCreationTimestamp="2026-03-06 00:57:45 +0000 UTC" firstStartedPulling="2026-03-06 00:58:21.5537098 +0000 UTC m=+62.914031354" lastFinishedPulling="2026-03-06 00:58:30.284711147 +0000 UTC m=+71.645032713" observedRunningTime="2026-03-06 00:58:30.730545121 +0000 UTC m=+72.090866711" watchObservedRunningTime="2026-03-06 00:58:30.736873081 +0000 UTC m=+72.097194671" Mar 6 00:58:30.740527 kubelet[3346]: I0306 00:58:30.739746 3346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7d46d" podStartSLOduration=67.739714597 podStartE2EDuration="1m7.739714597s" podCreationTimestamp="2026-03-06 00:57:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-06 00:58:26.693752313 +0000 UTC m=+68.054073915" watchObservedRunningTime="2026-03-06 00:58:30.739714597 +0000 UTC m=+72.100036163" Mar 6 00:58:33.340011 systemd[1]: Started sshd@10-172.31.24.181:22-68.220.241.50:51510.service - OpenSSH per-connection server daemon (68.220.241.50:51510). Mar 6 00:58:33.823605 sshd[5921]: Accepted publickey for core from 68.220.241.50 port 51510 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:58:33.827798 sshd-session[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:58:33.848514 systemd-logind[1977]: New session 11 of user core. Mar 6 00:58:33.856871 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 6 00:58:34.453572 sshd[5928]: Connection closed by 68.220.241.50 port 51510 Mar 6 00:58:34.454630 sshd-session[5921]: pam_unix(sshd:session): session closed for user core Mar 6 00:58:34.472198 systemd[1]: sshd@10-172.31.24.181:22-68.220.241.50:51510.service: Deactivated successfully. Mar 6 00:58:34.482734 systemd[1]: session-11.scope: Deactivated successfully. Mar 6 00:58:34.489400 systemd-logind[1977]: Session 11 logged out. Waiting for processes to exit. Mar 6 00:58:34.494545 systemd-logind[1977]: Removed session 11. Mar 6 00:58:34.559324 systemd[1]: Started sshd@11-172.31.24.181:22-68.220.241.50:51524.service - OpenSSH per-connection server daemon (68.220.241.50:51524). Mar 6 00:58:35.096488 sshd[5951]: Accepted publickey for core from 68.220.241.50 port 51524 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM Mar 6 00:58:35.103222 sshd-session[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 6 00:58:35.117304 systemd-logind[1977]: New session 12 of user core. Mar 6 00:58:35.127906 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 6 00:58:35.938413 sshd[5961]: Connection closed by 68.220.241.50 port 51524 Mar 6 00:58:35.935799 sshd-session[5951]: pam_unix(sshd:session): session closed for user core Mar 6 00:58:35.951025 systemd-logind[1977]: Session 12 logged out. Waiting for processes to exit. Mar 6 00:58:35.952433 systemd[1]: sshd@11-172.31.24.181:22-68.220.241.50:51524.service: Deactivated successfully. Mar 6 00:58:35.966977 systemd[1]: session-12.scope: Deactivated successfully. Mar 6 00:58:35.978476 systemd-logind[1977]: Removed session 12. Mar 6 00:58:36.041432 systemd[1]: Started sshd@12-172.31.24.181:22-68.220.241.50:51536.service - OpenSSH per-connection server daemon (68.220.241.50:51536). 
Mar 6 00:58:36.328440 containerd[2011]: time="2026-03-06T00:58:36.328350593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:58:36.329648 containerd[2011]: time="2026-03-06T00:58:36.329578025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955"
Mar 6 00:58:36.334627 containerd[2011]: time="2026-03-06T00:58:36.334561589Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:58:36.338785 containerd[2011]: time="2026-03-06T00:58:36.338645585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:58:36.341436 containerd[2011]: time="2026-03-06T00:58:36.341210333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 6.05457465s"
Mar 6 00:58:36.341436 containerd[2011]: time="2026-03-06T00:58:36.341276669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\""
Mar 6 00:58:36.344772 containerd[2011]: time="2026-03-06T00:58:36.344408873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Mar 6 00:58:36.397564 containerd[2011]: time="2026-03-06T00:58:36.397426805Z" level=info msg="CreateContainer within sandbox \"b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 6 00:58:36.417551 containerd[2011]: time="2026-03-06T00:58:36.415871057Z" level=info msg="Container 389398a68c2afead4f63b3254d1256c95d2faf6dca81bd7887399fd67c735e39: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:58:36.441668 containerd[2011]: time="2026-03-06T00:58:36.441517614Z" level=info msg="CreateContainer within sandbox \"b94e1383afa8a93e98960577bc8b047e9e845af1ecb7d5f3de951397106752cb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"389398a68c2afead4f63b3254d1256c95d2faf6dca81bd7887399fd67c735e39\""
Mar 6 00:58:36.442790 containerd[2011]: time="2026-03-06T00:58:36.442716450Z" level=info msg="StartContainer for \"389398a68c2afead4f63b3254d1256c95d2faf6dca81bd7887399fd67c735e39\""
Mar 6 00:58:36.447898 containerd[2011]: time="2026-03-06T00:58:36.447826854Z" level=info msg="connecting to shim 389398a68c2afead4f63b3254d1256c95d2faf6dca81bd7887399fd67c735e39" address="unix:///run/containerd/s/fe6aa8fc26e0f22fb6c0f175293c7c890b80a03140cb81b50b3bde9964306bd2" protocol=ttrpc version=3
Mar 6 00:58:36.564834 systemd[1]: Started cri-containerd-389398a68c2afead4f63b3254d1256c95d2faf6dca81bd7887399fd67c735e39.scope - libcontainer container 389398a68c2afead4f63b3254d1256c95d2faf6dca81bd7887399fd67c735e39.
Mar 6 00:58:36.585223 sshd[5983]: Accepted publickey for core from 68.220.241.50 port 51536 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:36.591355 sshd-session[5983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:36.613610 systemd-logind[1977]: New session 13 of user core.
Mar 6 00:58:36.619787 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 6 00:58:36.732912 containerd[2011]: time="2026-03-06T00:58:36.732401143Z" level=info msg="StartContainer for \"389398a68c2afead4f63b3254d1256c95d2faf6dca81bd7887399fd67c735e39\" returns successfully"
Mar 6 00:58:37.085656 sshd[6010]: Connection closed by 68.220.241.50 port 51536
Mar 6 00:58:37.086998 sshd-session[5983]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:37.096437 systemd-logind[1977]: Session 13 logged out. Waiting for processes to exit.
Mar 6 00:58:37.098393 systemd[1]: sshd@12-172.31.24.181:22-68.220.241.50:51536.service: Deactivated successfully.
Mar 6 00:58:37.114683 systemd[1]: session-13.scope: Deactivated successfully.
Mar 6 00:58:37.126322 systemd-logind[1977]: Removed session 13.
Mar 6 00:58:38.124486 containerd[2011]: time="2026-03-06T00:58:38.122862318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:58:38.124486 containerd[2011]: time="2026-03-06T00:58:38.124384842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497"
Mar 6 00:58:38.125731 containerd[2011]: time="2026-03-06T00:58:38.125671134Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:58:38.135649 containerd[2011]: time="2026-03-06T00:58:38.135592422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:58:38.137387 containerd[2011]: time="2026-03-06T00:58:38.137279142Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.792062801s"
Mar 6 00:58:38.137387 containerd[2011]: time="2026-03-06T00:58:38.137348238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\""
Mar 6 00:58:38.141594 containerd[2011]: time="2026-03-06T00:58:38.141518298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 6 00:58:38.149187 containerd[2011]: time="2026-03-06T00:58:38.149111022Z" level=info msg="CreateContainer within sandbox \"6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 6 00:58:38.178860 containerd[2011]: time="2026-03-06T00:58:38.178764894Z" level=info msg="Container 8ecf976b473e84ad21789e8b3b5785ce09283afbdd338029552feff5e0048913: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:58:38.196061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3476481081.mount: Deactivated successfully.
Mar 6 00:58:38.212482 containerd[2011]: time="2026-03-06T00:58:38.212397882Z" level=info msg="CreateContainer within sandbox \"6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8ecf976b473e84ad21789e8b3b5785ce09283afbdd338029552feff5e0048913\""
Mar 6 00:58:38.214639 containerd[2011]: time="2026-03-06T00:58:38.214005990Z" level=info msg="StartContainer for \"8ecf976b473e84ad21789e8b3b5785ce09283afbdd338029552feff5e0048913\""
Mar 6 00:58:38.222522 containerd[2011]: time="2026-03-06T00:58:38.222303774Z" level=info msg="connecting to shim 8ecf976b473e84ad21789e8b3b5785ce09283afbdd338029552feff5e0048913" address="unix:///run/containerd/s/5b072dbcb31dbae05b29147c10568a1eb54c3b98151aeaef1df62482eda3ccb3" protocol=ttrpc version=3
Mar 6 00:58:38.274864 systemd[1]: Started cri-containerd-8ecf976b473e84ad21789e8b3b5785ce09283afbdd338029552feff5e0048913.scope - libcontainer container 8ecf976b473e84ad21789e8b3b5785ce09283afbdd338029552feff5e0048913.
Mar 6 00:58:38.447666 containerd[2011]: time="2026-03-06T00:58:38.446806532Z" level=info msg="StartContainer for \"8ecf976b473e84ad21789e8b3b5785ce09283afbdd338029552feff5e0048913\" returns successfully"
Mar 6 00:58:38.501572 containerd[2011]: time="2026-03-06T00:58:38.501264164Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:58:38.502714 containerd[2011]: time="2026-03-06T00:58:38.502638608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 6 00:58:38.507396 containerd[2011]: time="2026-03-06T00:58:38.507206408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 365.612318ms"
Mar 6 00:58:38.507396 containerd[2011]: time="2026-03-06T00:58:38.507266288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\""
Mar 6 00:58:38.509517 containerd[2011]: time="2026-03-06T00:58:38.509121620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 6 00:58:38.518421 containerd[2011]: time="2026-03-06T00:58:38.518309084Z" level=info msg="CreateContainer within sandbox \"a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 6 00:58:38.539571 containerd[2011]: time="2026-03-06T00:58:38.536931848Z" level=info msg="Container 87c65bcec5c163f8acd2198ff56826eeff89a0f52b55ea65d83dbb3ecb2dc4d5: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:58:38.561148 containerd[2011]: time="2026-03-06T00:58:38.561052964Z" level=info msg="CreateContainer within sandbox \"a45e2e96a4103e7ff16183585fa8a13cc3c36808e768520493d569c14dd743d7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"87c65bcec5c163f8acd2198ff56826eeff89a0f52b55ea65d83dbb3ecb2dc4d5\""
Mar 6 00:58:38.563062 containerd[2011]: time="2026-03-06T00:58:38.562983704Z" level=info msg="StartContainer for \"87c65bcec5c163f8acd2198ff56826eeff89a0f52b55ea65d83dbb3ecb2dc4d5\""
Mar 6 00:58:38.569520 containerd[2011]: time="2026-03-06T00:58:38.569404484Z" level=info msg="connecting to shim 87c65bcec5c163f8acd2198ff56826eeff89a0f52b55ea65d83dbb3ecb2dc4d5" address="unix:///run/containerd/s/fe435b8aee3992b5e4e20e22b88e9ff45aca630238aa2707a857804cc9d2a8d3" protocol=ttrpc version=3
Mar 6 00:58:38.608877 systemd[1]: Started cri-containerd-87c65bcec5c163f8acd2198ff56826eeff89a0f52b55ea65d83dbb3ecb2dc4d5.scope - libcontainer container 87c65bcec5c163f8acd2198ff56826eeff89a0f52b55ea65d83dbb3ecb2dc4d5.
Mar 6 00:58:38.715946 containerd[2011]: time="2026-03-06T00:58:38.715592337Z" level=info msg="StartContainer for \"87c65bcec5c163f8acd2198ff56826eeff89a0f52b55ea65d83dbb3ecb2dc4d5\" returns successfully"
Mar 6 00:58:38.825783 kubelet[3346]: I0306 00:58:38.825672 3346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b5646b546-vb6xx" podStartSLOduration=39.748787374 podStartE2EDuration="50.825645489s" podCreationTimestamp="2026-03-06 00:57:48 +0000 UTC" firstStartedPulling="2026-03-06 00:58:25.267148602 +0000 UTC m=+66.627470156" lastFinishedPulling="2026-03-06 00:58:36.344006693 +0000 UTC m=+77.704328271" observedRunningTime="2026-03-06 00:58:37.75979172 +0000 UTC m=+79.120113298" watchObservedRunningTime="2026-03-06 00:58:38.825645489 +0000 UTC m=+80.185967055"
Mar 6 00:58:38.827349 kubelet[3346]: I0306 00:58:38.826023 3346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5cdfc896d5-gvch9" podStartSLOduration=41.075565494 podStartE2EDuration="53.825988977s" podCreationTimestamp="2026-03-06 00:57:45 +0000 UTC" firstStartedPulling="2026-03-06 00:58:25.758145069 +0000 UTC m=+67.118466635" lastFinishedPulling="2026-03-06 00:58:38.508568468 +0000 UTC m=+79.868890118" observedRunningTime="2026-03-06 00:58:38.825037953 +0000 UTC m=+80.185359543" watchObservedRunningTime="2026-03-06 00:58:38.825988977 +0000 UTC m=+80.186310543"
Mar 6 00:58:39.795433 kubelet[3346]: I0306 00:58:39.795350 3346 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 6 00:58:41.330267 containerd[2011]: time="2026-03-06T00:58:41.329915602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:58:41.334350 containerd[2011]: time="2026-03-06T00:58:41.333092194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291"
Mar 6 00:58:41.344844 containerd[2011]: time="2026-03-06T00:58:41.344763058Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:58:41.362490 containerd[2011]: time="2026-03-06T00:58:41.360867250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 6 00:58:41.365349 containerd[2011]: time="2026-03-06T00:58:41.365262430Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.856075902s"
Mar 6 00:58:41.365349 containerd[2011]: time="2026-03-06T00:58:41.365335078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\""
Mar 6 00:58:41.377227 kubelet[3346]: I0306 00:58:41.377148 3346 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 6 00:58:41.380232 containerd[2011]: time="2026-03-06T00:58:41.380154766Z" level=info msg="CreateContainer within sandbox \"6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 6 00:58:41.402814 containerd[2011]: time="2026-03-06T00:58:41.402734038Z" level=info msg="Container e2440ffe36ceff56eb2cb635d4ef7ef3873272d4f76cd7578f12284b335a6aba: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:58:41.430996 containerd[2011]: time="2026-03-06T00:58:41.430772890Z" level=info msg="CreateContainer within sandbox \"6d8b1a22924c1b83a19ae815f45c6075f06ab575aeb77b79863f706c8c9a0932\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e2440ffe36ceff56eb2cb635d4ef7ef3873272d4f76cd7578f12284b335a6aba\""
Mar 6 00:58:41.434485 containerd[2011]: time="2026-03-06T00:58:41.433912498Z" level=info msg="StartContainer for \"e2440ffe36ceff56eb2cb635d4ef7ef3873272d4f76cd7578f12284b335a6aba\""
Mar 6 00:58:41.439837 containerd[2011]: time="2026-03-06T00:58:41.439767790Z" level=info msg="connecting to shim e2440ffe36ceff56eb2cb635d4ef7ef3873272d4f76cd7578f12284b335a6aba" address="unix:///run/containerd/s/5b072dbcb31dbae05b29147c10568a1eb54c3b98151aeaef1df62482eda3ccb3" protocol=ttrpc version=3
Mar 6 00:58:41.520598 systemd[1]: Started cri-containerd-e2440ffe36ceff56eb2cb635d4ef7ef3873272d4f76cd7578f12284b335a6aba.scope - libcontainer container e2440ffe36ceff56eb2cb635d4ef7ef3873272d4f76cd7578f12284b335a6aba.
Mar 6 00:58:41.786520 containerd[2011]: time="2026-03-06T00:58:41.785198220Z" level=info msg="StartContainer for \"e2440ffe36ceff56eb2cb635d4ef7ef3873272d4f76cd7578f12284b335a6aba\" returns successfully"
Mar 6 00:58:41.849174 kubelet[3346]: I0306 00:58:41.848800 3346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qg2pp" podStartSLOduration=38.789072884 podStartE2EDuration="54.848770308s" podCreationTimestamp="2026-03-06 00:57:47 +0000 UTC" firstStartedPulling="2026-03-06 00:58:25.306833082 +0000 UTC m=+66.667154648" lastFinishedPulling="2026-03-06 00:58:41.366530506 +0000 UTC m=+82.726852072" observedRunningTime="2026-03-06 00:58:41.846492696 +0000 UTC m=+83.206814310" watchObservedRunningTime="2026-03-06 00:58:41.848770308 +0000 UTC m=+83.209091898"
Mar 6 00:58:42.136601 kubelet[3346]: I0306 00:58:42.135934 3346 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 6 00:58:42.136601 kubelet[3346]: I0306 00:58:42.136048 3346 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 6 00:58:42.188957 systemd[1]: Started sshd@13-172.31.24.181:22-68.220.241.50:55838.service - OpenSSH per-connection server daemon (68.220.241.50:55838).
Mar 6 00:58:42.684885 sshd[6157]: Accepted publickey for core from 68.220.241.50 port 55838 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:42.688241 sshd-session[6157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:42.703949 systemd-logind[1977]: New session 14 of user core.
Mar 6 00:58:42.715104 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 6 00:58:43.186155 sshd[6160]: Connection closed by 68.220.241.50 port 55838
Mar 6 00:58:43.187021 sshd-session[6157]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:43.197268 systemd[1]: sshd@13-172.31.24.181:22-68.220.241.50:55838.service: Deactivated successfully.
Mar 6 00:58:43.206588 systemd[1]: session-14.scope: Deactivated successfully.
Mar 6 00:58:43.211559 systemd-logind[1977]: Session 14 logged out. Waiting for processes to exit.
Mar 6 00:58:43.214632 systemd-logind[1977]: Removed session 14.
Mar 6 00:58:43.286046 systemd[1]: Started sshd@14-172.31.24.181:22-68.220.241.50:55840.service - OpenSSH per-connection server daemon (68.220.241.50:55840).
Mar 6 00:58:43.774737 sshd[6173]: Accepted publickey for core from 68.220.241.50 port 55840 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:43.778626 sshd-session[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:43.793121 systemd-logind[1977]: New session 15 of user core.
Mar 6 00:58:43.803107 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 6 00:58:46.160726 sshd[6201]: Connection closed by 68.220.241.50 port 55840
Mar 6 00:58:46.195167 sshd-session[6173]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:46.203552 systemd[1]: sshd@14-172.31.24.181:22-68.220.241.50:55840.service: Deactivated successfully.
Mar 6 00:58:46.211176 systemd[1]: session-15.scope: Deactivated successfully.
Mar 6 00:58:46.216203 systemd-logind[1977]: Session 15 logged out. Waiting for processes to exit.
Mar 6 00:58:46.220570 systemd-logind[1977]: Removed session 15.
Mar 6 00:58:46.259621 systemd[1]: Started sshd@15-172.31.24.181:22-68.220.241.50:55856.service - OpenSSH per-connection server daemon (68.220.241.50:55856).
Mar 6 00:58:46.750483 sshd[6217]: Accepted publickey for core from 68.220.241.50 port 55856 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:46.752986 sshd-session[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:46.763889 systemd-logind[1977]: New session 16 of user core.
Mar 6 00:58:46.770778 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 6 00:58:48.609499 sshd[6220]: Connection closed by 68.220.241.50 port 55856
Mar 6 00:58:48.608835 sshd-session[6217]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:48.623006 systemd[1]: sshd@15-172.31.24.181:22-68.220.241.50:55856.service: Deactivated successfully.
Mar 6 00:58:48.633405 systemd[1]: session-16.scope: Deactivated successfully.
Mar 6 00:58:48.639835 systemd-logind[1977]: Session 16 logged out. Waiting for processes to exit.
Mar 6 00:58:48.646431 systemd-logind[1977]: Removed session 16.
Mar 6 00:58:48.711388 systemd[1]: Started sshd@16-172.31.24.181:22-68.220.241.50:55858.service - OpenSSH per-connection server daemon (68.220.241.50:55858).
Mar 6 00:58:49.220840 sshd[6253]: Accepted publickey for core from 68.220.241.50 port 55858 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:49.224612 sshd-session[6253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:49.237057 systemd-logind[1977]: New session 17 of user core.
Mar 6 00:58:49.245791 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 6 00:58:50.083506 sshd[6262]: Connection closed by 68.220.241.50 port 55858
Mar 6 00:58:50.085799 sshd-session[6253]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:50.094149 systemd[1]: sshd@16-172.31.24.181:22-68.220.241.50:55858.service: Deactivated successfully.
Mar 6 00:58:50.104302 systemd[1]: session-17.scope: Deactivated successfully.
Mar 6 00:58:50.109036 systemd-logind[1977]: Session 17 logged out. Waiting for processes to exit.
Mar 6 00:58:50.114196 systemd-logind[1977]: Removed session 17.
Mar 6 00:58:50.187537 systemd[1]: Started sshd@17-172.31.24.181:22-68.220.241.50:55864.service - OpenSSH per-connection server daemon (68.220.241.50:55864).
Mar 6 00:58:50.677061 sshd[6272]: Accepted publickey for core from 68.220.241.50 port 55864 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:50.680895 sshd-session[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:50.693918 systemd-logind[1977]: New session 18 of user core.
Mar 6 00:58:50.701922 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 6 00:58:51.104289 sshd[6275]: Connection closed by 68.220.241.50 port 55864
Mar 6 00:58:51.104156 sshd-session[6272]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:51.116376 systemd[1]: sshd@17-172.31.24.181:22-68.220.241.50:55864.service: Deactivated successfully.
Mar 6 00:58:51.126569 systemd[1]: session-18.scope: Deactivated successfully.
Mar 6 00:58:51.131189 systemd-logind[1977]: Session 18 logged out. Waiting for processes to exit.
Mar 6 00:58:51.135963 systemd-logind[1977]: Removed session 18.
Mar 6 00:58:56.204041 systemd[1]: Started sshd@18-172.31.24.181:22-68.220.241.50:49376.service - OpenSSH per-connection server daemon (68.220.241.50:49376).
Mar 6 00:58:56.671583 sshd[6302]: Accepted publickey for core from 68.220.241.50 port 49376 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:58:56.675894 sshd-session[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:58:56.690860 systemd-logind[1977]: New session 19 of user core.
Mar 6 00:58:56.696771 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 6 00:58:57.064695 sshd[6306]: Connection closed by 68.220.241.50 port 49376
Mar 6 00:58:57.065587 sshd-session[6302]: pam_unix(sshd:session): session closed for user core
Mar 6 00:58:57.073782 systemd[1]: sshd@18-172.31.24.181:22-68.220.241.50:49376.service: Deactivated successfully.
Mar 6 00:58:57.079117 systemd[1]: session-19.scope: Deactivated successfully.
Mar 6 00:58:57.082668 systemd-logind[1977]: Session 19 logged out. Waiting for processes to exit.
Mar 6 00:58:57.086068 systemd-logind[1977]: Removed session 19.
Mar 6 00:59:02.160302 systemd[1]: Started sshd@19-172.31.24.181:22-68.220.241.50:51922.service - OpenSSH per-connection server daemon (68.220.241.50:51922).
Mar 6 00:59:02.633754 sshd[6363]: Accepted publickey for core from 68.220.241.50 port 51922 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:59:02.636289 sshd-session[6363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:59:02.645349 systemd-logind[1977]: New session 20 of user core.
Mar 6 00:59:02.653771 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 6 00:59:03.032503 sshd[6366]: Connection closed by 68.220.241.50 port 51922
Mar 6 00:59:03.033753 sshd-session[6363]: pam_unix(sshd:session): session closed for user core
Mar 6 00:59:03.046549 systemd-logind[1977]: Session 20 logged out. Waiting for processes to exit.
Mar 6 00:59:03.047609 systemd[1]: sshd@19-172.31.24.181:22-68.220.241.50:51922.service: Deactivated successfully.
Mar 6 00:59:03.055132 systemd[1]: session-20.scope: Deactivated successfully.
Mar 6 00:59:03.060792 systemd-logind[1977]: Removed session 20.
Mar 6 00:59:08.129198 systemd[1]: Started sshd@20-172.31.24.181:22-68.220.241.50:51926.service - OpenSSH per-connection server daemon (68.220.241.50:51926).
Mar 6 00:59:08.609524 sshd[6427]: Accepted publickey for core from 68.220.241.50 port 51926 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:59:08.611576 sshd-session[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:59:08.619994 systemd-logind[1977]: New session 21 of user core.
Mar 6 00:59:08.635761 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 6 00:59:08.981831 sshd[6430]: Connection closed by 68.220.241.50 port 51926
Mar 6 00:59:08.982996 sshd-session[6427]: pam_unix(sshd:session): session closed for user core
Mar 6 00:59:08.991839 systemd[1]: sshd@20-172.31.24.181:22-68.220.241.50:51926.service: Deactivated successfully.
Mar 6 00:59:08.995715 systemd[1]: session-21.scope: Deactivated successfully.
Mar 6 00:59:08.999391 systemd-logind[1977]: Session 21 logged out. Waiting for processes to exit.
Mar 6 00:59:09.002805 systemd-logind[1977]: Removed session 21.
Mar 6 00:59:14.082612 systemd[1]: Started sshd@21-172.31.24.181:22-68.220.241.50:35164.service - OpenSSH per-connection server daemon (68.220.241.50:35164).
Mar 6 00:59:14.567512 sshd[6468]: Accepted publickey for core from 68.220.241.50 port 35164 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:59:14.570083 sshd-session[6468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:59:14.580953 systemd-logind[1977]: New session 22 of user core.
Mar 6 00:59:14.590767 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 6 00:59:14.941615 sshd[6471]: Connection closed by 68.220.241.50 port 35164
Mar 6 00:59:14.942668 sshd-session[6468]: pam_unix(sshd:session): session closed for user core
Mar 6 00:59:14.952681 systemd[1]: sshd@21-172.31.24.181:22-68.220.241.50:35164.service: Deactivated successfully.
Mar 6 00:59:14.960964 systemd[1]: session-22.scope: Deactivated successfully.
Mar 6 00:59:14.964064 systemd-logind[1977]: Session 22 logged out. Waiting for processes to exit.
Mar 6 00:59:14.968510 systemd-logind[1977]: Removed session 22.
Mar 6 00:59:20.043157 systemd[1]: Started sshd@22-172.31.24.181:22-68.220.241.50:35170.service - OpenSSH per-connection server daemon (68.220.241.50:35170).
Mar 6 00:59:20.552972 sshd[6486]: Accepted publickey for core from 68.220.241.50 port 35170 ssh2: RSA SHA256:JA893NYNzIQjt7fMSNMP1D6ZXPb/xbJKtqqTrt+R/vM
Mar 6 00:59:20.556888 sshd-session[6486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 6 00:59:20.571059 systemd-logind[1977]: New session 23 of user core.
Mar 6 00:59:20.582786 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 6 00:59:20.975485 sshd[6489]: Connection closed by 68.220.241.50 port 35170
Mar 6 00:59:20.974180 sshd-session[6486]: pam_unix(sshd:session): session closed for user core
Mar 6 00:59:20.984262 systemd[1]: sshd@22-172.31.24.181:22-68.220.241.50:35170.service: Deactivated successfully.
Mar 6 00:59:20.994095 systemd[1]: session-23.scope: Deactivated successfully.
Mar 6 00:59:21.000950 systemd-logind[1977]: Session 23 logged out. Waiting for processes to exit.
Mar 6 00:59:21.004734 systemd-logind[1977]: Removed session 23.
Mar 6 00:59:36.064180 systemd[1]: cri-containerd-5051f2c2186922242b72147943513317808fa3a65b40d2a74ebf36b722685b9d.scope: Deactivated successfully.
Mar 6 00:59:36.064857 systemd[1]: cri-containerd-5051f2c2186922242b72147943513317808fa3a65b40d2a74ebf36b722685b9d.scope: Consumed 6.262s CPU time, 58.7M memory peak, 192K read from disk.
Mar 6 00:59:36.078578 containerd[2011]: time="2026-03-06T00:59:36.077718962Z" level=info msg="received container exit event container_id:\"5051f2c2186922242b72147943513317808fa3a65b40d2a74ebf36b722685b9d\" id:\"5051f2c2186922242b72147943513317808fa3a65b40d2a74ebf36b722685b9d\" pid:3180 exit_status:1 exited_at:{seconds:1772758776 nanos:76081358}"
Mar 6 00:59:36.141249 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5051f2c2186922242b72147943513317808fa3a65b40d2a74ebf36b722685b9d-rootfs.mount: Deactivated successfully.
Mar 6 00:59:36.252437 systemd[1]: cri-containerd-24a049a2bead0439dfa4f9bfa04715b8f8bca1c4e857b418a63fc1c7314a619d.scope: Deactivated successfully.
Mar 6 00:59:36.253222 systemd[1]: cri-containerd-24a049a2bead0439dfa4f9bfa04715b8f8bca1c4e857b418a63fc1c7314a619d.scope: Consumed 32.004s CPU time, 111.9M memory peak.
Mar 6 00:59:36.260690 containerd[2011]: time="2026-03-06T00:59:36.260619171Z" level=info msg="received container exit event container_id:\"24a049a2bead0439dfa4f9bfa04715b8f8bca1c4e857b418a63fc1c7314a619d\" id:\"24a049a2bead0439dfa4f9bfa04715b8f8bca1c4e857b418a63fc1c7314a619d\" pid:3862 exit_status:1 exited_at:{seconds:1772758776 nanos:260002719}"
Mar 6 00:59:36.309313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24a049a2bead0439dfa4f9bfa04715b8f8bca1c4e857b418a63fc1c7314a619d-rootfs.mount: Deactivated successfully.
Mar 6 00:59:37.057189 kubelet[3346]: I0306 00:59:37.056716 3346 scope.go:117] "RemoveContainer" containerID="24a049a2bead0439dfa4f9bfa04715b8f8bca1c4e857b418a63fc1c7314a619d"
Mar 6 00:59:37.065127 kubelet[3346]: I0306 00:59:37.064314 3346 scope.go:117] "RemoveContainer" containerID="5051f2c2186922242b72147943513317808fa3a65b40d2a74ebf36b722685b9d"
Mar 6 00:59:37.066687 containerd[2011]: time="2026-03-06T00:59:37.065637219Z" level=info msg="CreateContainer within sandbox \"74c3355d897f259a7583ae9cff87e995ac1df2bea3d6e5131ba913cb064e6ac2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 6 00:59:37.081618 containerd[2011]: time="2026-03-06T00:59:37.081561651Z" level=info msg="Container 03ae1b96c9014795322b034d7d358dff142a40945b01a5dd2357d384e67bf398: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:59:37.085893 containerd[2011]: time="2026-03-06T00:59:37.085819767Z" level=info msg="CreateContainer within sandbox \"dd0a7b863ccf95c72bc534b6a6c4c65c3d9f6e52c5441f6df59ade4df92620d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 6 00:59:37.105891 containerd[2011]: time="2026-03-06T00:59:37.105823935Z" level=info msg="CreateContainer within sandbox \"74c3355d897f259a7583ae9cff87e995ac1df2bea3d6e5131ba913cb064e6ac2\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"03ae1b96c9014795322b034d7d358dff142a40945b01a5dd2357d384e67bf398\""
Mar 6 00:59:37.107082 containerd[2011]: time="2026-03-06T00:59:37.107038707Z" level=info msg="StartContainer for \"03ae1b96c9014795322b034d7d358dff142a40945b01a5dd2357d384e67bf398\""
Mar 6 00:59:37.110240 containerd[2011]: time="2026-03-06T00:59:37.110167791Z" level=info msg="connecting to shim 03ae1b96c9014795322b034d7d358dff142a40945b01a5dd2357d384e67bf398" address="unix:///run/containerd/s/4423a0485b583b8200de28707f46e3ff5f8de4129a3c510c12f09a902a985e2c" protocol=ttrpc version=3
Mar 6 00:59:37.118519 containerd[2011]: time="2026-03-06T00:59:37.118425063Z" level=info msg="Container fb347389ca3e5f3bf95992e82549381539ea29a28e310f5218c27a66014c34a9: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:59:37.155439 containerd[2011]: time="2026-03-06T00:59:37.155193339Z" level=info msg="CreateContainer within sandbox \"dd0a7b863ccf95c72bc534b6a6c4c65c3d9f6e52c5441f6df59ade4df92620d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"fb347389ca3e5f3bf95992e82549381539ea29a28e310f5218c27a66014c34a9\""
Mar 6 00:59:37.156852 containerd[2011]: time="2026-03-06T00:59:37.156788547Z" level=info msg="StartContainer for \"fb347389ca3e5f3bf95992e82549381539ea29a28e310f5218c27a66014c34a9\""
Mar 6 00:59:37.160480 containerd[2011]: time="2026-03-06T00:59:37.160384983Z" level=info msg="connecting to shim fb347389ca3e5f3bf95992e82549381539ea29a28e310f5218c27a66014c34a9" address="unix:///run/containerd/s/037c29725dc1f0b55b57545e2ac9d1cc39e5fb8038f30f795f66e130a26a8f85" protocol=ttrpc version=3
Mar 6 00:59:37.185016 systemd[1]: Started cri-containerd-03ae1b96c9014795322b034d7d358dff142a40945b01a5dd2357d384e67bf398.scope - libcontainer container 03ae1b96c9014795322b034d7d358dff142a40945b01a5dd2357d384e67bf398.
Mar 6 00:59:37.216777 systemd[1]: Started cri-containerd-fb347389ca3e5f3bf95992e82549381539ea29a28e310f5218c27a66014c34a9.scope - libcontainer container fb347389ca3e5f3bf95992e82549381539ea29a28e310f5218c27a66014c34a9.
Mar 6 00:59:37.310586 containerd[2011]: time="2026-03-06T00:59:37.310402564Z" level=info msg="StartContainer for \"03ae1b96c9014795322b034d7d358dff142a40945b01a5dd2357d384e67bf398\" returns successfully"
Mar 6 00:59:37.344975 containerd[2011]: time="2026-03-06T00:59:37.344777584Z" level=info msg="StartContainer for \"fb347389ca3e5f3bf95992e82549381539ea29a28e310f5218c27a66014c34a9\" returns successfully"
Mar 6 00:59:40.520142 systemd[1]: cri-containerd-5dbec2e9285ed282c9f502d836db5445942e901f2bfebf6d59cd3b7f7ba251a9.scope: Deactivated successfully.
Mar 6 00:59:40.521589 systemd[1]: cri-containerd-5dbec2e9285ed282c9f502d836db5445942e901f2bfebf6d59cd3b7f7ba251a9.scope: Consumed 6.146s CPU time, 20.9M memory peak, 64K read from disk.
Mar 6 00:59:40.525345 containerd[2011]: time="2026-03-06T00:59:40.525113768Z" level=info msg="received container exit event container_id:\"5dbec2e9285ed282c9f502d836db5445942e901f2bfebf6d59cd3b7f7ba251a9\" id:\"5dbec2e9285ed282c9f502d836db5445942e901f2bfebf6d59cd3b7f7ba251a9\" pid:3193 exit_status:1 exited_at:{seconds:1772758780 nanos:524703176}"
Mar 6 00:59:40.572875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dbec2e9285ed282c9f502d836db5445942e901f2bfebf6d59cd3b7f7ba251a9-rootfs.mount: Deactivated successfully.
Mar 6 00:59:41.098625 kubelet[3346]: I0306 00:59:41.098567 3346 scope.go:117] "RemoveContainer" containerID="5dbec2e9285ed282c9f502d836db5445942e901f2bfebf6d59cd3b7f7ba251a9"
Mar 6 00:59:41.103508 containerd[2011]: time="2026-03-06T00:59:41.102851419Z" level=info msg="CreateContainer within sandbox \"a4b57e8ffcf4bdbc667b6b44ad82ff1df92c2b9fd0623518ee90afcebeea0327\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 6 00:59:41.131927 containerd[2011]: time="2026-03-06T00:59:41.131875495Z" level=info msg="Container 58ee2fb175e73e782dc8aed329e06d22154d82d8f11dedf7a399a7b35d83aaaf: CDI devices from CRI Config.CDIDevices: []"
Mar 6 00:59:41.133841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2085537839.mount: Deactivated successfully.
Mar 6 00:59:41.152049 containerd[2011]: time="2026-03-06T00:59:41.151993231Z" level=info msg="CreateContainer within sandbox \"a4b57e8ffcf4bdbc667b6b44ad82ff1df92c2b9fd0623518ee90afcebeea0327\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"58ee2fb175e73e782dc8aed329e06d22154d82d8f11dedf7a399a7b35d83aaaf\""
Mar 6 00:59:41.154538 containerd[2011]: time="2026-03-06T00:59:41.154178107Z" level=info msg="StartContainer for \"58ee2fb175e73e782dc8aed329e06d22154d82d8f11dedf7a399a7b35d83aaaf\""
Mar 6 00:59:41.156970 containerd[2011]: time="2026-03-06T00:59:41.156870535Z" level=info msg="connecting to shim 58ee2fb175e73e782dc8aed329e06d22154d82d8f11dedf7a399a7b35d83aaaf" address="unix:///run/containerd/s/8c306c8b206022687567a36c67dd04113a9051c187733079737007eecb8ebb48" protocol=ttrpc version=3
Mar 6 00:59:41.208900 systemd[1]: Started cri-containerd-58ee2fb175e73e782dc8aed329e06d22154d82d8f11dedf7a399a7b35d83aaaf.scope - libcontainer container 58ee2fb175e73e782dc8aed329e06d22154d82d8f11dedf7a399a7b35d83aaaf.
Mar 6 00:59:41.307042 containerd[2011]: time="2026-03-06T00:59:41.306872420Z" level=info msg="StartContainer for \"58ee2fb175e73e782dc8aed329e06d22154d82d8f11dedf7a399a7b35d83aaaf\" returns successfully"
Mar 6 00:59:41.893420 kubelet[3346]: E0306 00:59:41.893316 3346 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-181?timeout=10s\": context deadline exceeded"
Mar 6 00:59:48.857266 systemd[1]: cri-containerd-03ae1b96c9014795322b034d7d358dff142a40945b01a5dd2357d384e67bf398.scope: Deactivated successfully.
Mar 6 00:59:48.862181 containerd[2011]: time="2026-03-06T00:59:48.862111829Z" level=info msg="received container exit event container_id:\"03ae1b96c9014795322b034d7d358dff142a40945b01a5dd2357d384e67bf398\" id:\"03ae1b96c9014795322b034d7d358dff142a40945b01a5dd2357d384e67bf398\" pid:6579 exit_status:1 exited_at:{seconds:1772758788 nanos:861553265}"
Mar 6 00:59:48.907543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03ae1b96c9014795322b034d7d358dff142a40945b01a5dd2357d384e67bf398-rootfs.mount: Deactivated successfully.
Mar 6 00:59:49.138861 kubelet[3346]: I0306 00:59:49.138737 3346 scope.go:117] "RemoveContainer" containerID="03ae1b96c9014795322b034d7d358dff142a40945b01a5dd2357d384e67bf398"
Mar 6 00:59:49.140064 kubelet[3346]: I0306 00:59:49.140035 3346 scope.go:117] "RemoveContainer" containerID="24a049a2bead0439dfa4f9bfa04715b8f8bca1c4e857b418a63fc1c7314a619d"
Mar 6 00:59:49.141576 kubelet[3346]: E0306 00:59:49.141422 3346 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6bf85f8dd-szc59_tigera-operator(d782ff88-4444-46ee-aba6-b22f365d1f22)\"" pod="tigera-operator/tigera-operator-6bf85f8dd-szc59" podUID="d782ff88-4444-46ee-aba6-b22f365d1f22"
Mar 6 00:59:49.144424 containerd[2011]: time="2026-03-06T00:59:49.144368775Z" level=info msg="RemoveContainer for \"24a049a2bead0439dfa4f9bfa04715b8f8bca1c4e857b418a63fc1c7314a619d\""
Mar 6 00:59:49.155092 containerd[2011]: time="2026-03-06T00:59:49.154737735Z" level=info msg="RemoveContainer for \"24a049a2bead0439dfa4f9bfa04715b8f8bca1c4e857b418a63fc1c7314a619d\" returns successfully"
Mar 6 00:59:51.894409 kubelet[3346]: E0306 00:59:51.893847 3346 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-181?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"