Apr 16 23:31:35.145391 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 16 23:31:35.145432 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Apr 16 22:10:49 -00 2026
Apr 16 23:31:35.145456 kernel: KASLR disabled due to lack of seed
Apr 16 23:31:35.145472 kernel: efi: EFI v2.7 by EDK II
Apr 16 23:31:35.145487 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78557598
Apr 16 23:31:35.145502 kernel: secureboot: Secure boot disabled
Apr 16 23:31:35.145519 kernel: ACPI: Early table checksum verification disabled
Apr 16 23:31:35.145534 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 16 23:31:35.145549 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 16 23:31:35.145564 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 16 23:31:35.145580 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 16 23:31:35.145598 kernel: ACPI: FACS 0x0000000078630000 000040
Apr 16 23:31:35.145613 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 16 23:31:35.145629 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 16 23:31:35.145647 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 16 23:31:35.145663 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 16 23:31:35.145682 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 16 23:31:35.145698 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 16 23:31:35.145714 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 16 23:31:35.145730 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 16 23:31:35.145746 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 16 23:31:35.145762 kernel: printk: legacy bootconsole [uart0] enabled
Apr 16 23:31:35.145778 kernel: ACPI: Use ACPI SPCR as default console: Yes
Apr 16 23:31:35.145794 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 16 23:31:35.145811 kernel: NODE_DATA(0) allocated [mem 0x4b584ea00-0x4b5855fff]
Apr 16 23:31:35.145827 kernel: Zone ranges:
Apr 16 23:31:35.145842 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 16 23:31:35.145862 kernel: DMA32 empty
Apr 16 23:31:35.145878 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 16 23:31:35.145894 kernel: Device empty
Apr 16 23:31:35.145909 kernel: Movable zone start for each node
Apr 16 23:31:35.145925 kernel: Early memory node ranges
Apr 16 23:31:35.145941 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 16 23:31:35.145957 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 16 23:31:35.145973 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 16 23:31:35.145989 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 16 23:31:35.146005 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 16 23:31:35.146021 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 16 23:31:35.146037 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 16 23:31:35.146057 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 16 23:31:35.146079 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 16 23:31:35.146096 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 16 23:31:35.146113 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Apr 16 23:31:35.146130 kernel: psci: probing for conduit method from ACPI.
Apr 16 23:31:35.146150 kernel: psci: PSCIv1.0 detected in firmware.
Apr 16 23:31:35.146167 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 16 23:31:35.146184 kernel: psci: Trusted OS migration not required
Apr 16 23:31:35.146258 kernel: psci: SMC Calling Convention v1.1
Apr 16 23:31:35.146281 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Apr 16 23:31:35.146299 kernel: percpu: Embedded 33 pages/cpu s97752 r8192 d29224 u135168
Apr 16 23:31:35.146317 kernel: pcpu-alloc: s97752 r8192 d29224 u135168 alloc=33*4096
Apr 16 23:31:35.146334 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 16 23:31:35.146351 kernel: Detected PIPT I-cache on CPU0
Apr 16 23:31:35.146368 kernel: CPU features: detected: GIC system register CPU interface
Apr 16 23:31:35.146385 kernel: CPU features: detected: Spectre-v2
Apr 16 23:31:35.146407 kernel: CPU features: detected: Spectre-v3a
Apr 16 23:31:35.146424 kernel: CPU features: detected: Spectre-BHB
Apr 16 23:31:35.146440 kernel: CPU features: detected: ARM erratum 1742098
Apr 16 23:31:35.146457 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 16 23:31:35.146474 kernel: alternatives: applying boot alternatives
Apr 16 23:31:35.146493 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c4961845f9869114226296d88644496bf9e4629823927a5e8ae22de79f1c7b59
Apr 16 23:31:35.146510 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 23:31:35.146527 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 23:31:35.146544 kernel: Fallback order for Node 0: 0
Apr 16 23:31:35.146560 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Apr 16 23:31:35.146577 kernel: Policy zone: Normal
Apr 16 23:31:35.146597 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 23:31:35.146614 kernel: software IO TLB: area num 2.
Apr 16 23:31:35.146631 kernel: software IO TLB: mapped [mem 0x0000000074557000-0x0000000078557000] (64MB)
Apr 16 23:31:35.146647 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 16 23:31:35.146664 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 23:31:35.146681 kernel: rcu: RCU event tracing is enabled.
Apr 16 23:31:35.146699 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 16 23:31:35.146716 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 23:31:35.146733 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 23:31:35.146750 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 23:31:35.146767 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 16 23:31:35.146788 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:31:35.146805 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:31:35.146822 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 16 23:31:35.146838 kernel: GICv3: 96 SPIs implemented
Apr 16 23:31:35.146854 kernel: GICv3: 0 Extended SPIs implemented
Apr 16 23:31:35.146871 kernel: Root IRQ handler: gic_handle_irq
Apr 16 23:31:35.146888 kernel: GICv3: GICv3 features: 16 PPIs
Apr 16 23:31:35.146904 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Apr 16 23:31:35.146921 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 16 23:31:35.146938 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 16 23:31:35.146955 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Apr 16 23:31:35.146972 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Apr 16 23:31:35.146992 kernel: GICv3: using LPI property table @0x0000000400110000
Apr 16 23:31:35.147009 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 16 23:31:35.147026 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Apr 16 23:31:35.147042 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 23:31:35.147059 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 16 23:31:35.147076 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 16 23:31:35.147093 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 16 23:31:35.147110 kernel: Console: colour dummy device 80x25
Apr 16 23:31:35.147127 kernel: printk: legacy console [tty1] enabled
Apr 16 23:31:35.147144 kernel: ACPI: Core revision 20240827
Apr 16 23:31:35.147162 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 16 23:31:35.147183 kernel: pid_max: default: 32768 minimum: 301
Apr 16 23:31:35.147215 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 16 23:31:35.147238 kernel: landlock: Up and running.
Apr 16 23:31:35.147255 kernel: SELinux: Initializing.
Apr 16 23:31:35.147273 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 23:31:35.147290 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 23:31:35.147307 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 23:31:35.147324 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 23:31:35.147347 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 16 23:31:35.147364 kernel: Remapping and enabling EFI services.
Apr 16 23:31:35.147381 kernel: smp: Bringing up secondary CPUs ...
Apr 16 23:31:35.147397 kernel: Detected PIPT I-cache on CPU1
Apr 16 23:31:35.147414 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 16 23:31:35.147431 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Apr 16 23:31:35.147449 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 16 23:31:35.147465 kernel: smp: Brought up 1 node, 2 CPUs
Apr 16 23:31:35.147482 kernel: SMP: Total of 2 processors activated.
Apr 16 23:31:35.147503 kernel: CPU: All CPU(s) started at EL1
Apr 16 23:31:35.147530 kernel: CPU features: detected: 32-bit EL0 Support
Apr 16 23:31:35.147549 kernel: CPU features: detected: 32-bit EL1 Support
Apr 16 23:31:35.147569 kernel: CPU features: detected: CRC32 instructions
Apr 16 23:31:35.147588 kernel: alternatives: applying system-wide alternatives
Apr 16 23:31:35.147606 kernel: Memory: 3796264K/4030464K available (11200K kernel code, 2458K rwdata, 9092K rodata, 39552K init, 1038K bss, 212848K reserved, 16384K cma-reserved)
Apr 16 23:31:35.147625 kernel: devtmpfs: initialized
Apr 16 23:31:35.147643 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 23:31:35.147665 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 16 23:31:35.147683 kernel: 16864 pages in range for non-PLT usage
Apr 16 23:31:35.147701 kernel: 508384 pages in range for PLT usage
Apr 16 23:31:35.147719 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 23:31:35.147737 kernel: SMBIOS 3.0.0 present.
Apr 16 23:31:35.147755 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 16 23:31:35.147774 kernel: DMI: Memory slots populated: 0/0
Apr 16 23:31:35.147792 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 23:31:35.147810 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 16 23:31:35.147832 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 16 23:31:35.147850 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 16 23:31:35.147868 kernel: audit: initializing netlink subsys (disabled)
Apr 16 23:31:35.147886 kernel: audit: type=2000 audit(0.227:1): state=initialized audit_enabled=0 res=1
Apr 16 23:31:35.147903 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 23:31:35.147921 kernel: cpuidle: using governor menu
Apr 16 23:31:35.147939 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 16 23:31:35.147957 kernel: ASID allocator initialised with 65536 entries
Apr 16 23:31:35.147974 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 23:31:35.147996 kernel: Serial: AMBA PL011 UART driver
Apr 16 23:31:35.148014 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 23:31:35.148032 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 23:31:35.148049 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 16 23:31:35.148067 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 16 23:31:35.148085 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 23:31:35.148103 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 23:31:35.148121 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 16 23:31:35.148139 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 16 23:31:35.148161 kernel: ACPI: Added _OSI(Module Device)
Apr 16 23:31:35.148179 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 23:31:35.148196 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 23:31:35.148240 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 23:31:35.148259 kernel: ACPI: Interpreter enabled
Apr 16 23:31:35.148277 kernel: ACPI: Using GIC for interrupt routing
Apr 16 23:31:35.148295 kernel: ACPI: MCFG table detected, 1 entries
Apr 16 23:31:35.148313 kernel: ACPI: CPU0 has been hot-added
Apr 16 23:31:35.148331 kernel: ACPI: CPU1 has been hot-added
Apr 16 23:31:35.148354 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Apr 16 23:31:35.148631 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 23:31:35.148840 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 16 23:31:35.149030 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 16 23:31:35.149235 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Apr 16 23:31:35.149427 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Apr 16 23:31:35.149451 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 16 23:31:35.149476 kernel: acpiphp: Slot [1] registered
Apr 16 23:31:35.149494 kernel: acpiphp: Slot [2] registered
Apr 16 23:31:35.149512 kernel: acpiphp: Slot [3] registered
Apr 16 23:31:35.149529 kernel: acpiphp: Slot [4] registered
Apr 16 23:31:35.149547 kernel: acpiphp: Slot [5] registered
Apr 16 23:31:35.149565 kernel: acpiphp: Slot [6] registered
Apr 16 23:31:35.149583 kernel: acpiphp: Slot [7] registered
Apr 16 23:31:35.149600 kernel: acpiphp: Slot [8] registered
Apr 16 23:31:35.149618 kernel: acpiphp: Slot [9] registered
Apr 16 23:31:35.149635 kernel: acpiphp: Slot [10] registered
Apr 16 23:31:35.149657 kernel: acpiphp: Slot [11] registered
Apr 16 23:31:35.149675 kernel: acpiphp: Slot [12] registered
Apr 16 23:31:35.149693 kernel: acpiphp: Slot [13] registered
Apr 16 23:31:35.149711 kernel: acpiphp: Slot [14] registered
Apr 16 23:31:35.149728 kernel: acpiphp: Slot [15] registered
Apr 16 23:31:35.149746 kernel: acpiphp: Slot [16] registered
Apr 16 23:31:35.149764 kernel: acpiphp: Slot [17] registered
Apr 16 23:31:35.149781 kernel: acpiphp: Slot [18] registered
Apr 16 23:31:35.149799 kernel: acpiphp: Slot [19] registered
Apr 16 23:31:35.149820 kernel: acpiphp: Slot [20] registered
Apr 16 23:31:35.149838 kernel: acpiphp: Slot [21] registered
Apr 16 23:31:35.149855 kernel: acpiphp: Slot [22] registered
Apr 16 23:31:35.149873 kernel: acpiphp: Slot [23] registered
Apr 16 23:31:35.149891 kernel: acpiphp: Slot [24] registered
Apr 16 23:31:35.149909 kernel: acpiphp: Slot [25] registered
Apr 16 23:31:35.149926 kernel: acpiphp: Slot [26] registered
Apr 16 23:31:35.149944 kernel: acpiphp: Slot [27] registered
Apr 16 23:31:35.149962 kernel: acpiphp: Slot [28] registered
Apr 16 23:31:35.149979 kernel: acpiphp: Slot [29] registered
Apr 16 23:31:35.150001 kernel: acpiphp: Slot [30] registered
Apr 16 23:31:35.150019 kernel: acpiphp: Slot [31] registered
Apr 16 23:31:35.150036 kernel: PCI host bridge to bus 0000:00
Apr 16 23:31:35.150253 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 16 23:31:35.150428 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 16 23:31:35.150597 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 16 23:31:35.150763 kernel: pci_bus 0000:00: root bus resource [bus 00]
Apr 16 23:31:35.150986 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Apr 16 23:31:35.151214 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Apr 16 23:31:35.151421 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Apr 16 23:31:35.151639 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Apr 16 23:31:35.151837 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Apr 16 23:31:35.152026 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 16 23:31:35.152253 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Apr 16 23:31:35.152449 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Apr 16 23:31:35.152636 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Apr 16 23:31:35.152842 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Apr 16 23:31:35.153033 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 16 23:31:35.153218 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 16 23:31:35.153399 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 16 23:31:35.153575 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 16 23:31:35.153600 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 16 23:31:35.153618 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 16 23:31:35.153637 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 16 23:31:35.153655 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 16 23:31:35.153673 kernel: iommu: Default domain type: Translated
Apr 16 23:31:35.153691 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 16 23:31:35.153709 kernel: efivars: Registered efivars operations
Apr 16 23:31:35.153727 kernel: vgaarb: loaded
Apr 16 23:31:35.153749 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 16 23:31:35.153767 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 23:31:35.153785 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 23:31:35.153803 kernel: pnp: PnP ACPI init
Apr 16 23:31:35.153994 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 16 23:31:35.154021 kernel: pnp: PnP ACPI: found 1 devices
Apr 16 23:31:35.154040 kernel: NET: Registered PF_INET protocol family
Apr 16 23:31:35.154058 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 23:31:35.154082 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 23:31:35.154101 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 23:31:35.154119 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 23:31:35.154137 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 23:31:35.154155 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 23:31:35.154173 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 23:31:35.154192 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 23:31:35.154235 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 23:31:35.154255 kernel: PCI: CLS 0 bytes, default 64
Apr 16 23:31:35.154279 kernel: kvm [1]: HYP mode not available
Apr 16 23:31:35.154297 kernel: Initialise system trusted keyrings
Apr 16 23:31:35.154315 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 16 23:31:35.154333 kernel: Key type asymmetric registered
Apr 16 23:31:35.154350 kernel: Asymmetric key parser 'x509' registered
Apr 16 23:31:35.154368 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Apr 16 23:31:35.154386 kernel: io scheduler mq-deadline registered
Apr 16 23:31:35.154404 kernel: io scheduler kyber registered
Apr 16 23:31:35.154422 kernel: io scheduler bfq registered
Apr 16 23:31:35.154647 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 16 23:31:35.154675 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 16 23:31:35.154693 kernel: ACPI: button: Power Button [PWRB]
Apr 16 23:31:35.154711 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 16 23:31:35.154729 kernel: ACPI: button: Sleep Button [SLPB]
Apr 16 23:31:35.154747 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 23:31:35.154766 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 16 23:31:35.154958 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 16 23:31:35.154989 kernel: printk: legacy console [ttyS0] disabled
Apr 16 23:31:35.155008 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 16 23:31:35.155026 kernel: printk: legacy console [ttyS0] enabled
Apr 16 23:31:35.155044 kernel: printk: legacy bootconsole [uart0] disabled
Apr 16 23:31:35.155063 kernel: thunder_xcv, ver 1.0
Apr 16 23:31:35.155081 kernel: thunder_bgx, ver 1.0
Apr 16 23:31:35.155099 kernel: nicpf, ver 1.0
Apr 16 23:31:35.155117 kernel: nicvf, ver 1.0
Apr 16 23:31:35.155470 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 16 23:31:35.155666 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-16T23:31:34 UTC (1776382294)
Apr 16 23:31:35.155691 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 16 23:31:35.155710 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Apr 16 23:31:35.155728 kernel: watchdog: NMI not fully supported
Apr 16 23:31:35.155746 kernel: NET: Registered PF_INET6 protocol family
Apr 16 23:31:35.155764 kernel: watchdog: Hard watchdog permanently disabled
Apr 16 23:31:35.155782 kernel: Segment Routing with IPv6
Apr 16 23:31:35.155800 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 23:31:35.155818 kernel: NET: Registered PF_PACKET protocol family
Apr 16 23:31:35.155841 kernel: Key type dns_resolver registered
Apr 16 23:31:35.155858 kernel: registered taskstats version 1
Apr 16 23:31:35.155876 kernel: Loading compiled-in X.509 certificates
Apr 16 23:31:35.155894 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 4acad53138393591155ecb80320b4c1550e344f8'
Apr 16 23:31:35.155912 kernel: Demotion targets for Node 0: null
Apr 16 23:31:35.155929 kernel: Key type .fscrypt registered
Apr 16 23:31:35.155947 kernel: Key type fscrypt-provisioning registered
Apr 16 23:31:35.155965 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 16 23:31:35.155983 kernel: ima: Allocated hash algorithm: sha1
Apr 16 23:31:35.156005 kernel: ima: No architecture policies found
Apr 16 23:31:35.156023 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 16 23:31:35.156041 kernel: clk: Disabling unused clocks
Apr 16 23:31:35.156059 kernel: PM: genpd: Disabling unused power domains
Apr 16 23:31:35.156077 kernel: Warning: unable to open an initial console.
Apr 16 23:31:35.156095 kernel: Freeing unused kernel memory: 39552K
Apr 16 23:31:35.156113 kernel: Run /init as init process
Apr 16 23:31:35.156131 kernel: with arguments:
Apr 16 23:31:35.156148 kernel: /init
Apr 16 23:31:35.156170 kernel: with environment:
Apr 16 23:31:35.156188 kernel: HOME=/
Apr 16 23:31:35.156234 kernel: TERM=linux
Apr 16 23:31:35.156258 systemd[1]: Successfully made /usr/ read-only.
Apr 16 23:31:35.156282 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 23:31:35.156303 systemd[1]: Detected virtualization amazon.
Apr 16 23:31:35.156323 systemd[1]: Detected architecture arm64.
Apr 16 23:31:35.156347 systemd[1]: Running in initrd.
Apr 16 23:31:35.156367 systemd[1]: No hostname configured, using default hostname.
Apr 16 23:31:35.156387 systemd[1]: Hostname set to .
Apr 16 23:31:35.156406 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 23:31:35.156425 systemd[1]: Queued start job for default target initrd.target.
Apr 16 23:31:35.156445 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:31:35.156464 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:31:35.156484 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 23:31:35.156508 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 23:31:35.156528 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 23:31:35.156549 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 23:31:35.156571 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 23:31:35.156591 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 23:31:35.156611 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:31:35.156630 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:31:35.156654 systemd[1]: Reached target paths.target - Path Units.
Apr 16 23:31:35.156673 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 23:31:35.156692 systemd[1]: Reached target swap.target - Swaps.
Apr 16 23:31:35.156712 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 23:31:35.156731 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 23:31:35.156751 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 23:31:35.156770 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 23:31:35.156808 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 16 23:31:35.156830 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:31:35.156856 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:31:35.156875 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:31:35.156895 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 23:31:35.156914 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 16 23:31:35.156934 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 23:31:35.156953 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 16 23:31:35.156973 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 16 23:31:35.156993 systemd[1]: Starting systemd-fsck-usr.service...
Apr 16 23:31:35.157016 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 23:31:35.157036 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 23:31:35.157055 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:31:35.157074 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 16 23:31:35.157095 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:31:35.157119 systemd[1]: Finished systemd-fsck-usr.service.
Apr 16 23:31:35.157139 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 23:31:35.157158 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 23:31:35.157177 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:31:35.157196 kernel: Bridge firewalling registered
Apr 16 23:31:35.157248 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:31:35.157305 systemd-journald[257]: Collecting audit messages is disabled.
Apr 16 23:31:35.157355 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 23:31:35.157376 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 23:31:35.157396 systemd-journald[257]: Journal started
Apr 16 23:31:35.157436 systemd-journald[257]: Runtime Journal (/run/log/journal/ec263eedfc20e8f66a7b855b56134573) is 8M, max 75.3M, 67.3M free.
Apr 16 23:31:35.091830 systemd-modules-load[260]: Inserted module 'overlay'
Apr 16 23:31:35.129603 systemd-modules-load[260]: Inserted module 'br_netfilter'
Apr 16 23:31:35.172248 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 23:31:35.192234 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 23:31:35.200843 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 23:31:35.211190 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 23:31:35.221878 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:31:35.231428 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 23:31:35.241460 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 23:31:35.255175 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:31:35.276776 systemd-tmpfiles[295]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 16 23:31:35.287916 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:31:35.297456 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 23:31:35.304336 dracut-cmdline[294]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c4961845f9869114226296d88644496bf9e4629823927a5e8ae22de79f1c7b59
Apr 16 23:31:35.386328 systemd-resolved[309]: Positive Trust Anchors:
Apr 16 23:31:35.386354 systemd-resolved[309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 23:31:35.386414 systemd-resolved[309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 23:31:35.464256 kernel: SCSI subsystem initialized
Apr 16 23:31:35.472262 kernel: Loading iSCSI transport class v2.0-870.
Apr 16 23:31:35.484261 kernel: iscsi: registered transport (tcp)
Apr 16 23:31:35.506254 kernel: iscsi: registered transport (qla4xxx)
Apr 16 23:31:35.506328 kernel: QLogic iSCSI HBA Driver
Apr 16 23:31:35.536754 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 23:31:35.565450 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:31:35.578076 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 23:31:35.654247 kernel: random: crng init done
Apr 16 23:31:35.654749 systemd-resolved[309]: Defaulting to hostname 'linux'.
Apr 16 23:31:35.657721 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 23:31:35.663488 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:31:35.687457 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 16 23:31:35.696504 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 16 23:31:35.798230 kernel: raid6: neonx8 gen() 6508 MB/s
Apr 16 23:31:35.800246 kernel: raid6: neonx4 gen() 6583 MB/s
Apr 16 23:31:35.816236 kernel: raid6: neonx2 gen() 5459 MB/s
Apr 16 23:31:35.833237 kernel: raid6: neonx1 gen() 3950 MB/s
Apr 16 23:31:35.850235 kernel: raid6: int64x8 gen() 3665 MB/s
Apr 16 23:31:35.867235 kernel: raid6: int64x4 gen() 3717 MB/s
Apr 16 23:31:35.884234 kernel: raid6: int64x2 gen() 3604 MB/s
Apr 16 23:31:35.902277 kernel: raid6: int64x1 gen() 2755 MB/s
Apr 16 23:31:35.902323 kernel: raid6: using algorithm neonx4 gen() 6583 MB/s
Apr 16 23:31:35.921262 kernel: raid6: .... xor() 4648 MB/s, rmw enabled
Apr 16 23:31:35.921303 kernel: raid6: using neon recovery algorithm
Apr 16 23:31:35.930036 kernel: xor: measuring software checksum speed
Apr 16 23:31:35.930101 kernel: 8regs : 12317 MB/sec
Apr 16 23:31:35.931230 kernel: 32regs : 12049 MB/sec
Apr 16 23:31:35.933544 kernel: arm64_neon : 8706 MB/sec
Apr 16 23:31:35.933577 kernel: xor: using function: 8regs (12317 MB/sec)
Apr 16 23:31:36.026259 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 16 23:31:36.038291 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 23:31:36.046816 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:31:36.097397 systemd-udevd[507]: Using default interface naming scheme 'v255'.
Apr 16 23:31:36.109446 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:31:36.115127 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 16 23:31:36.154541 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation
Apr 16 23:31:36.200306 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 23:31:36.201900 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 23:31:36.329039 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:31:36.333627 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 16 23:31:36.503588 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Apr 16 23:31:36.503665 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Apr 16 23:31:36.506626 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:31:36.509348 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:31:36.518122 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 16 23:31:36.524891 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 16 23:31:36.518298 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:31:36.528516 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:31:36.540999 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Apr 16 23:31:36.541040 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 16 23:31:36.537064 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:31:36.550259 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:e2:f8:6a:94:1f
Apr 16 23:31:36.553231 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 16 23:31:36.570467 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 16 23:31:36.570539 kernel: GPT:9289727 != 33554431
Apr 16 23:31:36.570573 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 16 23:31:36.573196 kernel: GPT:9289727 != 33554431
Apr 16 23:31:36.573246 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 16 23:31:36.574292 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 16 23:31:36.582666 (udev-worker)[562]: Network interface NamePolicy= disabled on kernel command line.
Apr 16 23:31:36.583004 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:31:36.627258 kernel: nvme nvme0: using unchecked data buffer
Apr 16 23:31:36.741645 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 16 23:31:36.802077 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 16 23:31:36.826555 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 16 23:31:36.833257 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 16 23:31:36.840401 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 16 23:31:36.881194 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 16 23:31:36.885852 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 23:31:36.891938 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:31:36.897769 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 23:31:36.903595 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 16 23:31:36.908384 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 16 23:31:36.931466 disk-uuid[687]: Primary Header is updated.
Apr 16 23:31:36.931466 disk-uuid[687]: Secondary Entries is updated.
Apr 16 23:31:36.931466 disk-uuid[687]: Secondary Header is updated.
Apr 16 23:31:36.944264 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 16 23:31:36.945301 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 23:31:37.972239 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 16 23:31:37.972696 disk-uuid[692]: The operation has completed successfully.
Apr 16 23:31:38.165730 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 16 23:31:38.165943 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 16 23:31:38.254705 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 16 23:31:38.274566 sh[955]: Success
Apr 16 23:31:38.302726 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 16 23:31:38.302800 kernel: device-mapper: uevent: version 1.0.3
Apr 16 23:31:38.304854 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 16 23:31:38.318259 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Apr 16 23:31:38.426592 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 16 23:31:38.434598 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 16 23:31:38.452919 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 16 23:31:38.483268 kernel: BTRFS: device fsid 10cedb9e-43f1-4d98-9b55-3b84c3a61868 devid 1 transid 33 /dev/mapper/usr (254:0) scanned by mount (990)
Apr 16 23:31:38.488628 kernel: BTRFS info (device dm-0): first mount of filesystem 10cedb9e-43f1-4d98-9b55-3b84c3a61868
Apr 16 23:31:38.488679 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Apr 16 23:31:38.558263 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Apr 16 23:31:38.558329 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 16 23:31:38.558355 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 16 23:31:38.562162 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 16 23:31:38.565984 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 23:31:38.569905 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 16 23:31:38.571173 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 16 23:31:38.579139 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 16 23:31:38.632248 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1022)
Apr 16 23:31:38.637245 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:31:38.637351 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 23:31:38.657052 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 16 23:31:38.657129 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Apr 16 23:31:38.667253 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:31:38.669902 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 16 23:31:38.674967 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 16 23:31:38.781394 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 23:31:38.792580 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 23:31:38.880541 systemd-networkd[1162]: lo: Link UP
Apr 16 23:31:38.880563 systemd-networkd[1162]: lo: Gained carrier
Apr 16 23:31:38.885075 systemd-networkd[1162]: Enumeration completed
Apr 16 23:31:38.885248 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 23:31:38.886643 systemd-networkd[1162]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:31:38.886650 systemd-networkd[1162]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:31:38.895137 systemd-networkd[1162]: eth0: Link UP
Apr 16 23:31:38.895145 systemd-networkd[1162]: eth0: Gained carrier
Apr 16 23:31:38.895168 systemd-networkd[1162]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:31:38.922853 systemd[1]: Reached target network.target - Network.
Apr 16 23:31:38.940295 systemd-networkd[1162]: eth0: DHCPv4 address 172.31.16.254/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 16 23:31:38.994553 ignition[1081]: Ignition 2.22.0
Apr 16 23:31:38.994598 ignition[1081]: Stage: fetch-offline
Apr 16 23:31:38.997986 ignition[1081]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:31:38.998040 ignition[1081]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 16 23:31:39.000952 ignition[1081]: Ignition finished successfully
Apr 16 23:31:39.003839 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 23:31:39.009504 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 16 23:31:39.054611 ignition[1173]: Ignition 2.22.0
Apr 16 23:31:39.055106 ignition[1173]: Stage: fetch
Apr 16 23:31:39.055704 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:31:39.055727 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 16 23:31:39.055894 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 16 23:31:39.082371 ignition[1173]: PUT result: OK
Apr 16 23:31:39.086082 ignition[1173]: parsed url from cmdline: ""
Apr 16 23:31:39.086260 ignition[1173]: no config URL provided
Apr 16 23:31:39.086279 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign"
Apr 16 23:31:39.086343 ignition[1173]: no config at "/usr/lib/ignition/user.ign"
Apr 16 23:31:39.086378 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 16 23:31:39.090734 ignition[1173]: PUT result: OK
Apr 16 23:31:39.090845 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 16 23:31:39.099407 ignition[1173]: GET result: OK
Apr 16 23:31:39.099575 ignition[1173]: parsing config with SHA512: 754c9f148283653a9cf5f886edc607ed26c2038b8741b1a1e8715b8a60571a60d859d4f9b12af0c9f9bd673b502f1dd05274e4b8fa68585f40d01f295fbf8443
Apr 16 23:31:39.114739 unknown[1173]: fetched base config from "system"
Apr 16 23:31:39.114780 unknown[1173]: fetched base config from "system"
Apr 16 23:31:39.116146 ignition[1173]: fetch: fetch complete
Apr 16 23:31:39.114795 unknown[1173]: fetched user config from "aws"
Apr 16 23:31:39.116166 ignition[1173]: fetch: fetch passed
Apr 16 23:31:39.116288 ignition[1173]: Ignition finished successfully
Apr 16 23:31:39.127618 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 16 23:31:39.134421 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 16 23:31:39.190011 ignition[1179]: Ignition 2.22.0
Apr 16 23:31:39.190621 ignition[1179]: Stage: kargs
Apr 16 23:31:39.191162 ignition[1179]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:31:39.191185 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 16 23:31:39.191355 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 16 23:31:39.195651 ignition[1179]: PUT result: OK
Apr 16 23:31:39.205650 ignition[1179]: kargs: kargs passed
Apr 16 23:31:39.205942 ignition[1179]: Ignition finished successfully
Apr 16 23:31:39.211807 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 16 23:31:39.219505 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 16 23:31:39.268703 ignition[1185]: Ignition 2.22.0
Apr 16 23:31:39.269262 ignition[1185]: Stage: disks
Apr 16 23:31:39.269782 ignition[1185]: no configs at "/usr/lib/ignition/base.d"
Apr 16 23:31:39.269805 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 16 23:31:39.269931 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 16 23:31:39.276531 ignition[1185]: PUT result: OK
Apr 16 23:31:39.283376 ignition[1185]: disks: disks passed
Apr 16 23:31:39.283482 ignition[1185]: Ignition finished successfully
Apr 16 23:31:39.288233 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 16 23:31:39.293268 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 16 23:31:39.295959 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 16 23:31:39.300901 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 23:31:39.305604 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 23:31:39.309783 systemd[1]: Reached target basic.target - Basic System.
Apr 16 23:31:39.317342 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 16 23:31:39.359950 systemd-fsck[1193]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Apr 16 23:31:39.366853 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 16 23:31:39.373795 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 16 23:31:39.497237 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 717eabe0-7ee2-4bf7-a9aa-0d27bb05c125 r/w with ordered data mode. Quota mode: none.
Apr 16 23:31:39.498496 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 16 23:31:39.499325 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 16 23:31:39.509040 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 23:31:39.514755 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 16 23:31:39.520554 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 16 23:31:39.520658 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 16 23:31:39.520709 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 23:31:39.547063 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 16 23:31:39.553271 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 16 23:31:39.571277 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1212)
Apr 16 23:31:39.576020 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:31:39.576083 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 23:31:39.584840 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 16 23:31:39.584904 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Apr 16 23:31:39.587302 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 23:31:39.676293 initrd-setup-root[1236]: cut: /sysroot/etc/passwd: No such file or directory
Apr 16 23:31:39.688130 initrd-setup-root[1243]: cut: /sysroot/etc/group: No such file or directory
Apr 16 23:31:39.697011 initrd-setup-root[1250]: cut: /sysroot/etc/shadow: No such file or directory
Apr 16 23:31:39.704852 initrd-setup-root[1257]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 16 23:31:39.865280 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 16 23:31:39.866907 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 16 23:31:39.885326 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 16 23:31:39.900883 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 16 23:31:39.909004 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:31:39.961984 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 16 23:31:39.970237 ignition[1324]: INFO : Ignition 2.22.0
Apr 16 23:31:39.970237 ignition[1324]: INFO : Stage: mount
Apr 16 23:31:39.970237 ignition[1324]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:31:39.970237 ignition[1324]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 16 23:31:39.970237 ignition[1324]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 16 23:31:39.982059 ignition[1324]: INFO : PUT result: OK
Apr 16 23:31:39.987289 ignition[1324]: INFO : mount: mount passed
Apr 16 23:31:39.989665 ignition[1324]: INFO : Ignition finished successfully
Apr 16 23:31:39.993242 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 16 23:31:39.999573 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 16 23:31:40.501353 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 16 23:31:40.540251 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1336)
Apr 16 23:31:40.545413 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d
Apr 16 23:31:40.545574 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Apr 16 23:31:40.554651 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 16 23:31:40.554733 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Apr 16 23:31:40.558126 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 16 23:31:40.607654 ignition[1352]: INFO : Ignition 2.22.0
Apr 16 23:31:40.607654 ignition[1352]: INFO : Stage: files
Apr 16 23:31:40.611306 ignition[1352]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:31:40.611306 ignition[1352]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 16 23:31:40.616221 ignition[1352]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 16 23:31:40.620832 ignition[1352]: INFO : PUT result: OK
Apr 16 23:31:40.630408 ignition[1352]: DEBUG : files: compiled without relabeling support, skipping
Apr 16 23:31:40.633337 ignition[1352]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 16 23:31:40.633337 ignition[1352]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 16 23:31:40.639937 ignition[1352]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 16 23:31:40.644427 ignition[1352]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 16 23:31:40.647974 unknown[1352]: wrote ssh authorized keys file for user: core
Apr 16 23:31:40.650777 ignition[1352]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 16 23:31:40.655251 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 16 23:31:40.659681 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Apr 16 23:31:40.745389 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 16 23:31:40.839373 systemd-networkd[1162]: eth0: Gained IPv6LL
Apr 16 23:31:40.936811 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Apr 16 23:31:40.941274 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 16 23:31:40.941274 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 16 23:31:40.941274 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 23:31:40.952598 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 16 23:31:40.952598 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 23:31:40.952598 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 16 23:31:40.952598 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 23:31:40.952598 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 16 23:31:40.952598 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 23:31:40.952598 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 16 23:31:40.952598 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 16 23:31:40.952598 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 16 23:31:40.952598 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 16 23:31:40.952598 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Apr 16 23:31:41.492326 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 16 23:31:42.591003 ignition[1352]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Apr 16 23:31:42.591003 ignition[1352]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 16 23:31:42.599119 ignition[1352]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 23:31:42.599119 ignition[1352]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 16 23:31:42.599119 ignition[1352]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 16 23:31:42.599119 ignition[1352]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 16 23:31:42.599119 ignition[1352]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 16 23:31:42.599119 ignition[1352]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 23:31:42.599119 ignition[1352]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 16 23:31:42.599119 ignition[1352]: INFO : files: files passed
Apr 16 23:31:42.599119 ignition[1352]: INFO : Ignition finished successfully
Apr 16 23:31:42.629656 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 16 23:31:42.636176 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 16 23:31:42.643346 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 16 23:31:42.664743 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 16 23:31:42.667687 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 16 23:31:42.681901 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:31:42.686998 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:31:42.686998 initrd-setup-root-after-ignition[1383]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 16 23:31:42.695684 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 23:31:42.699429 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 16 23:31:42.707850 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 23:31:42.776120 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 23:31:42.778448 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 23:31:42.784129 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 23:31:42.790646 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 23:31:42.793084 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 23:31:42.794417 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 23:31:42.832265 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 23:31:42.837954 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 23:31:42.874338 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:31:42.879842 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:31:42.882836 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 23:31:42.889469 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 23:31:42.889698 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 23:31:42.898046 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 23:31:42.902757 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 23:31:42.905432 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 23:31:42.911864 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 23:31:42.915029 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 23:31:42.922097 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 23:31:42.925334 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 23:31:42.931997 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 23:31:42.937344 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 23:31:42.941917 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 23:31:42.946396 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 23:31:42.948399 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 23:31:42.948619 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 23:31:42.955384 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:31:42.962694 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:31:42.965758 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 23:31:42.970124 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:31:42.973558 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 23:31:42.973874 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 23:31:42.981773 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 23:31:42.982023 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 23:31:42.989408 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 23:31:42.989606 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 23:31:42.998418 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 23:31:43.004319 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 23:31:43.004886 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:31:43.023736 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 23:31:43.025827 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 23:31:43.026100 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:31:43.038638 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 23:31:43.038873 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 23:31:43.059008 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 23:31:43.061395 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 23:31:43.089739 ignition[1407]: INFO : Ignition 2.22.0
Apr 16 23:31:43.089739 ignition[1407]: INFO : Stage: umount
Apr 16 23:31:43.095625 ignition[1407]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 23:31:43.095625 ignition[1407]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 16 23:31:43.095625 ignition[1407]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 16 23:31:43.095625 ignition[1407]: INFO : PUT result: OK
Apr 16 23:31:43.104170 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 23:31:43.108888 ignition[1407]: INFO : umount: umount passed
Apr 16 23:31:43.111335 ignition[1407]: INFO : Ignition finished successfully
Apr 16 23:31:43.116031 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 23:31:43.118410 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 23:31:43.124514 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 23:31:43.124620 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 23:31:43.130331 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 23:31:43.130530 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 23:31:43.135018 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 16 23:31:43.135100 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 16 23:31:43.138009 systemd[1]: Stopped target network.target - Network.
Apr 16 23:31:43.143984 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 23:31:43.144069 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 23:31:43.149153 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 23:31:43.152333 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 23:31:43.154349 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:31:43.157090 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 23:31:43.161183 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 23:31:43.165253 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 23:31:43.165394 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 23:31:43.172222 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 23:31:43.172293 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 23:31:43.175663 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 23:31:43.175758 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 23:31:43.184147 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 23:31:43.184258 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 23:31:43.191432 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 23:31:43.194077 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 23:31:43.230368 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 23:31:43.230585 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 23:31:43.241692 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 16 23:31:43.242111 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 23:31:43.246035 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 23:31:43.258514 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 16 23:31:43.262474 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 16 23:31:43.268807 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 23:31:43.268903 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:31:43.277966 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 23:31:43.285853 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 23:31:43.286113 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 23:31:43.294390 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 23:31:43.294498 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:31:43.299768 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 23:31:43.299859 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:31:43.307538 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 23:31:43.307623 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:31:43.317478 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:31:43.321798 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 16 23:31:43.321931 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:31:43.327784 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 23:31:43.327966 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 23:31:43.344931 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 23:31:43.345057 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 23:31:43.361390 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 23:31:43.363454 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:31:43.368077 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 23:31:43.368160 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:31:43.374519 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 23:31:43.374889 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:31:43.381795 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 23:31:43.381893 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 23:31:43.390394 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 23:31:43.390505 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 23:31:43.397429 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 23:31:43.397536 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 23:31:43.409742 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 23:31:43.412372 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 16 23:31:43.412480 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:31:43.425046 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 23:31:43.425146 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:31:43.428817 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 23:31:43.428909 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:31:43.449447 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 16 23:31:43.449575 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 16 23:31:43.449661 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 23:31:43.450425 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 23:31:43.450595 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 23:31:43.457938 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 23:31:43.458269 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 23:31:43.471291 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 23:31:43.482681 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 23:31:43.512063 systemd[1]: Switching root.
Apr 16 23:31:43.553253 systemd-journald[257]: Journal stopped
Apr 16 23:31:45.556236 systemd-journald[257]: Received SIGTERM from PID 1 (systemd).
Apr 16 23:31:45.556366 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 23:31:45.556408 kernel: SELinux: policy capability open_perms=1
Apr 16 23:31:45.556443 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 23:31:45.556473 kernel: SELinux: policy capability always_check_network=0
Apr 16 23:31:45.556502 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 23:31:45.556539 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 23:31:45.556567 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 23:31:45.556596 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 23:31:45.556624 kernel: SELinux: policy capability userspace_initial_context=0
Apr 16 23:31:45.556652 kernel: audit: type=1403 audit(1776382303.775:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 23:31:45.556688 systemd[1]: Successfully loaded SELinux policy in 73.932ms.
Apr 16 23:31:45.556734 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.981ms.
Apr 16 23:31:45.556766 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 23:31:45.556825 systemd[1]: Detected virtualization amazon.
Apr 16 23:31:45.556857 systemd[1]: Detected architecture arm64.
Apr 16 23:31:45.556887 systemd[1]: Detected first boot.
Apr 16 23:31:45.556915 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 23:31:45.556946 zram_generator::config[1450]: No configuration found.
Apr 16 23:31:45.556981 kernel: NET: Registered PF_VSOCK protocol family
Apr 16 23:31:45.557013 systemd[1]: Populated /etc with preset unit settings.
Apr 16 23:31:45.557046 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 16 23:31:45.557076 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 16 23:31:45.557106 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 16 23:31:45.557137 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 16 23:31:45.557166 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 23:31:45.557193 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 23:31:45.562302 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 23:31:45.562341 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 23:31:45.562380 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 23:31:45.562413 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 23:31:45.562448 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 23:31:45.562478 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 23:31:45.562505 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:31:45.562534 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:31:45.562562 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 23:31:45.562591 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 23:31:45.562623 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 23:31:45.562656 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 23:31:45.562686 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 16 23:31:45.562717 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:31:45.562744 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:31:45.562771 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 16 23:31:45.562798 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 16 23:31:45.562825 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 16 23:31:45.562856 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 23:31:45.562886 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 23:31:45.562915 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 23:31:45.562943 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 23:31:45.562972 systemd[1]: Reached target swap.target - Swaps.
Apr 16 23:31:45.563000 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 23:31:45.563027 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 23:31:45.563058 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 16 23:31:45.563089 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:31:45.563121 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:31:45.563148 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:31:45.563176 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 23:31:45.563239 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 23:31:45.563281 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 23:31:45.563311 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 23:31:45.563339 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 23:31:45.563367 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 23:31:45.563394 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 23:31:45.563428 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 23:31:45.563459 systemd[1]: Reached target machines.target - Containers.
Apr 16 23:31:45.563486 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 23:31:45.563514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:31:45.563541 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 23:31:45.563569 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 23:31:45.563598 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 23:31:45.563625 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 23:31:45.563654 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 23:31:45.563685 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 23:31:45.563718 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 23:31:45.563746 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 23:31:45.563774 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 16 23:31:45.563801 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 16 23:31:45.563829 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 16 23:31:45.563856 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 16 23:31:45.563886 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:31:45.563918 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 23:31:45.563946 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 23:31:45.563974 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 23:31:45.564006 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 23:31:45.564035 kernel: ACPI: bus type drm_connector registered
Apr 16 23:31:45.564067 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 16 23:31:45.564095 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 23:31:45.564125 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 16 23:31:45.564153 systemd[1]: Stopped verity-setup.service.
Apr 16 23:31:45.564182 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 23:31:45.568272 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 23:31:45.568331 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 23:31:45.568363 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 23:31:45.568394 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 23:31:45.568422 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 23:31:45.572884 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:31:45.572920 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 23:31:45.572948 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 23:31:45.572981 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 23:31:45.573016 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 23:31:45.573046 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 23:31:45.573080 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 23:31:45.573109 kernel: loop: module loaded
Apr 16 23:31:45.573137 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 23:31:45.573165 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 23:31:45.573193 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 23:31:45.573244 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 23:31:45.573274 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:31:45.573309 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:31:45.573339 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 23:31:45.573367 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 16 23:31:45.573395 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 23:31:45.573422 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 23:31:45.573451 kernel: fuse: init (API version 7.41)
Apr 16 23:31:45.573477 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 23:31:45.573505 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 23:31:45.573534 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 16 23:31:45.573571 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 23:31:45.573602 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:31:45.573690 systemd-journald[1533]: Collecting audit messages is disabled.
Apr 16 23:31:45.573763 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 23:31:45.573798 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 23:31:45.573832 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 23:31:45.573860 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 23:31:45.573888 systemd-journald[1533]: Journal started
Apr 16 23:31:45.573934 systemd-journald[1533]: Runtime Journal (/run/log/journal/ec263eedfc20e8f66a7b855b56134573) is 8M, max 75.3M, 67.3M free.
Apr 16 23:31:44.846906 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 23:31:44.859063 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 16 23:31:44.859888 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 16 23:31:45.594189 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 23:31:45.607985 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 23:31:45.608073 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 23:31:45.619224 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 23:31:45.622355 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 23:31:45.622699 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 23:31:45.635925 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 23:31:45.639568 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 23:31:45.672149 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 23:31:45.680876 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 23:31:45.693533 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 16 23:31:45.706344 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 23:31:45.709373 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:31:45.729666 kernel: loop0: detected capacity change from 0 to 61264
Apr 16 23:31:45.741602 systemd-journald[1533]: Time spent on flushing to /var/log/journal/ec263eedfc20e8f66a7b855b56134573 is 85.471ms for 927 entries.
Apr 16 23:31:45.741602 systemd-journald[1533]: System Journal (/var/log/journal/ec263eedfc20e8f66a7b855b56134573) is 8M, max 195.6M, 187.6M free.
Apr 16 23:31:45.841004 systemd-journald[1533]: Received client request to flush runtime journal.
Apr 16 23:31:45.797858 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 16 23:31:45.820168 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 23:31:45.850914 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 23:31:45.860184 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 23:31:45.864978 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 23:31:45.869386 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 23:31:45.881602 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 23:31:45.905450 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 23:31:45.918249 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 23:31:45.953244 kernel: loop1: detected capacity change from 0 to 100632
Apr 16 23:31:45.951546 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Apr 16 23:31:45.951570 systemd-tmpfiles[1599]: ACLs are not supported, ignoring.
Apr 16 23:31:45.995706 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:31:46.031243 kernel: loop2: detected capacity change from 0 to 209336
Apr 16 23:31:46.205248 kernel: loop3: detected capacity change from 0 to 119840
Apr 16 23:31:46.261249 kernel: loop4: detected capacity change from 0 to 61264
Apr 16 23:31:46.285242 kernel: loop5: detected capacity change from 0 to 100632
Apr 16 23:31:46.319877 kernel: loop6: detected capacity change from 0 to 209336
Apr 16 23:31:46.352635 kernel: loop7: detected capacity change from 0 to 119840
Apr 16 23:31:46.388439 (sd-merge)[1611]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 16 23:31:46.391419 (sd-merge)[1611]: Merged extensions into '/usr'.
Apr 16 23:31:46.406238 systemd[1]: Reload requested from client PID 1564 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 16 23:31:46.406271 systemd[1]: Reloading...
Apr 16 23:31:46.440239 ldconfig[1557]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 16 23:31:46.567235 zram_generator::config[1637]: No configuration found.
Apr 16 23:31:46.991464 systemd[1]: Reloading finished in 584 ms.
Apr 16 23:31:47.014309 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 16 23:31:47.017543 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 16 23:31:47.020953 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 16 23:31:47.038465 systemd[1]: Starting ensure-sysext.service...
Apr 16 23:31:47.042446 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 23:31:47.050517 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 23:31:47.079619 systemd[1]: Reload requested from client PID 1692 ('systemctl') (unit ensure-sysext.service)...
Apr 16 23:31:47.079650 systemd[1]: Reloading...
Apr 16 23:31:47.126352 systemd-udevd[1694]: Using default interface naming scheme 'v255'.
Apr 16 23:31:47.131154 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 16 23:31:47.132568 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 16 23:31:47.133225 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 23:31:47.133723 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 23:31:47.135487 systemd-tmpfiles[1693]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 23:31:47.136950 systemd-tmpfiles[1693]: ACLs are not supported, ignoring.
Apr 16 23:31:47.137096 systemd-tmpfiles[1693]: ACLs are not supported, ignoring.
Apr 16 23:31:47.151839 systemd-tmpfiles[1693]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 23:31:47.151860 systemd-tmpfiles[1693]: Skipping /boot
Apr 16 23:31:47.184627 systemd-tmpfiles[1693]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 23:31:47.184658 systemd-tmpfiles[1693]: Skipping /boot
Apr 16 23:31:47.249234 zram_generator::config[1731]: No configuration found.
Apr 16 23:31:47.603980 (udev-worker)[1753]: Network interface NamePolicy= disabled on kernel command line.
Apr 16 23:31:47.872406 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 16 23:31:47.873044 systemd[1]: Reloading finished in 792 ms.
Apr 16 23:31:47.950915 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 23:31:47.972520 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:31:48.009570 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 16 23:31:48.015303 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 16 23:31:48.025588 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 16 23:31:48.034979 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 16 23:31:48.041569 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 23:31:48.086073 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 16 23:31:48.099843 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:31:48.104515 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 23:31:48.110804 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 23:31:48.121910 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 23:31:48.124557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:31:48.124832 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:31:48.132939 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 16 23:31:48.141673 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:31:48.142048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:31:48.142278 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:31:48.151058 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 23:31:48.155173 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 23:31:48.161173 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 23:31:48.161447 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 23:31:48.161768 systemd[1]: Reached target time-set.target - System Time Set.
Apr 16 23:31:48.169756 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 16 23:31:48.175529 systemd[1]: Finished ensure-sysext.service.
Apr 16 23:31:48.190445 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 16 23:31:48.224827 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 23:31:48.225309 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 23:31:48.231080 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 23:31:48.231504 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 23:31:48.237257 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 23:31:48.258318 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 16 23:31:48.267826 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 23:31:48.268320 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 23:31:48.282759 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 16 23:31:48.289230 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 23:31:48.289732 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 23:31:48.292817 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 23:31:48.366042 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 16 23:31:48.378059 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 16 23:31:48.410655 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:31:48.414626 augenrules[1949]: No rules
Apr 16 23:31:48.417042 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 16 23:31:48.418597 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 16 23:31:48.575414 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 16 23:31:48.581561 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 23:31:48.621705 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 16 23:31:48.649748 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 23:31:48.663893 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:31:48.769591 systemd-networkd[1887]: lo: Link UP
Apr 16 23:31:48.770048 systemd-networkd[1887]: lo: Gained carrier
Apr 16 23:31:48.773257 systemd-networkd[1887]: Enumeration completed
Apr 16 23:31:48.773619 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 23:31:48.774856 systemd-networkd[1887]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:31:48.774965 systemd-networkd[1887]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:31:48.782505 systemd-resolved[1888]: Positive Trust Anchors:
Apr 16 23:31:48.782541 systemd-resolved[1888]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 23:31:48.782604 systemd-resolved[1888]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 23:31:48.784168 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 16 23:31:48.785383 systemd-networkd[1887]: eth0: Link UP
Apr 16 23:31:48.785681 systemd-networkd[1887]: eth0: Gained carrier
Apr 16 23:31:48.785734 systemd-networkd[1887]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:31:48.793556 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 23:31:48.803314 systemd-networkd[1887]: eth0: DHCPv4 address 172.31.16.254/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 16 23:31:48.805826 systemd-resolved[1888]: Defaulting to hostname 'linux'.
Apr 16 23:31:48.809374 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 23:31:48.812044 systemd[1]: Reached target network.target - Network.
Apr 16 23:31:48.814518 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:31:48.817188 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 23:31:48.819738 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 16 23:31:48.822451 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 16 23:31:48.825779 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 16 23:31:48.828292 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 16 23:31:48.830985 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 16 23:31:48.834988 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 16 23:31:48.835050 systemd[1]: Reached target paths.target - Path Units. Apr 16 23:31:48.837184 systemd[1]: Reached target timers.target - Timer Units. Apr 16 23:31:48.842005 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 16 23:31:48.851394 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 16 23:31:48.858939 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 16 23:31:48.862097 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 16 23:31:48.865171 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 16 23:31:48.877287 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 16 23:31:48.881084 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 16 23:31:48.886321 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 16 23:31:48.890938 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 16 23:31:48.894376 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 23:31:48.896867 systemd[1]: Reached target basic.target - Basic System. 
Apr 16 23:31:48.899172 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 16 23:31:48.899245 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 16 23:31:48.901290 systemd[1]: Starting containerd.service - containerd container runtime... Apr 16 23:31:48.910480 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 16 23:31:48.919042 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 16 23:31:48.928472 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 16 23:31:48.933936 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 16 23:31:48.941229 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 16 23:31:48.943551 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 16 23:31:48.947697 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 16 23:31:48.956135 systemd[1]: Started ntpd.service - Network Time Service. Apr 16 23:31:48.968935 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 16 23:31:48.978694 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 16 23:31:48.985460 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 16 23:31:48.996623 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 16 23:31:49.003645 jq[1982]: false Apr 16 23:31:49.010645 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 16 23:31:49.020646 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Apr 16 23:31:49.021565 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 16 23:31:49.023589 systemd[1]: Starting update-engine.service - Update Engine... Apr 16 23:31:49.035607 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 16 23:31:49.068265 extend-filesystems[1983]: Found /dev/nvme0n1p6 Apr 16 23:31:49.056540 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 16 23:31:49.060129 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 16 23:31:49.060610 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 16 23:31:49.091082 extend-filesystems[1983]: Found /dev/nvme0n1p9 Apr 16 23:31:49.116607 extend-filesystems[1983]: Checking size of /dev/nvme0n1p9 Apr 16 23:31:49.106636 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 16 23:31:49.109347 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 16 23:31:49.164105 jq[1998]: true Apr 16 23:31:49.176814 systemd[1]: motdgen.service: Deactivated successfully. Apr 16 23:31:49.179290 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 16 23:31:49.190336 extend-filesystems[1983]: Resized partition /dev/nvme0n1p9 Apr 16 23:31:49.205570 extend-filesystems[2029]: resize2fs 1.47.3 (8-Jul-2025) Apr 16 23:31:49.211287 (ntainerd)[2014]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 16 23:31:49.258281 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Apr 16 23:31:49.264786 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Apr 16 23:31:49.264496 dbus-daemon[1980]: [system] SELinux support is enabled Apr 16 23:31:49.272890 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 16 23:31:49.290915 jq[2021]: true Apr 16 23:31:49.272931 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 16 23:31:49.276423 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 16 23:31:49.276456 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 16 23:31:49.296367 coreos-metadata[1979]: Apr 16 23:31:49.294 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 16 23:31:49.301854 coreos-metadata[1979]: Apr 16 23:31:49.301 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 16 23:31:49.303836 coreos-metadata[1979]: Apr 16 23:31:49.303 INFO Fetch successful Apr 16 23:31:49.303836 coreos-metadata[1979]: Apr 16 23:31:49.303 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 16 23:31:49.310407 coreos-metadata[1979]: Apr 16 23:31:49.308 INFO Fetch successful Apr 16 23:31:49.310407 coreos-metadata[1979]: Apr 16 23:31:49.308 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 16 23:31:49.311031 coreos-metadata[1979]: Apr 16 23:31:49.310 INFO Fetch successful Apr 16 23:31:49.311031 coreos-metadata[1979]: Apr 16 23:31:49.310 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 16 23:31:49.312351 coreos-metadata[1979]: Apr 16 23:31:49.312 INFO Fetch successful Apr 16 23:31:49.312351 coreos-metadata[1979]: Apr 16 23:31:49.312 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 16 23:31:49.314816 coreos-metadata[1979]: Apr 16 23:31:49.314 INFO Fetch failed with 404: resource not found Apr 16 23:31:49.314816 coreos-metadata[1979]: Apr 16 23:31:49.314 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 16 23:31:49.316036 coreos-metadata[1979]: Apr 16 23:31:49.315 INFO Fetch successful Apr 16 23:31:49.316036 coreos-metadata[1979]: Apr 16 23:31:49.315 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 16 23:31:49.317688 coreos-metadata[1979]: Apr 16 23:31:49.317 INFO Fetch successful Apr 16 23:31:49.317688 coreos-metadata[1979]: Apr 16 23:31:49.317 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 16 23:31:49.320531 coreos-metadata[1979]: Apr 16 23:31:49.320 INFO Fetch successful Apr 16 23:31:49.320531 coreos-metadata[1979]: Apr 16 23:31:49.320 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 16 23:31:49.322684 coreos-metadata[1979]: Apr 16 23:31:49.322 INFO Fetch successful Apr 16 23:31:49.322684 coreos-metadata[1979]: Apr 16 23:31:49.322 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 16 23:31:49.331423 coreos-metadata[1979]: Apr 16 23:31:49.331 INFO Fetch successful Apr 16 23:31:49.340373 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:38:29 UTC 2026 (1): Starting Apr 16 23:31:49.340373 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 16 23:31:49.340373 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: ---------------------------------------------------- Apr 16 23:31:49.340373 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: ntp-4 is maintained by Network Time Foundation, Apr 16 23:31:49.340373 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Apr 16 23:31:49.340373 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: corporation. Support and training for ntp-4 are Apr 16 23:31:49.340373 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: available at https://www.nwtime.org/support Apr 16 23:31:49.340373 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: ---------------------------------------------------- Apr 16 23:31:49.338290 ntpd[1985]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:38:29 UTC 2026 (1): Starting Apr 16 23:31:49.339783 ntpd[1985]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 16 23:31:49.339807 ntpd[1985]: ---------------------------------------------------- Apr 16 23:31:49.339824 ntpd[1985]: ntp-4 is maintained by Network Time Foundation, Apr 16 23:31:49.339840 ntpd[1985]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 16 23:31:49.339855 ntpd[1985]: corporation. Support and training for ntp-4 are Apr 16 23:31:49.347187 update_engine[1997]: I20260416 23:31:49.345593 1997 main.cc:92] Flatcar Update Engine starting Apr 16 23:31:49.339872 ntpd[1985]: available at https://www.nwtime.org/support Apr 16 23:31:49.339887 ntpd[1985]: ---------------------------------------------------- Apr 16 23:31:49.350458 ntpd[1985]: proto: precision = 0.096 usec (-23) Apr 16 23:31:49.351061 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: proto: precision = 0.096 usec (-23) Apr 16 23:31:49.351971 ntpd[1985]: basedate set to 2026-04-04 Apr 16 23:31:49.353500 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: basedate set to 2026-04-04 Apr 16 23:31:49.353500 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: gps base set to 2026-04-05 (week 2413) Apr 16 23:31:49.352008 ntpd[1985]: gps base set to 2026-04-05 (week 2413) Apr 16 23:31:49.352188 ntpd[1985]: Listen and drop on 0 v6wildcard [::]:123 Apr 16 23:31:49.353902 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: Listen and drop on 0 v6wildcard [::]:123 Apr 16 23:31:49.353902 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 16 23:31:49.353771 
ntpd[1985]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 16 23:31:49.356137 ntpd[1985]: Listen normally on 2 lo 127.0.0.1:123 Apr 16 23:31:49.356559 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: Listen normally on 2 lo 127.0.0.1:123 Apr 16 23:31:49.356559 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: Listen normally on 3 eth0 172.31.16.254:123 Apr 16 23:31:49.356559 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: Listen normally on 4 lo [::1]:123 Apr 16 23:31:49.356559 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: bind(21) AF_INET6 [fe80::4e2:f8ff:fe6a:941f%2]:123 flags 0x811 failed: Cannot assign requested address Apr 16 23:31:49.356559 ntpd[1985]: 16 Apr 23:31:49 ntpd[1985]: unable to create socket on eth0 (5) for [fe80::4e2:f8ff:fe6a:941f%2]:123 Apr 16 23:31:49.356195 ntpd[1985]: Listen normally on 3 eth0 172.31.16.254:123 Apr 16 23:31:49.356277 ntpd[1985]: Listen normally on 4 lo [::1]:123 Apr 16 23:31:49.356323 ntpd[1985]: bind(21) AF_INET6 [fe80::4e2:f8ff:fe6a:941f%2]:123 flags 0x811 failed: Cannot assign requested address Apr 16 23:31:49.356360 ntpd[1985]: unable to create socket on eth0 (5) for [fe80::4e2:f8ff:fe6a:941f%2]:123 Apr 16 23:31:49.361079 dbus-daemon[1980]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1887 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 16 23:31:49.370279 systemd-coredump[2042]: Process 1985 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Apr 16 23:31:49.379236 update_engine[1997]: I20260416 23:31:49.375865 1997 update_check_scheduler.cc:74] Next update check in 11m52s Apr 16 23:31:49.380166 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 16 23:31:49.384933 tar[2013]: linux-arm64/LICENSE Apr 16 23:31:49.384933 tar[2013]: linux-arm64/helm Apr 16 23:31:49.403622 systemd[1]: Started update-engine.service - Update Engine. 
Apr 16 23:31:49.411177 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Apr 16 23:31:49.428535 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 16 23:31:49.439526 systemd[1]: Started systemd-coredump@0-2042-0.service - Process Core Dump (PID 2042/UID 0). Apr 16 23:31:49.456044 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 16 23:31:49.600489 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Apr 16 23:31:49.605277 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 16 23:31:49.608864 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 16 23:31:49.625052 systemd-logind[1991]: Watching system buttons on /dev/input/event0 (Power Button) Apr 16 23:31:49.625094 systemd-logind[1991]: Watching system buttons on /dev/input/event1 (Sleep Button) Apr 16 23:31:49.629040 systemd-logind[1991]: New seat seat0. Apr 16 23:31:49.635410 extend-filesystems[2029]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 16 23:31:49.635410 extend-filesystems[2029]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 16 23:31:49.635410 extend-filesystems[2029]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Apr 16 23:31:49.662142 extend-filesystems[1983]: Resized filesystem in /dev/nvme0n1p9 Apr 16 23:31:49.641464 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 16 23:31:49.641879 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 16 23:31:49.655131 systemd[1]: Started systemd-logind.service - User Login Management. Apr 16 23:31:49.716843 bash[2084]: Updated "/home/core/.ssh/authorized_keys" Apr 16 23:31:49.716781 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 16 23:31:49.731269 systemd[1]: Starting sshkeys.service... 
Apr 16 23:31:49.852516 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 16 23:31:49.860321 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 16 23:31:50.023731 coreos-metadata[2124]: Apr 16 23:31:50.023 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 16 23:31:50.028445 coreos-metadata[2124]: Apr 16 23:31:50.026 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 16 23:31:50.028445 coreos-metadata[2124]: Apr 16 23:31:50.027 INFO Fetch successful Apr 16 23:31:50.028445 coreos-metadata[2124]: Apr 16 23:31:50.027 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 16 23:31:50.032403 coreos-metadata[2124]: Apr 16 23:31:50.030 INFO Fetch successful Apr 16 23:31:50.033855 unknown[2124]: wrote ssh authorized keys file for user: core Apr 16 23:31:50.116277 containerd[2014]: time="2026-04-16T23:31:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 16 23:31:50.123778 containerd[2014]: time="2026-04-16T23:31:50.118911477Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 16 23:31:50.174554 update-ssh-keys[2156]: Updated "/home/core/.ssh/authorized_keys" Apr 16 23:31:50.179277 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 16 23:31:50.188081 systemd-networkd[1887]: eth0: Gained IPv6LL Apr 16 23:31:50.196592 systemd[1]: Finished sshkeys.service. Apr 16 23:31:50.204436 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 16 23:31:50.208150 systemd[1]: Reached target network-online.target - Network is Online. 
Apr 16 23:31:50.218674 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 16 23:31:50.230508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:31:50.235174 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 16 23:31:50.297543 containerd[2014]: time="2026-04-16T23:31:50.296569666Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.28µs" Apr 16 23:31:50.299768 containerd[2014]: time="2026-04-16T23:31:50.299706118Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 16 23:31:50.302462 containerd[2014]: time="2026-04-16T23:31:50.299898142Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 16 23:31:50.305753 containerd[2014]: time="2026-04-16T23:31:50.300185530Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 16 23:31:50.305753 containerd[2014]: time="2026-04-16T23:31:50.305036494Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 16 23:31:50.305753 containerd[2014]: time="2026-04-16T23:31:50.305114014Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 23:31:50.305753 containerd[2014]: time="2026-04-16T23:31:50.305296942Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 23:31:50.305753 containerd[2014]: time="2026-04-16T23:31:50.305325550Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 23:31:50.305753 containerd[2014]: time="2026-04-16T23:31:50.305675002Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs 
(ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 23:31:50.305753 containerd[2014]: time="2026-04-16T23:31:50.305709694Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 23:31:50.313369 containerd[2014]: time="2026-04-16T23:31:50.309345106Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 23:31:50.313369 containerd[2014]: time="2026-04-16T23:31:50.309397114Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 16 23:31:50.313369 containerd[2014]: time="2026-04-16T23:31:50.309595666Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 16 23:31:50.313369 containerd[2014]: time="2026-04-16T23:31:50.309982474Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 23:31:50.313369 containerd[2014]: time="2026-04-16T23:31:50.310042714Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 23:31:50.313369 containerd[2014]: time="2026-04-16T23:31:50.310074358Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 16 23:31:50.318875 containerd[2014]: time="2026-04-16T23:31:50.318374218Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 16 23:31:50.324060 containerd[2014]: time="2026-04-16T23:31:50.320197378Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 16 23:31:50.324060 
containerd[2014]: time="2026-04-16T23:31:50.322523770Z" level=info msg="metadata content store policy set" policy=shared Apr 16 23:31:50.346843 containerd[2014]: time="2026-04-16T23:31:50.346780162Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 16 23:31:50.348018 containerd[2014]: time="2026-04-16T23:31:50.347242222Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 16 23:31:50.348018 containerd[2014]: time="2026-04-16T23:31:50.347287846Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 16 23:31:50.348018 containerd[2014]: time="2026-04-16T23:31:50.347343706Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 16 23:31:50.348018 containerd[2014]: time="2026-04-16T23:31:50.347373874Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 16 23:31:50.348018 containerd[2014]: time="2026-04-16T23:31:50.347427214Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 16 23:31:50.348018 containerd[2014]: time="2026-04-16T23:31:50.347499298Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 16 23:31:50.348018 containerd[2014]: time="2026-04-16T23:31:50.347533726Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 16 23:31:50.348018 containerd[2014]: time="2026-04-16T23:31:50.347586118Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 16 23:31:50.348018 containerd[2014]: time="2026-04-16T23:31:50.347617966Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 16 23:31:50.348018 containerd[2014]: 
time="2026-04-16T23:31:50.347643166Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 16 23:31:50.348018 containerd[2014]: time="2026-04-16T23:31:50.347699326Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 16 23:31:50.355932 containerd[2014]: time="2026-04-16T23:31:50.350311090Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 16 23:31:50.355932 containerd[2014]: time="2026-04-16T23:31:50.352283206Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 16 23:31:50.355932 containerd[2014]: time="2026-04-16T23:31:50.352366246Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 16 23:31:50.355932 containerd[2014]: time="2026-04-16T23:31:50.352932034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 16 23:31:50.366123 containerd[2014]: time="2026-04-16T23:31:50.361983214Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 16 23:31:50.366123 containerd[2014]: time="2026-04-16T23:31:50.362072410Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 16 23:31:50.366123 containerd[2014]: time="2026-04-16T23:31:50.362111638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 16 23:31:50.366123 containerd[2014]: time="2026-04-16T23:31:50.362139298Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 16 23:31:50.366123 containerd[2014]: time="2026-04-16T23:31:50.362167738Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 16 23:31:50.366123 containerd[2014]: time="2026-04-16T23:31:50.362194882Z" level=info msg="loading plugin" 
id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 16 23:31:50.366123 containerd[2014]: time="2026-04-16T23:31:50.362248246Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 16 23:31:50.366123 containerd[2014]: time="2026-04-16T23:31:50.362619802Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 16 23:31:50.366123 containerd[2014]: time="2026-04-16T23:31:50.362655670Z" level=info msg="Start snapshots syncer" Apr 16 23:31:50.366123 containerd[2014]: time="2026-04-16T23:31:50.362715778Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 16 23:31:50.374942 containerd[2014]: time="2026-04-16T23:31:50.363167770Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController
\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 16 23:31:50.374942 containerd[2014]: time="2026-04-16T23:31:50.371442274Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 16 23:31:50.375243 containerd[2014]: time="2026-04-16T23:31:50.371611606Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 16 23:31:50.375243 containerd[2014]: time="2026-04-16T23:31:50.374614990Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 16 23:31:50.375243 containerd[2014]: time="2026-04-16T23:31:50.374705362Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 16 23:31:50.375243 containerd[2014]: time="2026-04-16T23:31:50.374765566Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 16 23:31:50.377672 containerd[2014]: time="2026-04-16T23:31:50.374794354Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 16 23:31:50.377672 containerd[2014]: time="2026-04-16T23:31:50.375476674Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 16 23:31:50.377672 containerd[2014]: time="2026-04-16T23:31:50.377311894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks 
type=io.containerd.grpc.v1 Apr 16 23:31:50.379034 containerd[2014]: time="2026-04-16T23:31:50.377640262Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 16 23:31:50.379034 containerd[2014]: time="2026-04-16T23:31:50.378894946Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 16 23:31:50.379034 containerd[2014]: time="2026-04-16T23:31:50.378954058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 16 23:31:50.379276 containerd[2014]: time="2026-04-16T23:31:50.379005562Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 16 23:31:50.379489 containerd[2014]: time="2026-04-16T23:31:50.379461790Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 23:31:50.382238 containerd[2014]: time="2026-04-16T23:31:50.381262666Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 23:31:50.382238 containerd[2014]: time="2026-04-16T23:31:50.381333322Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 23:31:50.382238 containerd[2014]: time="2026-04-16T23:31:50.381362710Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 23:31:50.382238 containerd[2014]: time="2026-04-16T23:31:50.381384190Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 16 23:31:50.382238 containerd[2014]: time="2026-04-16T23:31:50.381411934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 16 23:31:50.382238 containerd[2014]: 
time="2026-04-16T23:31:50.381438934Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 16 23:31:50.382238 containerd[2014]: time="2026-04-16T23:31:50.381616438Z" level=info msg="runtime interface created" Apr 16 23:31:50.382238 containerd[2014]: time="2026-04-16T23:31:50.381635710Z" level=info msg="created NRI interface" Apr 16 23:31:50.382238 containerd[2014]: time="2026-04-16T23:31:50.381657622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 16 23:31:50.382238 containerd[2014]: time="2026-04-16T23:31:50.381696934Z" level=info msg="Connect containerd service" Apr 16 23:31:50.382238 containerd[2014]: time="2026-04-16T23:31:50.381755038Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 23:31:50.404623 containerd[2014]: time="2026-04-16T23:31:50.402734182Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 23:31:50.413382 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 16 23:31:50.418759 locksmithd[2052]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 16 23:31:50.501912 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 16 23:31:50.534171 dbus-daemon[1980]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 16 23:31:50.555334 dbus-daemon[1980]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2049 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 16 23:31:50.565270 systemd[1]: Starting polkit.service - Authorization Manager... 
Apr 16 23:31:50.612831 amazon-ssm-agent[2169]: Initializing new seelog logger Apr 16 23:31:50.612831 amazon-ssm-agent[2169]: New Seelog Logger Creation Complete Apr 16 23:31:50.612831 amazon-ssm-agent[2169]: 2026/04/16 23:31:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:50.612831 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:50.612831 amazon-ssm-agent[2169]: 2026/04/16 23:31:50 processing appconfig overrides Apr 16 23:31:50.621037 amazon-ssm-agent[2169]: 2026-04-16 23:31:50.6101 INFO Proxy environment variables: Apr 16 23:31:50.621037 amazon-ssm-agent[2169]: 2026/04/16 23:31:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:50.621037 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:50.621037 amazon-ssm-agent[2169]: 2026/04/16 23:31:50 processing appconfig overrides Apr 16 23:31:50.621037 amazon-ssm-agent[2169]: 2026/04/16 23:31:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:50.621037 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:50.621037 amazon-ssm-agent[2169]: 2026/04/16 23:31:50 processing appconfig overrides Apr 16 23:31:50.645245 amazon-ssm-agent[2169]: 2026/04/16 23:31:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:50.645245 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:50.645245 amazon-ssm-agent[2169]: 2026/04/16 23:31:50 processing appconfig overrides Apr 16 23:31:50.653069 systemd-coredump[2050]: Process 1985 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1985: #0 0x0000aaaab6290b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaab623fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaab6240240 n/a (ntpd + 0x10240) #3 0x0000aaaab623be14 n/a (ntpd + 0xbe14) #4 0x0000aaaab623d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaab6245a38 n/a (ntpd + 0x15a38) #6 0x0000aaaab623738c n/a (ntpd + 0x738c) #7 0x0000ffffbac02034 n/a (libc.so.6 + 0x22034) #8 0x0000ffffbac02118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaab62373f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Apr 16 23:31:50.668901 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Apr 16 23:31:50.669885 systemd[1]: ntpd.service: Failed with result 'core-dump'. Apr 16 23:31:50.677811 systemd[1]: systemd-coredump@0-2042-0.service: Deactivated successfully. Apr 16 23:31:50.723227 amazon-ssm-agent[2169]: 2026-04-16 23:31:50.6101 INFO https_proxy: Apr 16 23:31:50.800675 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Apr 16 23:31:50.810219 systemd[1]: Started ntpd.service - Network Time Service. Apr 16 23:31:50.829046 amazon-ssm-agent[2169]: 2026-04-16 23:31:50.6101 INFO http_proxy: Apr 16 23:31:50.911674 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Apr 16 23:31:50.931262 amazon-ssm-agent[2169]: 2026-04-16 23:31:50.6101 INFO no_proxy: Apr 16 23:31:50.964257 containerd[2014]: time="2026-04-16T23:31:50.963633805Z" level=info msg="Start subscribing containerd event" Apr 16 23:31:50.973353 containerd[2014]: time="2026-04-16T23:31:50.971357857Z" level=info msg="Start recovering state" Apr 16 23:31:50.973353 containerd[2014]: time="2026-04-16T23:31:50.971581477Z" level=info msg="Start event monitor" Apr 16 23:31:50.973353 containerd[2014]: time="2026-04-16T23:31:50.971631181Z" level=info msg="Start cni network conf syncer for default" Apr 16 23:31:50.973353 containerd[2014]: time="2026-04-16T23:31:50.971659357Z" level=info msg="Start streaming server" Apr 16 23:31:50.973353 containerd[2014]: time="2026-04-16T23:31:50.971679553Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 16 23:31:50.973353 containerd[2014]: time="2026-04-16T23:31:50.971719873Z" level=info msg="runtime interface starting up..." Apr 16 23:31:50.973353 containerd[2014]: time="2026-04-16T23:31:50.971737249Z" level=info msg="starting plugins..." Apr 16 23:31:50.973353 containerd[2014]: time="2026-04-16T23:31:50.971767261Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 16 23:31:50.973353 containerd[2014]: time="2026-04-16T23:31:50.964191997Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 23:31:50.973353 containerd[2014]: time="2026-04-16T23:31:50.972169921Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 16 23:31:50.973353 containerd[2014]: time="2026-04-16T23:31:50.972374245Z" level=info msg="containerd successfully booted in 0.858707s" Apr 16 23:31:50.972488 systemd[1]: Started containerd.service - containerd container runtime. 
Apr 16 23:31:50.984043 ntpd[2217]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:38:29 UTC 2026 (1): Starting Apr 16 23:31:50.985715 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:38:29 UTC 2026 (1): Starting Apr 16 23:31:50.985715 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 16 23:31:50.985715 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: ---------------------------------------------------- Apr 16 23:31:50.984157 ntpd[2217]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 16 23:31:50.984175 ntpd[2217]: ---------------------------------------------------- Apr 16 23:31:50.990527 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: ntp-4 is maintained by Network Time Foundation, Apr 16 23:31:50.990527 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 16 23:31:50.990527 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: corporation. Support and training for ntp-4 are Apr 16 23:31:50.990527 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: available at https://www.nwtime.org/support Apr 16 23:31:50.990527 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: ---------------------------------------------------- Apr 16 23:31:50.990527 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: proto: precision = 0.096 usec (-23) Apr 16 23:31:50.990527 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: basedate set to 2026-04-04 Apr 16 23:31:50.990527 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: gps base set to 2026-04-05 (week 2413) Apr 16 23:31:50.990527 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: Listen and drop on 0 v6wildcard [::]:123 Apr 16 23:31:50.990527 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 16 23:31:50.984192 ntpd[2217]: ntp-4 is maintained by Network Time Foundation, Apr 16 23:31:50.986345 ntpd[2217]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 16 23:31:50.986370 ntpd[2217]: corporation. 
Support and training for ntp-4 are Apr 16 23:31:50.986386 ntpd[2217]: available at https://www.nwtime.org/support Apr 16 23:31:50.986402 ntpd[2217]: ---------------------------------------------------- Apr 16 23:31:50.987440 ntpd[2217]: proto: precision = 0.096 usec (-23) Apr 16 23:31:50.998466 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: Listen normally on 2 lo 127.0.0.1:123 Apr 16 23:31:50.998466 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: Listen normally on 3 eth0 172.31.16.254:123 Apr 16 23:31:50.998466 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: Listen normally on 4 lo [::1]:123 Apr 16 23:31:50.998466 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: Listen normally on 5 eth0 [fe80::4e2:f8ff:fe6a:941f%2]:123 Apr 16 23:31:50.998466 ntpd[2217]: 16 Apr 23:31:50 ntpd[2217]: Listening on routing socket on fd #22 for interface updates Apr 16 23:31:50.987768 ntpd[2217]: basedate set to 2026-04-04 Apr 16 23:31:50.987789 ntpd[2217]: gps base set to 2026-04-05 (week 2413) Apr 16 23:31:50.987903 ntpd[2217]: Listen and drop on 0 v6wildcard [::]:123 Apr 16 23:31:50.987945 ntpd[2217]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 16 23:31:50.992323 ntpd[2217]: Listen normally on 2 lo 127.0.0.1:123 Apr 16 23:31:50.992389 ntpd[2217]: Listen normally on 3 eth0 172.31.16.254:123 Apr 16 23:31:50.992437 ntpd[2217]: Listen normally on 4 lo [::1]:123 Apr 16 23:31:50.992481 ntpd[2217]: Listen normally on 5 eth0 [fe80::4e2:f8ff:fe6a:941f%2]:123 Apr 16 23:31:50.992523 ntpd[2217]: Listening on routing socket on fd #22 for interface updates Apr 16 23:31:51.017583 ntpd[2217]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 16 23:31:51.018245 ntpd[2217]: 16 Apr 23:31:51 ntpd[2217]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 16 23:31:51.018245 ntpd[2217]: 16 Apr 23:31:51 ntpd[2217]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 16 23:31:51.017645 ntpd[2217]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 16 23:31:51.030230 amazon-ssm-agent[2169]: 2026-04-16 
23:31:50.6169 INFO Checking if agent identity type OnPrem can be assumed Apr 16 23:31:51.127816 polkitd[2199]: Started polkitd version 126 Apr 16 23:31:51.129559 amazon-ssm-agent[2169]: 2026-04-16 23:31:50.6170 INFO Checking if agent identity type EC2 can be assumed Apr 16 23:31:51.159797 polkitd[2199]: Loading rules from directory /etc/polkit-1/rules.d Apr 16 23:31:51.160447 polkitd[2199]: Loading rules from directory /run/polkit-1/rules.d Apr 16 23:31:51.160540 polkitd[2199]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Apr 16 23:31:51.161173 polkitd[2199]: Loading rules from directory /usr/local/share/polkit-1/rules.d Apr 16 23:31:51.170313 polkitd[2199]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Apr 16 23:31:51.170430 polkitd[2199]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 16 23:31:51.172240 polkitd[2199]: Finished loading, compiling and executing 2 rules Apr 16 23:31:51.172649 systemd[1]: Started polkit.service - Authorization Manager. Apr 16 23:31:51.178087 dbus-daemon[1980]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 16 23:31:51.181808 polkitd[2199]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 16 23:31:51.228580 amazon-ssm-agent[2169]: 2026-04-16 23:31:50.7859 INFO Agent will take identity from EC2 Apr 16 23:31:51.229752 systemd-hostnamed[2049]: Hostname set to (transient) Apr 16 23:31:51.231253 systemd-resolved[1888]: System hostname changed to 'ip-172-31-16-254'. Apr 16 23:31:51.328223 amazon-ssm-agent[2169]: 2026-04-16 23:31:50.7950 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Apr 16 23:31:51.353227 tar[2013]: linux-arm64/README.md Apr 16 23:31:51.388841 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 16 23:31:51.426527 amazon-ssm-agent[2169]: 2026-04-16 23:31:50.7951 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 16 23:31:51.527316 amazon-ssm-agent[2169]: 2026-04-16 23:31:50.7951 INFO [amazon-ssm-agent] Starting Core Agent Apr 16 23:31:51.568961 amazon-ssm-agent[2169]: 2026/04/16 23:31:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:51.568961 amazon-ssm-agent[2169]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:51.569136 amazon-ssm-agent[2169]: 2026/04/16 23:31:51 processing appconfig overrides Apr 16 23:31:51.599308 amazon-ssm-agent[2169]: 2026-04-16 23:31:50.7951 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Apr 16 23:31:51.599308 amazon-ssm-agent[2169]: 2026-04-16 23:31:50.7951 INFO [Registrar] Starting registrar module Apr 16 23:31:51.599485 amazon-ssm-agent[2169]: 2026-04-16 23:31:50.8030 INFO [EC2Identity] Checking disk for registration info Apr 16 23:31:51.599485 amazon-ssm-agent[2169]: 2026-04-16 23:31:50.8031 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Apr 16 23:31:51.599485 amazon-ssm-agent[2169]: 2026-04-16 23:31:50.8031 INFO [EC2Identity] Generating registration keypair Apr 16 23:31:51.599485 amazon-ssm-agent[2169]: 2026-04-16 23:31:51.5330 INFO [EC2Identity] Checking write access before registering Apr 16 23:31:51.599485 amazon-ssm-agent[2169]: 2026-04-16 23:31:51.5337 INFO [EC2Identity] Registering EC2 instance with Systems Manager Apr 16 23:31:51.599485 amazon-ssm-agent[2169]: 2026-04-16 23:31:51.5686 INFO [EC2Identity] EC2 registration was successful. Apr 16 23:31:51.599485 amazon-ssm-agent[2169]: 2026-04-16 23:31:51.5687 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Apr 16 23:31:51.599485 amazon-ssm-agent[2169]: 2026-04-16 23:31:51.5688 INFO [CredentialRefresher] credentialRefresher has started Apr 16 23:31:51.599485 amazon-ssm-agent[2169]: 2026-04-16 23:31:51.5688 INFO [CredentialRefresher] Starting credentials refresher loop Apr 16 23:31:51.599485 amazon-ssm-agent[2169]: 2026-04-16 23:31:51.5989 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 16 23:31:51.599485 amazon-ssm-agent[2169]: 2026-04-16 23:31:51.5992 INFO [CredentialRefresher] Credentials ready Apr 16 23:31:51.626862 amazon-ssm-agent[2169]: 2026-04-16 23:31:51.5995 INFO [CredentialRefresher] Next credential rotation will be in 29.999990749 minutes Apr 16 23:31:51.993379 sshd_keygen[2033]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 16 23:31:52.038999 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 16 23:31:52.044824 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 16 23:31:52.051150 systemd[1]: Started sshd@0-172.31.16.254:22-20.229.252.112:55514.service - OpenSSH per-connection server daemon (20.229.252.112:55514). Apr 16 23:31:52.085408 systemd[1]: issuegen.service: Deactivated successfully. Apr 16 23:31:52.086095 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 16 23:31:52.094091 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 16 23:31:52.127514 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 16 23:31:52.135790 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 16 23:31:52.142672 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 16 23:31:52.147659 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 16 23:31:52.625746 amazon-ssm-agent[2169]: 2026-04-16 23:31:52.6253 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 16 23:31:52.727020 amazon-ssm-agent[2169]: 2026-04-16 23:31:52.6281 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2258) started Apr 16 23:31:52.827948 amazon-ssm-agent[2169]: 2026-04-16 23:31:52.6281 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 16 23:31:53.016960 sshd[2246]: Accepted publickey for core from 20.229.252.112 port 55514 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:31:53.020412 sshd-session[2246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:31:53.036831 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 23:31:53.041382 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 23:31:53.070049 systemd-logind[1991]: New session 1 of user core. Apr 16 23:31:53.090564 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 23:31:53.099048 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 16 23:31:53.124392 (systemd)[2271]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 23:31:53.130407 systemd-logind[1991]: New session c1 of user core. Apr 16 23:31:53.250439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:31:53.253952 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 23:31:53.269715 (kubelet)[2282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:31:53.442302 systemd[2271]: Queued start job for default target default.target. 
Apr 16 23:31:53.451277 systemd[2271]: Created slice app.slice - User Application Slice. Apr 16 23:31:53.451331 systemd[2271]: Reached target paths.target - Paths. Apr 16 23:31:53.451416 systemd[2271]: Reached target timers.target - Timers. Apr 16 23:31:53.454138 systemd[2271]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 23:31:53.486749 systemd[2271]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 23:31:53.487225 systemd[2271]: Reached target sockets.target - Sockets. Apr 16 23:31:53.487482 systemd[2271]: Reached target basic.target - Basic System. Apr 16 23:31:53.487695 systemd[2271]: Reached target default.target - Main User Target. Apr 16 23:31:53.487871 systemd[2271]: Startup finished in 341ms. Apr 16 23:31:53.488314 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 23:31:53.501505 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 16 23:31:53.504705 systemd[1]: Startup finished in 3.732s (kernel) + 9.079s (initrd) + 9.804s (userspace) = 22.616s. Apr 16 23:31:54.041660 systemd[1]: Started sshd@1-172.31.16.254:22-20.229.252.112:55522.service - OpenSSH per-connection server daemon (20.229.252.112:55522). Apr 16 23:31:54.694025 kubelet[2282]: E0416 23:31:54.693909 2282 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:31:54.698549 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:31:54.698866 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:31:54.699535 systemd[1]: kubelet.service: Consumed 1.417s CPU time, 257.2M memory peak. 
Apr 16 23:31:54.944335 sshd[2296]: Accepted publickey for core from 20.229.252.112 port 55522 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:31:54.944594 sshd-session[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:31:54.954163 systemd-logind[1991]: New session 2 of user core. Apr 16 23:31:54.963496 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 16 23:31:55.449247 sshd[2301]: Connection closed by 20.229.252.112 port 55522 Apr 16 23:31:55.450096 sshd-session[2296]: pam_unix(sshd:session): session closed for user core Apr 16 23:31:55.456157 systemd-logind[1991]: Session 2 logged out. Waiting for processes to exit. Apr 16 23:31:55.458071 systemd[1]: sshd@1-172.31.16.254:22-20.229.252.112:55522.service: Deactivated successfully. Apr 16 23:31:55.463039 systemd[1]: session-2.scope: Deactivated successfully. Apr 16 23:31:55.467959 systemd-logind[1991]: Removed session 2. Apr 16 23:31:55.626304 systemd[1]: Started sshd@2-172.31.16.254:22-20.229.252.112:44788.service - OpenSSH per-connection server daemon (20.229.252.112:44788). Apr 16 23:31:56.510722 sshd[2307]: Accepted publickey for core from 20.229.252.112 port 44788 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:31:56.513067 sshd-session[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:31:56.521095 systemd-logind[1991]: New session 3 of user core. Apr 16 23:31:56.533467 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 16 23:31:57.003106 sshd[2310]: Connection closed by 20.229.252.112 port 44788 Apr 16 23:31:57.003925 sshd-session[2307]: pam_unix(sshd:session): session closed for user core Apr 16 23:31:57.010748 systemd[1]: sshd@2-172.31.16.254:22-20.229.252.112:44788.service: Deactivated successfully. Apr 16 23:31:57.014381 systemd[1]: session-3.scope: Deactivated successfully. Apr 16 23:31:57.015875 systemd-logind[1991]: Session 3 logged out. 
Waiting for processes to exit. Apr 16 23:31:57.018758 systemd-logind[1991]: Removed session 3. Apr 16 23:31:57.181166 systemd[1]: Started sshd@3-172.31.16.254:22-20.229.252.112:44800.service - OpenSSH per-connection server daemon (20.229.252.112:44800). Apr 16 23:31:57.543434 systemd-resolved[1888]: Clock change detected. Flushing caches. Apr 16 23:31:57.624157 sshd[2316]: Accepted publickey for core from 20.229.252.112 port 44800 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:31:57.626521 sshd-session[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:31:57.634322 systemd-logind[1991]: New session 4 of user core. Apr 16 23:31:57.646555 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 16 23:31:58.124730 sshd[2319]: Connection closed by 20.229.252.112 port 44800 Apr 16 23:31:58.125575 sshd-session[2316]: pam_unix(sshd:session): session closed for user core Apr 16 23:31:58.132646 systemd[1]: sshd@3-172.31.16.254:22-20.229.252.112:44800.service: Deactivated successfully. Apr 16 23:31:58.136118 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 23:31:58.138240 systemd-logind[1991]: Session 4 logged out. Waiting for processes to exit. Apr 16 23:31:58.141309 systemd-logind[1991]: Removed session 4. Apr 16 23:31:58.304393 systemd[1]: Started sshd@4-172.31.16.254:22-20.229.252.112:44804.service - OpenSSH per-connection server daemon (20.229.252.112:44804). Apr 16 23:31:59.191347 sshd[2325]: Accepted publickey for core from 20.229.252.112 port 44804 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:31:59.193735 sshd-session[2325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:31:59.201573 systemd-logind[1991]: New session 5 of user core. Apr 16 23:31:59.213562 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 16 23:31:59.542114 sudo[2329]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 16 23:31:59.543575 sudo[2329]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:31:59.558205 sudo[2329]: pam_unix(sudo:session): session closed for user root Apr 16 23:31:59.723321 sshd[2328]: Connection closed by 20.229.252.112 port 44804 Apr 16 23:31:59.724397 sshd-session[2325]: pam_unix(sshd:session): session closed for user core Apr 16 23:31:59.732431 systemd[1]: sshd@4-172.31.16.254:22-20.229.252.112:44804.service: Deactivated successfully. Apr 16 23:31:59.735241 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 23:31:59.736863 systemd-logind[1991]: Session 5 logged out. Waiting for processes to exit. Apr 16 23:31:59.739466 systemd-logind[1991]: Removed session 5. Apr 16 23:31:59.902725 systemd[1]: Started sshd@5-172.31.16.254:22-20.229.252.112:44810.service - OpenSSH per-connection server daemon (20.229.252.112:44810). Apr 16 23:32:00.790133 sshd[2335]: Accepted publickey for core from 20.229.252.112 port 44810 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:32:00.792518 sshd-session[2335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:32:00.800145 systemd-logind[1991]: New session 6 of user core. Apr 16 23:32:00.811542 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 16 23:32:01.128283 sudo[2340]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 16 23:32:01.129954 sudo[2340]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:32:01.136254 sudo[2340]: pam_unix(sudo:session): session closed for user root Apr 16 23:32:01.146084 sudo[2339]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 16 23:32:01.147204 sudo[2339]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:32:01.163400 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 23:32:01.223996 augenrules[2362]: No rules Apr 16 23:32:01.226460 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 23:32:01.227015 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 23:32:01.231958 sudo[2339]: pam_unix(sudo:session): session closed for user root Apr 16 23:32:01.397534 sshd[2338]: Connection closed by 20.229.252.112 port 44810 Apr 16 23:32:01.397421 sshd-session[2335]: pam_unix(sshd:session): session closed for user core Apr 16 23:32:01.406473 systemd-logind[1991]: Session 6 logged out. Waiting for processes to exit. Apr 16 23:32:01.406741 systemd[1]: sshd@5-172.31.16.254:22-20.229.252.112:44810.service: Deactivated successfully. Apr 16 23:32:01.410871 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 23:32:01.414907 systemd-logind[1991]: Removed session 6. Apr 16 23:32:01.582265 systemd[1]: Started sshd@6-172.31.16.254:22-20.229.252.112:44824.service - OpenSSH per-connection server daemon (20.229.252.112:44824). 
Apr 16 23:32:02.476954 sshd[2371]: Accepted publickey for core from 20.229.252.112 port 44824 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:32:02.479275 sshd-session[2371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:32:02.488381 systemd-logind[1991]: New session 7 of user core. Apr 16 23:32:02.495541 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 23:32:02.818634 sudo[2375]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 23:32:02.819213 sudo[2375]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:32:03.335198 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 16 23:32:03.349876 (dockerd)[2392]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 23:32:03.733806 dockerd[2392]: time="2026-04-16T23:32:03.733725238Z" level=info msg="Starting up" Apr 16 23:32:03.738139 dockerd[2392]: time="2026-04-16T23:32:03.738096634Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 16 23:32:03.760532 dockerd[2392]: time="2026-04-16T23:32:03.760449238Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 16 23:32:03.800160 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2404719144-merged.mount: Deactivated successfully. Apr 16 23:32:03.817153 systemd[1]: var-lib-docker-metacopy\x2dcheck4184765032-merged.mount: Deactivated successfully. Apr 16 23:32:03.839256 dockerd[2392]: time="2026-04-16T23:32:03.838908791Z" level=info msg="Loading containers: start." Apr 16 23:32:03.858361 kernel: Initializing XFRM netlink socket Apr 16 23:32:04.183772 (udev-worker)[2413]: Network interface NamePolicy= disabled on kernel command line. 
Apr 16 23:32:04.260468 systemd-networkd[1887]: docker0: Link UP
Apr 16 23:32:04.272575 dockerd[2392]: time="2026-04-16T23:32:04.272506545Z" level=info msg="Loading containers: done."
Apr 16 23:32:04.297784 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 16 23:32:04.301616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:32:04.325080 dockerd[2392]: time="2026-04-16T23:32:04.324485889Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 16 23:32:04.325080 dockerd[2392]: time="2026-04-16T23:32:04.324608481Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Apr 16 23:32:04.325080 dockerd[2392]: time="2026-04-16T23:32:04.324755613Z" level=info msg="Initializing buildkit"
Apr 16 23:32:04.389829 dockerd[2392]: time="2026-04-16T23:32:04.389776738Z" level=info msg="Completed buildkit initialization"
Apr 16 23:32:04.409506 dockerd[2392]: time="2026-04-16T23:32:04.409423270Z" level=info msg="Daemon has completed initialization"
Apr 16 23:32:04.409868 dockerd[2392]: time="2026-04-16T23:32:04.409680730Z" level=info msg="API listen on /run/docker.sock"
Apr 16 23:32:04.410871 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 16 23:32:04.733795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:32:04.748804 (kubelet)[2610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 23:32:04.793646 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3435568621-merged.mount: Deactivated successfully.
Apr 16 23:32:04.841698 kubelet[2610]: E0416 23:32:04.841610 2610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 23:32:04.849457 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:32:04.850920 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 23:32:04.852122 systemd[1]: kubelet.service: Consumed 322ms CPU time, 105.6M memory peak.
Apr 16 23:32:05.247935 containerd[2014]: time="2026-04-16T23:32:05.247870006Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 16 23:32:05.999585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341736332.mount: Deactivated successfully.
Apr 16 23:32:07.905317 containerd[2014]: time="2026-04-16T23:32:07.903752703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:07.905831 containerd[2014]: time="2026-04-16T23:32:07.905460387Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=27008787"
Apr 16 23:32:07.906135 containerd[2014]: time="2026-04-16T23:32:07.906096027Z" level=info msg="ImageCreate event name:\"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:07.911014 containerd[2014]: time="2026-04-16T23:32:07.910950135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:07.913230 containerd[2014]: time="2026-04-16T23:32:07.913182903Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"27005386\" in 2.665251277s"
Apr 16 23:32:07.913431 containerd[2014]: time="2026-04-16T23:32:07.913401711Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:51b83c5cb2f791f72696c040be904535bad3c81a6ffc19a55013ac150a24d9b0\""
Apr 16 23:32:07.914375 containerd[2014]: time="2026-04-16T23:32:07.914322123Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 16 23:32:09.964533 containerd[2014]: time="2026-04-16T23:32:09.964469909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:09.966248 containerd[2014]: time="2026-04-16T23:32:09.966194549Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=23297774"
Apr 16 23:32:09.967352 containerd[2014]: time="2026-04-16T23:32:09.966969005Z" level=info msg="ImageCreate event name:\"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:09.971584 containerd[2014]: time="2026-04-16T23:32:09.971510333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:09.974001 containerd[2014]: time="2026-04-16T23:32:09.973477229Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"24804413\" in 2.059099102s"
Apr 16 23:32:09.974001 containerd[2014]: time="2026-04-16T23:32:09.973536905Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:df8bcecad66863646fb4016494163838761da38376bae5a7592e04041db8489a\""
Apr 16 23:32:09.974410 containerd[2014]: time="2026-04-16T23:32:09.974376593Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 16 23:32:11.875034 containerd[2014]: time="2026-04-16T23:32:11.874949743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:11.877769 containerd[2014]: time="2026-04-16T23:32:11.877726843Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=18141358"
Apr 16 23:32:11.878613 containerd[2014]: time="2026-04-16T23:32:11.878542879Z" level=info msg="ImageCreate event name:\"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:11.884256 containerd[2014]: time="2026-04-16T23:32:11.883264579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:11.885573 containerd[2014]: time="2026-04-16T23:32:11.885513079Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"19648015\" in 1.91016511s"
Apr 16 23:32:11.885660 containerd[2014]: time="2026-04-16T23:32:11.885571267Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:8c8e25fd00e5c108fb9ab5490c25bfaeb0231b1c59f749dab4f5300f1c49995b\""
Apr 16 23:32:11.886179 containerd[2014]: time="2026-04-16T23:32:11.886131655Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 16 23:32:13.117000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1596960907.mount: Deactivated successfully.
Apr 16 23:32:13.688463 containerd[2014]: time="2026-04-16T23:32:13.688406360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:13.690245 containerd[2014]: time="2026-04-16T23:32:13.690203924Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=28040508"
Apr 16 23:32:13.692124 containerd[2014]: time="2026-04-16T23:32:13.692055116Z" level=info msg="ImageCreate event name:\"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:13.697571 containerd[2014]: time="2026-04-16T23:32:13.697044080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:13.698310 containerd[2014]: time="2026-04-16T23:32:13.698243564Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"28039527\" in 1.812057081s"
Apr 16 23:32:13.698706 containerd[2014]: time="2026-04-16T23:32:13.698315828Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:7ce14d6fb1e5134a578d2aaa327fd701273e3d222b9b8d88054dd86b87a7dc36\""
Apr 16 23:32:13.699109 containerd[2014]: time="2026-04-16T23:32:13.699058904Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 16 23:32:14.302984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount706690883.mount: Deactivated successfully.
Apr 16 23:32:14.885537 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 16 23:32:14.888366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:32:15.260604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:32:15.278568 (kubelet)[2754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 23:32:15.366924 kubelet[2754]: E0416 23:32:15.366848 2754 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 23:32:15.373854 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:32:15.374169 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 23:32:15.375191 systemd[1]: kubelet.service: Consumed 308ms CPU time, 107.1M memory peak.
Apr 16 23:32:15.876491 containerd[2014]: time="2026-04-16T23:32:15.876413675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:15.879523 containerd[2014]: time="2026-04-16T23:32:15.879458807Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Apr 16 23:32:15.880465 containerd[2014]: time="2026-04-16T23:32:15.880409939Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:15.887479 containerd[2014]: time="2026-04-16T23:32:15.886534307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:15.888659 containerd[2014]: time="2026-04-16T23:32:15.888600971Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.188606067s"
Apr 16 23:32:15.888775 containerd[2014]: time="2026-04-16T23:32:15.888657647Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Apr 16 23:32:15.889451 containerd[2014]: time="2026-04-16T23:32:15.889400603Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 16 23:32:16.372659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount373171009.mount: Deactivated successfully.
Apr 16 23:32:16.382092 containerd[2014]: time="2026-04-16T23:32:16.382036665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 16 23:32:16.383507 containerd[2014]: time="2026-04-16T23:32:16.383470149Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Apr 16 23:32:16.383852 containerd[2014]: time="2026-04-16T23:32:16.383788941Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 16 23:32:16.387743 containerd[2014]: time="2026-04-16T23:32:16.387268689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Apr 16 23:32:16.389095 containerd[2014]: time="2026-04-16T23:32:16.388593585Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 499.137134ms"
Apr 16 23:32:16.389095 containerd[2014]: time="2026-04-16T23:32:16.388644873Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Apr 16 23:32:16.389552 containerd[2014]: time="2026-04-16T23:32:16.389517105Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 16 23:32:17.021257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1472387420.mount: Deactivated successfully.
Apr 16 23:32:18.369259 containerd[2014]: time="2026-04-16T23:32:18.369173675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:18.371314 containerd[2014]: time="2026-04-16T23:32:18.371232695Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21886366"
Apr 16 23:32:18.374243 containerd[2014]: time="2026-04-16T23:32:18.374145155Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:18.380334 containerd[2014]: time="2026-04-16T23:32:18.380097407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:18.382460 containerd[2014]: time="2026-04-16T23:32:18.382171679Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 1.991898874s"
Apr 16 23:32:18.382460 containerd[2014]: time="2026-04-16T23:32:18.382229159Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Apr 16 23:32:20.824453 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 16 23:32:24.383200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:32:24.383784 systemd[1]: kubelet.service: Consumed 308ms CPU time, 107.1M memory peak.
Apr 16 23:32:24.391542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:32:24.436559 systemd[1]: Reload requested from client PID 2859 ('systemctl') (unit session-7.scope)...
Apr 16 23:32:24.436592 systemd[1]: Reloading...
Apr 16 23:32:24.690359 zram_generator::config[2906]: No configuration found.
Apr 16 23:32:25.152354 systemd[1]: Reloading finished in 715 ms.
Apr 16 23:32:25.251703 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 16 23:32:25.252060 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 16 23:32:25.253409 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:32:25.253645 systemd[1]: kubelet.service: Consumed 223ms CPU time, 94.8M memory peak.
Apr 16 23:32:25.256697 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:32:25.583271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:32:25.599851 (kubelet)[2966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 23:32:25.671340 kubelet[2966]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 23:32:25.671340 kubelet[2966]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 16 23:32:25.671340 kubelet[2966]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 23:32:25.671340 kubelet[2966]: I0416 23:32:25.671136 2966 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 16 23:32:26.842099 kubelet[2966]: I0416 23:32:26.842032 2966 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 16 23:32:26.842099 kubelet[2966]: I0416 23:32:26.842086 2966 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 23:32:26.844781 kubelet[2966]: I0416 23:32:26.842548 2966 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 16 23:32:26.904475 kubelet[2966]: E0416 23:32:26.904407 2966 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.254:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.254:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 23:32:26.906584 kubelet[2966]: I0416 23:32:26.906521 2966 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 23:32:26.921240 kubelet[2966]: I0416 23:32:26.920746 2966 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 23:32:26.929982 kubelet[2966]: I0416 23:32:26.929805 2966 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 16 23:32:26.930569 kubelet[2966]: I0416 23:32:26.930513 2966 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 23:32:26.930841 kubelet[2966]: I0416 23:32:26.930569 2966 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-254","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 23:32:26.930994 kubelet[2966]: I0416 23:32:26.930852 2966 topology_manager.go:138] "Creating topology manager with none policy"
Apr 16 23:32:26.930994 kubelet[2966]: I0416 23:32:26.930872 2966 container_manager_linux.go:303] "Creating device plugin manager"
Apr 16 23:32:26.932468 kubelet[2966]: I0416 23:32:26.932422 2966 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 23:32:26.940195 kubelet[2966]: I0416 23:32:26.939992 2966 kubelet.go:480] "Attempting to sync node with API server"
Apr 16 23:32:26.940195 kubelet[2966]: I0416 23:32:26.940035 2966 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 23:32:26.944206 kubelet[2966]: I0416 23:32:26.942164 2966 kubelet.go:386] "Adding apiserver pod source"
Apr 16 23:32:26.944458 kubelet[2966]: I0416 23:32:26.944432 2966 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 23:32:26.950857 kubelet[2966]: E0416 23:32:26.950277 2966 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-254&limit=500&resourceVersion=0\": dial tcp 172.31.16.254:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 16 23:32:26.953323 kubelet[2966]: I0416 23:32:26.951014 2966 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 16 23:32:26.953323 kubelet[2966]: I0416 23:32:26.952197 2966 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 23:32:26.953323 kubelet[2966]: W0416 23:32:26.952475 2966 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 16 23:32:26.960004 kubelet[2966]: I0416 23:32:26.959960 2966 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 16 23:32:26.960134 kubelet[2966]: I0416 23:32:26.960028 2966 server.go:1289] "Started kubelet"
Apr 16 23:32:26.985026 kubelet[2966]: E0416 23:32:26.982618 2966 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.254:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 16 23:32:26.985026 kubelet[2966]: E0416 23:32:26.979095 2966 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.254:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.254:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-254.18a6fa5537d32afa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-254,UID:ip-172-31-16-254,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-254,},FirstTimestamp:2026-04-16 23:32:26.959989498 +0000 UTC m=+1.353190352,LastTimestamp:2026-04-16 23:32:26.959989498 +0000 UTC m=+1.353190352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-254,}"
Apr 16 23:32:26.985026 kubelet[2966]: I0416 23:32:26.984705 2966 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 16 23:32:26.986506 kubelet[2966]: I0416 23:32:26.986436 2966 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 23:32:26.988100 kubelet[2966]: I0416 23:32:26.988046 2966 server.go:317] "Adding debug handlers to kubelet server"
Apr 16 23:32:26.991446 kubelet[2966]: I0416 23:32:26.990163 2966 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 23:32:26.991446 kubelet[2966]: I0416 23:32:26.990348 2966 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 16 23:32:26.991446 kubelet[2966]: I0416 23:32:26.990644 2966 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 23:32:26.991446 kubelet[2966]: I0416 23:32:26.990993 2966 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 23:32:26.991446 kubelet[2966]: E0416 23:32:26.991228 2966 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-254\" not found"
Apr 16 23:32:26.994182 kubelet[2966]: I0416 23:32:26.994145 2966 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 16 23:32:26.994464 kubelet[2966]: I0416 23:32:26.994444 2966 reconciler.go:26] "Reconciler: start to sync state"
Apr 16 23:32:26.995884 kubelet[2966]: E0416 23:32:26.995840 2966 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.254:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.254:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 16 23:32:26.997109 kubelet[2966]: E0416 23:32:26.997010 2966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-254?timeout=10s\": dial tcp 172.31.16.254:6443: connect: connection refused" interval="200ms"
Apr 16 23:32:26.997801 kubelet[2966]: I0416 23:32:26.997770 2966 factory.go:223] Registration of the systemd container factory successfully
Apr 16 23:32:26.998396 kubelet[2966]: I0416 23:32:26.998357 2966 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 23:32:27.001703 kubelet[2966]: I0416 23:32:27.001588 2966 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 16 23:32:27.002175 kubelet[2966]: I0416 23:32:27.002143 2966 factory.go:223] Registration of the containerd container factory successfully
Apr 16 23:32:27.030612 kubelet[2966]: E0416 23:32:27.029624 2966 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 16 23:32:27.038413 kubelet[2966]: I0416 23:32:27.038350 2966 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 16 23:32:27.038555 kubelet[2966]: I0416 23:32:27.038421 2966 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 16 23:32:27.038555 kubelet[2966]: I0416 23:32:27.038456 2966 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 23:32:27.038555 kubelet[2966]: I0416 23:32:27.038469 2966 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 16 23:32:27.038717 kubelet[2966]: E0416 23:32:27.038537 2966 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 23:32:27.042723 kubelet[2966]: E0416 23:32:27.042609 2966 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.254:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 16 23:32:27.046417 kubelet[2966]: I0416 23:32:27.045938 2966 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 16 23:32:27.046417 kubelet[2966]: I0416 23:32:27.045974 2966 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 16 23:32:27.046417 kubelet[2966]: I0416 23:32:27.046029 2966 state_mem.go:36] "Initialized new in-memory state store"
Apr 16 23:32:27.053751 kubelet[2966]: I0416 23:32:27.053691 2966 policy_none.go:49] "None policy: Start"
Apr 16 23:32:27.053751 kubelet[2966]: I0416 23:32:27.053741 2966 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 16 23:32:27.053944 kubelet[2966]: I0416 23:32:27.053766 2966 state_mem.go:35] "Initializing new in-memory state store"
Apr 16 23:32:27.067854 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 16 23:32:27.084199 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 16 23:32:27.091848 kubelet[2966]: E0416 23:32:27.091770 2966 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-254\" not found"
Apr 16 23:32:27.098604 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 16 23:32:27.103339 kubelet[2966]: E0416 23:32:27.102849 2966 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 23:32:27.103590 kubelet[2966]: I0416 23:32:27.103551 2966 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 23:32:27.105645 kubelet[2966]: I0416 23:32:27.103594 2966 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 23:32:27.105645 kubelet[2966]: I0416 23:32:27.104798 2966 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 23:32:27.107213 kubelet[2966]: E0416 23:32:27.107150 2966 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 16 23:32:27.107350 kubelet[2966]: E0416 23:32:27.107236 2966 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-254\" not found"
Apr 16 23:32:27.160533 systemd[1]: Created slice kubepods-burstable-podfe545e39b06e26d87157b20dc7b2c03e.slice - libcontainer container kubepods-burstable-podfe545e39b06e26d87157b20dc7b2c03e.slice.
Apr 16 23:32:27.180225 kubelet[2966]: E0416 23:32:27.180156 2966 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-254\" not found" node="ip-172-31-16-254"
Apr 16 23:32:27.186712 systemd[1]: Created slice kubepods-burstable-pod8b49a6dc65c2fed0099c82e58ff0a3c8.slice - libcontainer container kubepods-burstable-pod8b49a6dc65c2fed0099c82e58ff0a3c8.slice.
Apr 16 23:32:27.191857 kubelet[2966]: E0416 23:32:27.191815 2966 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-254\" not found" node="ip-172-31-16-254"
Apr 16 23:32:27.201847 systemd[1]: Created slice kubepods-burstable-podd48d4841dfdc1fe5063ef662bfbdea9d.slice - libcontainer container kubepods-burstable-podd48d4841dfdc1fe5063ef662bfbdea9d.slice.
Apr 16 23:32:27.204596 kubelet[2966]: I0416 23:32:27.204537 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d48d4841dfdc1fe5063ef662bfbdea9d-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-254\" (UID: \"d48d4841dfdc1fe5063ef662bfbdea9d\") " pod="kube-system/kube-apiserver-ip-172-31-16-254"
Apr 16 23:32:27.205503 kubelet[2966]: I0416 23:32:27.204785 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe545e39b06e26d87157b20dc7b2c03e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-254\" (UID: \"fe545e39b06e26d87157b20dc7b2c03e\") " pod="kube-system/kube-controller-manager-ip-172-31-16-254"
Apr 16 23:32:27.205503 kubelet[2966]: I0416 23:32:27.205372 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe545e39b06e26d87157b20dc7b2c03e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-254\" (UID: \"fe545e39b06e26d87157b20dc7b2c03e\") " pod="kube-system/kube-controller-manager-ip-172-31-16-254"
Apr 16 23:32:27.205503 kubelet[2966]: I0416 23:32:27.205441 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe545e39b06e26d87157b20dc7b2c03e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-254\" (UID: \"fe545e39b06e26d87157b20dc7b2c03e\") " pod="kube-system/kube-controller-manager-ip-172-31-16-254"
Apr 16 23:32:27.205830 kubelet[2966]: I0416 23:32:27.205754 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe545e39b06e26d87157b20dc7b2c03e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-254\" (UID: \"fe545e39b06e26d87157b20dc7b2c03e\") " pod="kube-system/kube-controller-manager-ip-172-31-16-254"
Apr 16 23:32:27.205995 kubelet[2966]: I0416 23:32:27.205918 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d48d4841dfdc1fe5063ef662bfbdea9d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-254\" (UID: \"d48d4841dfdc1fe5063ef662bfbdea9d\") " pod="kube-system/kube-apiserver-ip-172-31-16-254"
Apr 16 23:32:27.206167 kubelet[2966]: I0416 23:32:27.205966 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe545e39b06e26d87157b20dc7b2c03e-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-254\" (UID: \"fe545e39b06e26d87157b20dc7b2c03e\") " pod="kube-system/kube-controller-manager-ip-172-31-16-254"
Apr 16 23:32:27.206167 kubelet[2966]: I0416 23:32:27.206117 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b49a6dc65c2fed0099c82e58ff0a3c8-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-254\" (UID: \"8b49a6dc65c2fed0099c82e58ff0a3c8\") " pod="kube-system/kube-scheduler-ip-172-31-16-254"
Apr 16 23:32:27.206341 kubelet[2966]: I0416 23:32:27.206318 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d48d4841dfdc1fe5063ef662bfbdea9d-ca-certs\") pod \"kube-apiserver-ip-172-31-16-254\" (UID: \"d48d4841dfdc1fe5063ef662bfbdea9d\") " pod="kube-system/kube-apiserver-ip-172-31-16-254"
Apr 16 23:32:27.209161 kubelet[2966]: I0416 23:32:27.208508 2966 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-254"
Apr 16 23:32:27.209161 kubelet[2966]: E0416 23:32:27.209014 2966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-254?timeout=10s\": dial tcp 172.31.16.254:6443: connect: connection refused" interval="400ms"
Apr 16 23:32:27.210080 kubelet[2966]: E0416 23:32:27.209743 2966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.254:6443/api/v1/nodes\": dial tcp 172.31.16.254:6443: connect: connection refused" node="ip-172-31-16-254"
Apr 16 23:32:27.211071 kubelet[2966]: E0416 23:32:27.210748 2966 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-254\" not found" node="ip-172-31-16-254"
Apr 16 23:32:27.412963 kubelet[2966]: I0416 23:32:27.412907 2966 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-254"
Apr 16 23:32:27.413482 kubelet[2966]: E0416 23:32:27.413436 2966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.254:6443/api/v1/nodes\": dial tcp 172.31.16.254:6443: connect: connection refused" node="ip-172-31-16-254"
Apr 16 23:32:27.482088 containerd[2014]: time="2026-04-16T23:32:27.481940936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-254,Uid:fe545e39b06e26d87157b20dc7b2c03e,Namespace:kube-system,Attempt:0,}"
Apr 16 23:32:27.493855 containerd[2014]: time="2026-04-16T23:32:27.493794248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-254,Uid:8b49a6dc65c2fed0099c82e58ff0a3c8,Namespace:kube-system,Attempt:0,}"
Apr 16 23:32:27.513333 containerd[2014]: time="2026-04-16T23:32:27.513217004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-254,Uid:d48d4841dfdc1fe5063ef662bfbdea9d,Namespace:kube-system,Attempt:0,}"
Apr 16 23:32:27.541342 containerd[2014]: time="2026-04-16T23:32:27.540547952Z" level=info msg="connecting to shim b3c87053ab256896725e9611a8f6167ca098323e5063ecf05c209239b1a073cf" address="unix:///run/containerd/s/459c810279a5ca6049469fdb4b5d799521a42c8d66311d803c0b65ecf8b019ab" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:32:27.580483 containerd[2014]: time="2026-04-16T23:32:27.580427481Z" level=info msg="connecting to shim b479b92e6fe8b811ed7b49a1ba42b333ee55b09f62bf3f3470a1715e3825a909" address="unix:///run/containerd/s/6abd28cb5e9586cdc84eb3cf29eb602a30bb24b8ad87b20737d977d5d34b896b" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:32:27.610474 kubelet[2966]: E0416 23:32:27.610410 2966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-254?timeout=10s\": dial tcp 172.31.16.254:6443: connect: connection refused" interval="800ms"
Apr 16 23:32:27.626908 containerd[2014]: time="2026-04-16T23:32:27.626853657Z" level=info msg="connecting to shim 2ab86bcb94b6f8118d093bd74ae16addac162cece626ed2ec17187c68600c7eb" address="unix:///run/containerd/s/57673bb530110d45ae0eb54112719b57d5fb4615e0c03ac3163544d241084193" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:32:27.643673 systemd[1]: Started cri-containerd-b3c87053ab256896725e9611a8f6167ca098323e5063ecf05c209239b1a073cf.scope - libcontainer container b3c87053ab256896725e9611a8f6167ca098323e5063ecf05c209239b1a073cf.
Apr 16 23:32:27.668584 systemd[1]: Started cri-containerd-b479b92e6fe8b811ed7b49a1ba42b333ee55b09f62bf3f3470a1715e3825a909.scope - libcontainer container b479b92e6fe8b811ed7b49a1ba42b333ee55b09f62bf3f3470a1715e3825a909. Apr 16 23:32:27.716714 systemd[1]: Started cri-containerd-2ab86bcb94b6f8118d093bd74ae16addac162cece626ed2ec17187c68600c7eb.scope - libcontainer container 2ab86bcb94b6f8118d093bd74ae16addac162cece626ed2ec17187c68600c7eb. Apr 16 23:32:27.809327 containerd[2014]: time="2026-04-16T23:32:27.809218546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-254,Uid:fe545e39b06e26d87157b20dc7b2c03e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3c87053ab256896725e9611a8f6167ca098323e5063ecf05c209239b1a073cf\"" Apr 16 23:32:27.818827 kubelet[2966]: I0416 23:32:27.818792 2966 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-254" Apr 16 23:32:27.819833 kubelet[2966]: E0416 23:32:27.819765 2966 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.254:6443/api/v1/nodes\": dial tcp 172.31.16.254:6443: connect: connection refused" node="ip-172-31-16-254" Apr 16 23:32:27.830371 containerd[2014]: time="2026-04-16T23:32:27.830196622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-254,Uid:8b49a6dc65c2fed0099c82e58ff0a3c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"b479b92e6fe8b811ed7b49a1ba42b333ee55b09f62bf3f3470a1715e3825a909\"" Apr 16 23:32:27.831668 containerd[2014]: time="2026-04-16T23:32:27.831374890Z" level=info msg="CreateContainer within sandbox \"b3c87053ab256896725e9611a8f6167ca098323e5063ecf05c209239b1a073cf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 23:32:27.846455 containerd[2014]: time="2026-04-16T23:32:27.845955166Z" level=info msg="CreateContainer within sandbox \"b479b92e6fe8b811ed7b49a1ba42b333ee55b09f62bf3f3470a1715e3825a909\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 23:32:27.854837 containerd[2014]: time="2026-04-16T23:32:27.854740270Z" level=info msg="Container f85d5df95e91cdba86e50a67e3001a86cccbbe0ae325a62113c91deca55ef70b: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:32:27.873162 containerd[2014]: time="2026-04-16T23:32:27.873102574Z" level=info msg="CreateContainer within sandbox \"b3c87053ab256896725e9611a8f6167ca098323e5063ecf05c209239b1a073cf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f85d5df95e91cdba86e50a67e3001a86cccbbe0ae325a62113c91deca55ef70b\"" Apr 16 23:32:27.874877 containerd[2014]: time="2026-04-16T23:32:27.874831570Z" level=info msg="StartContainer for \"f85d5df95e91cdba86e50a67e3001a86cccbbe0ae325a62113c91deca55ef70b\"" Apr 16 23:32:27.877638 containerd[2014]: time="2026-04-16T23:32:27.877571974Z" level=info msg="Container e97666a69618427cef37b747f13216798c6801c8f14987c02a4fd147ca0f22fd: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:32:27.880005 containerd[2014]: time="2026-04-16T23:32:27.879959134Z" level=info msg="connecting to shim f85d5df95e91cdba86e50a67e3001a86cccbbe0ae325a62113c91deca55ef70b" address="unix:///run/containerd/s/459c810279a5ca6049469fdb4b5d799521a42c8d66311d803c0b65ecf8b019ab" protocol=ttrpc version=3 Apr 16 23:32:27.881767 containerd[2014]: time="2026-04-16T23:32:27.880961590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-254,Uid:d48d4841dfdc1fe5063ef662bfbdea9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ab86bcb94b6f8118d093bd74ae16addac162cece626ed2ec17187c68600c7eb\"" Apr 16 23:32:27.891705 containerd[2014]: time="2026-04-16T23:32:27.891475114Z" level=info msg="CreateContainer within sandbox \"2ab86bcb94b6f8118d093bd74ae16addac162cece626ed2ec17187c68600c7eb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 23:32:27.896537 containerd[2014]: 
time="2026-04-16T23:32:27.896483602Z" level=info msg="CreateContainer within sandbox \"b479b92e6fe8b811ed7b49a1ba42b333ee55b09f62bf3f3470a1715e3825a909\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e97666a69618427cef37b747f13216798c6801c8f14987c02a4fd147ca0f22fd\"" Apr 16 23:32:27.898412 containerd[2014]: time="2026-04-16T23:32:27.898042738Z" level=info msg="StartContainer for \"e97666a69618427cef37b747f13216798c6801c8f14987c02a4fd147ca0f22fd\"" Apr 16 23:32:27.907770 containerd[2014]: time="2026-04-16T23:32:27.905153590Z" level=info msg="connecting to shim e97666a69618427cef37b747f13216798c6801c8f14987c02a4fd147ca0f22fd" address="unix:///run/containerd/s/6abd28cb5e9586cdc84eb3cf29eb602a30bb24b8ad87b20737d977d5d34b896b" protocol=ttrpc version=3 Apr 16 23:32:27.927852 systemd[1]: Started cri-containerd-f85d5df95e91cdba86e50a67e3001a86cccbbe0ae325a62113c91deca55ef70b.scope - libcontainer container f85d5df95e91cdba86e50a67e3001a86cccbbe0ae325a62113c91deca55ef70b. 
Apr 16 23:32:27.938696 containerd[2014]: time="2026-04-16T23:32:27.937660798Z" level=info msg="Container 5528466444b8240a31c5a49e2f16fc771a5d65f78fe65bac3de45c9e4c472d03: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:32:27.946843 kubelet[2966]: E0416 23:32:27.946663 2966 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.254:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.254:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 16 23:32:27.958837 containerd[2014]: time="2026-04-16T23:32:27.958757855Z" level=info msg="CreateContainer within sandbox \"2ab86bcb94b6f8118d093bd74ae16addac162cece626ed2ec17187c68600c7eb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5528466444b8240a31c5a49e2f16fc771a5d65f78fe65bac3de45c9e4c472d03\"" Apr 16 23:32:27.962031 containerd[2014]: time="2026-04-16T23:32:27.961966847Z" level=info msg="StartContainer for \"5528466444b8240a31c5a49e2f16fc771a5d65f78fe65bac3de45c9e4c472d03\"" Apr 16 23:32:27.965965 containerd[2014]: time="2026-04-16T23:32:27.965204159Z" level=info msg="connecting to shim 5528466444b8240a31c5a49e2f16fc771a5d65f78fe65bac3de45c9e4c472d03" address="unix:///run/containerd/s/57673bb530110d45ae0eb54112719b57d5fb4615e0c03ac3163544d241084193" protocol=ttrpc version=3 Apr 16 23:32:27.988093 systemd[1]: Started cri-containerd-e97666a69618427cef37b747f13216798c6801c8f14987c02a4fd147ca0f22fd.scope - libcontainer container e97666a69618427cef37b747f13216798c6801c8f14987c02a4fd147ca0f22fd. Apr 16 23:32:28.010909 systemd[1]: Started cri-containerd-5528466444b8240a31c5a49e2f16fc771a5d65f78fe65bac3de45c9e4c472d03.scope - libcontainer container 5528466444b8240a31c5a49e2f16fc771a5d65f78fe65bac3de45c9e4c472d03. 
Apr 16 23:32:28.101323 containerd[2014]: time="2026-04-16T23:32:28.101205139Z" level=info msg="StartContainer for \"f85d5df95e91cdba86e50a67e3001a86cccbbe0ae325a62113c91deca55ef70b\" returns successfully" Apr 16 23:32:28.197851 containerd[2014]: time="2026-04-16T23:32:28.195615992Z" level=info msg="StartContainer for \"e97666a69618427cef37b747f13216798c6801c8f14987c02a4fd147ca0f22fd\" returns successfully" Apr 16 23:32:28.201618 containerd[2014]: time="2026-04-16T23:32:28.201556760Z" level=info msg="StartContainer for \"5528466444b8240a31c5a49e2f16fc771a5d65f78fe65bac3de45c9e4c472d03\" returns successfully" Apr 16 23:32:28.279318 kubelet[2966]: E0416 23:32:28.278331 2966 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.254:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.254:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 16 23:32:28.373031 kubelet[2966]: E0416 23:32:28.372865 2966 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.254:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-254&limit=500&resourceVersion=0\": dial tcp 172.31.16.254:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 16 23:32:28.411398 kubelet[2966]: E0416 23:32:28.411326 2966 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-254?timeout=10s\": dial tcp 172.31.16.254:6443: connect: connection refused" interval="1.6s" Apr 16 23:32:28.625423 kubelet[2966]: I0416 23:32:28.624177 2966 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-254" Apr 16 23:32:29.087526 kubelet[2966]: E0416 23:32:29.087488 2966 kubelet.go:3305] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-254\" not found" node="ip-172-31-16-254" Apr 16 23:32:29.095864 kubelet[2966]: E0416 23:32:29.095826 2966 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-254\" not found" node="ip-172-31-16-254" Apr 16 23:32:29.101979 kubelet[2966]: E0416 23:32:29.101719 2966 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-254\" not found" node="ip-172-31-16-254" Apr 16 23:32:30.104952 kubelet[2966]: E0416 23:32:30.104892 2966 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-254\" not found" node="ip-172-31-16-254" Apr 16 23:32:30.105541 kubelet[2966]: E0416 23:32:30.105497 2966 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-254\" not found" node="ip-172-31-16-254" Apr 16 23:32:30.107144 kubelet[2966]: E0416 23:32:30.107098 2966 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-254\" not found" node="ip-172-31-16-254" Apr 16 23:32:31.119330 kubelet[2966]: E0416 23:32:31.117734 2966 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-254\" not found" node="ip-172-31-16-254" Apr 16 23:32:31.119330 kubelet[2966]: E0416 23:32:31.117875 2966 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-254\" not found" node="ip-172-31-16-254" Apr 16 23:32:31.399662 kubelet[2966]: E0416 23:32:31.399591 2966 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-254\" not found" node="ip-172-31-16-254" Apr 16 23:32:31.558319 
kubelet[2966]: E0416 23:32:31.558006 2966 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-16-254.18a6fa5537d32afa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-254,UID:ip-172-31-16-254,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-254,},FirstTimestamp:2026-04-16 23:32:26.959989498 +0000 UTC m=+1.353190352,LastTimestamp:2026-04-16 23:32:26.959989498 +0000 UTC m=+1.353190352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-254,}" Apr 16 23:32:31.577607 kubelet[2966]: I0416 23:32:31.577436 2966 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-254" Apr 16 23:32:31.596664 kubelet[2966]: I0416 23:32:31.596601 2966 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-254" Apr 16 23:32:31.618050 kubelet[2966]: E0416 23:32:31.617937 2966 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-254\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-254" Apr 16 23:32:31.618050 kubelet[2966]: I0416 23:32:31.618006 2966 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-254" Apr 16 23:32:31.623687 kubelet[2966]: E0416 23:32:31.623334 2966 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-254\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-254" Apr 16 23:32:31.623687 kubelet[2966]: I0416 23:32:31.623380 2966 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ip-172-31-16-254" Apr 16 23:32:31.627907 kubelet[2966]: E0416 23:32:31.627864 2966 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-254\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-254" Apr 16 23:32:31.975850 kubelet[2966]: I0416 23:32:31.975771 2966 apiserver.go:52] "Watching apiserver" Apr 16 23:32:31.995232 kubelet[2966]: I0416 23:32:31.995177 2966 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 16 23:32:33.576134 systemd[1]: Reload requested from client PID 3247 ('systemctl') (unit session-7.scope)... Apr 16 23:32:33.576159 systemd[1]: Reloading... Apr 16 23:32:33.775402 zram_generator::config[3297]: No configuration found. Apr 16 23:32:34.255316 systemd[1]: Reloading finished in 678 ms. Apr 16 23:32:34.318891 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:32:34.339914 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 23:32:34.340415 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:32:34.340493 systemd[1]: kubelet.service: Consumed 2.072s CPU time, 126.2M memory peak. Apr 16 23:32:34.345627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:32:34.444547 update_engine[1997]: I20260416 23:32:34.444468 1997 update_attempter.cc:509] Updating boot flags... Apr 16 23:32:34.991584 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:32:35.049867 (kubelet)[3456]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 23:32:35.191327 kubelet[3456]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 23:32:35.191327 kubelet[3456]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 16 23:32:35.191327 kubelet[3456]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 23:32:35.191327 kubelet[3456]: I0416 23:32:35.189568 3456 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 16 23:32:35.211362 kubelet[3456]: I0416 23:32:35.210555 3456 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 16 23:32:35.211362 kubelet[3456]: I0416 23:32:35.210630 3456 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 23:32:35.211575 kubelet[3456]: I0416 23:32:35.211347 3456 server.go:956] "Client rotation is on, will bootstrap in background" Apr 16 23:32:35.229589 kubelet[3456]: I0416 23:32:35.228848 3456 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 23:32:35.250831 kubelet[3456]: I0416 23:32:35.243691 3456 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 23:32:35.260998 kubelet[3456]: I0416 23:32:35.260950 3456 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 23:32:35.276007 kubelet[3456]: I0416 23:32:35.274647 3456 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 16 23:32:35.278179 kubelet[3456]: I0416 23:32:35.276358 3456 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 23:32:35.278179 kubelet[3456]: I0416 23:32:35.276427 3456 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-254","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 23:32:35.278179 kubelet[3456]: I0416 23:32:35.276713 3456 topology_manager.go:138] "Creating topology manager with none policy" Apr 16 
23:32:35.278179 kubelet[3456]: I0416 23:32:35.276733 3456 container_manager_linux.go:303] "Creating device plugin manager" Apr 16 23:32:35.278618 kubelet[3456]: I0416 23:32:35.278265 3456 state_mem.go:36] "Initialized new in-memory state store" Apr 16 23:32:35.278670 kubelet[3456]: I0416 23:32:35.278640 3456 kubelet.go:480] "Attempting to sync node with API server" Apr 16 23:32:35.281393 kubelet[3456]: I0416 23:32:35.281333 3456 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 23:32:35.287380 kubelet[3456]: I0416 23:32:35.281477 3456 kubelet.go:386] "Adding apiserver pod source" Apr 16 23:32:35.287380 kubelet[3456]: I0416 23:32:35.284329 3456 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 23:32:35.287579 kubelet[3456]: I0416 23:32:35.287513 3456 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 23:32:35.288567 kubelet[3456]: I0416 23:32:35.288516 3456 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 23:32:35.315002 kubelet[3456]: I0416 23:32:35.314951 3456 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 16 23:32:35.316315 kubelet[3456]: I0416 23:32:35.315158 3456 server.go:1289] "Started kubelet" Apr 16 23:32:35.322389 kubelet[3456]: I0416 23:32:35.315453 3456 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 23:32:35.332476 kubelet[3456]: I0416 23:32:35.329989 3456 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 16 23:32:35.339319 kubelet[3456]: I0416 23:32:35.334275 3456 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 23:32:35.353655 kubelet[3456]: I0416 23:32:35.330242 3456 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 23:32:35.399115 kubelet[3456]: I0416 23:32:35.330029 3456 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 23:32:35.426502 kubelet[3456]: I0416 23:32:35.343391 3456 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 16 23:32:35.429881 kubelet[3456]: I0416 23:32:35.427792 3456 server.go:317] "Adding debug handlers to kubelet server" Apr 16 23:32:35.434737 kubelet[3456]: E0416 23:32:35.344467 3456 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-254\" not found" Apr 16 23:32:35.434737 kubelet[3456]: I0416 23:32:35.343619 3456 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 16 23:32:35.435803 kubelet[3456]: I0416 23:32:35.435743 3456 factory.go:223] Registration of the systemd container factory successfully Apr 16 23:32:35.437487 kubelet[3456]: I0416 23:32:35.435933 3456 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 23:32:35.438382 kubelet[3456]: I0416 23:32:35.437695 3456 reconciler.go:26] "Reconciler: start to sync state" Apr 16 23:32:35.538594 kubelet[3456]: I0416 23:32:35.537531 3456 factory.go:223] Registration of the containerd container factory successfully Apr 16 23:32:35.546539 kubelet[3456]: E0416 23:32:35.546490 3456 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-254\" not found" Apr 16 23:32:35.651597 kubelet[3456]: E0416 23:32:35.647789 3456 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 23:32:35.742632 kubelet[3456]: I0416 23:32:35.742574 3456 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Apr 16 23:32:35.769277 kubelet[3456]: I0416 23:32:35.768962 3456 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 16 23:32:35.769277 kubelet[3456]: I0416 23:32:35.769006 3456 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 16 23:32:35.769277 kubelet[3456]: I0416 23:32:35.769038 3456 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 23:32:35.769277 kubelet[3456]: I0416 23:32:35.769051 3456 kubelet.go:2436] "Starting kubelet main sync loop" Apr 16 23:32:35.769277 kubelet[3456]: E0416 23:32:35.769119 3456 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 23:32:35.871390 kubelet[3456]: E0416 23:32:35.870322 3456 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 23:32:35.943184 kubelet[3456]: I0416 23:32:35.943132 3456 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 16 23:32:35.943365 kubelet[3456]: I0416 23:32:35.943201 3456 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 16 23:32:35.943365 kubelet[3456]: I0416 23:32:35.943240 3456 state_mem.go:36] "Initialized new in-memory state store" Apr 16 23:32:35.943762 kubelet[3456]: I0416 23:32:35.943725 3456 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 16 23:32:35.943833 kubelet[3456]: I0416 23:32:35.943759 3456 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 16 23:32:35.943833 kubelet[3456]: I0416 23:32:35.943799 3456 policy_none.go:49] "None policy: Start" Apr 16 23:32:35.943833 kubelet[3456]: I0416 23:32:35.943818 3456 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 16 23:32:35.943985 kubelet[3456]: I0416 23:32:35.943839 3456 state_mem.go:35] "Initializing new in-memory state store" Apr 16 23:32:35.944043 
kubelet[3456]: I0416 23:32:35.944003 3456 state_mem.go:75] "Updated machine memory state"
Apr 16 23:32:35.953168 kubelet[3456]: E0416 23:32:35.952388 3456 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 23:32:35.953168 kubelet[3456]: I0416 23:32:35.952659 3456 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 16 23:32:35.953168 kubelet[3456]: I0416 23:32:35.952677 3456 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 23:32:35.954201 kubelet[3456]: I0416 23:32:35.954160 3456 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 16 23:32:35.963898 kubelet[3456]: E0416 23:32:35.961423 3456 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 16 23:32:36.073073 kubelet[3456]: I0416 23:32:36.072925 3456 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-254"
Apr 16 23:32:36.073427 kubelet[3456]: I0416 23:32:36.073392 3456 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-254"
Apr 16 23:32:36.076271 kubelet[3456]: I0416 23:32:36.075575 3456 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-254"
Apr 16 23:32:36.085650 kubelet[3456]: I0416 23:32:36.083268 3456 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-254"
Apr 16 23:32:36.104884 kubelet[3456]: I0416 23:32:36.104829 3456 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-254"
Apr 16 23:32:36.105045 kubelet[3456]: I0416 23:32:36.104947 3456 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-254"
Apr 16 23:32:36.156953 kubelet[3456]: I0416 23:32:36.156881 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d48d4841dfdc1fe5063ef662bfbdea9d-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-254\" (UID: \"d48d4841dfdc1fe5063ef662bfbdea9d\") " pod="kube-system/kube-apiserver-ip-172-31-16-254"
Apr 16 23:32:36.157116 kubelet[3456]: I0416 23:32:36.156992 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d48d4841dfdc1fe5063ef662bfbdea9d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-254\" (UID: \"d48d4841dfdc1fe5063ef662bfbdea9d\") " pod="kube-system/kube-apiserver-ip-172-31-16-254"
Apr 16 23:32:36.157116 kubelet[3456]: I0416 23:32:36.157101 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe545e39b06e26d87157b20dc7b2c03e-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-254\" (UID: \"fe545e39b06e26d87157b20dc7b2c03e\") " pod="kube-system/kube-controller-manager-ip-172-31-16-254"
Apr 16 23:32:36.157219 kubelet[3456]: I0416 23:32:36.157183 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe545e39b06e26d87157b20dc7b2c03e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-254\" (UID: \"fe545e39b06e26d87157b20dc7b2c03e\") " pod="kube-system/kube-controller-manager-ip-172-31-16-254"
Apr 16 23:32:36.157584 kubelet[3456]: I0416 23:32:36.157536 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe545e39b06e26d87157b20dc7b2c03e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-254\" (UID: \"fe545e39b06e26d87157b20dc7b2c03e\") " pod="kube-system/kube-controller-manager-ip-172-31-16-254"
Apr 16 23:32:36.157663 kubelet[3456]: I0416 23:32:36.157627 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe545e39b06e26d87157b20dc7b2c03e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-254\" (UID: \"fe545e39b06e26d87157b20dc7b2c03e\") " pod="kube-system/kube-controller-manager-ip-172-31-16-254"
Apr 16 23:32:36.158005 kubelet[3456]: I0416 23:32:36.157955 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe545e39b06e26d87157b20dc7b2c03e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-254\" (UID: \"fe545e39b06e26d87157b20dc7b2c03e\") " pod="kube-system/kube-controller-manager-ip-172-31-16-254"
Apr 16 23:32:36.158434 kubelet[3456]: I0416 23:32:36.158387 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b49a6dc65c2fed0099c82e58ff0a3c8-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-254\" (UID: \"8b49a6dc65c2fed0099c82e58ff0a3c8\") " pod="kube-system/kube-scheduler-ip-172-31-16-254"
Apr 16 23:32:36.158497 kubelet[3456]: I0416 23:32:36.158452 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d48d4841dfdc1fe5063ef662bfbdea9d-ca-certs\") pod \"kube-apiserver-ip-172-31-16-254\" (UID: \"d48d4841dfdc1fe5063ef662bfbdea9d\") " pod="kube-system/kube-apiserver-ip-172-31-16-254"
Apr 16 23:32:36.285951 kubelet[3456]: I0416 23:32:36.285867 3456 apiserver.go:52] "Watching apiserver"
Apr 16 23:32:36.335361 kubelet[3456]: I0416 23:32:36.335118 3456 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 16 23:32:36.457543 kubelet[3456]: I0416 23:32:36.456942 3456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-254" podStartSLOduration=0.456917177 podStartE2EDuration="456.917177ms" podCreationTimestamp="2026-04-16 23:32:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:32:36.455370965 +0000 UTC m=+1.375670240" watchObservedRunningTime="2026-04-16 23:32:36.456917177 +0000 UTC m=+1.377216404"
Apr 16 23:32:36.479571 kubelet[3456]: I0416 23:32:36.479442 3456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-254" podStartSLOduration=0.479421989 podStartE2EDuration="479.421989ms" podCreationTimestamp="2026-04-16 23:32:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:32:36.479397653 +0000 UTC m=+1.399696880" watchObservedRunningTime="2026-04-16 23:32:36.479421989 +0000 UTC m=+1.399721204"
Apr 16 23:32:36.515011 kubelet[3456]: I0416 23:32:36.514787 3456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-254" podStartSLOduration=0.514766009 podStartE2EDuration="514.766009ms" podCreationTimestamp="2026-04-16 23:32:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:32:36.497003693 +0000 UTC m=+1.417302920" watchObservedRunningTime="2026-04-16 23:32:36.514766009 +0000 UTC m=+1.435065248"
Apr 16 23:32:39.589556 kubelet[3456]: I0416 23:32:39.589348 3456 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 16 23:32:39.591387 containerd[2014]: time="2026-04-16T23:32:39.590796440Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 16 23:32:39.591866 kubelet[3456]: I0416 23:32:39.591166 3456 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 16 23:32:40.415959 systemd[1]: Created slice kubepods-besteffort-pod4470d839_af83_47cb_9418_fe515ffde66e.slice - libcontainer container kubepods-besteffort-pod4470d839_af83_47cb_9418_fe515ffde66e.slice.
Apr 16 23:32:40.484128 kubelet[3456]: I0416 23:32:40.483867 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4470d839-af83-47cb-9418-fe515ffde66e-lib-modules\") pod \"kube-proxy-vk84t\" (UID: \"4470d839-af83-47cb-9418-fe515ffde66e\") " pod="kube-system/kube-proxy-vk84t"
Apr 16 23:32:40.484128 kubelet[3456]: I0416 23:32:40.483932 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4470d839-af83-47cb-9418-fe515ffde66e-xtables-lock\") pod \"kube-proxy-vk84t\" (UID: \"4470d839-af83-47cb-9418-fe515ffde66e\") " pod="kube-system/kube-proxy-vk84t"
Apr 16 23:32:40.484128 kubelet[3456]: I0416 23:32:40.483984 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5n7s\" (UniqueName: \"kubernetes.io/projected/4470d839-af83-47cb-9418-fe515ffde66e-kube-api-access-k5n7s\") pod \"kube-proxy-vk84t\" (UID: \"4470d839-af83-47cb-9418-fe515ffde66e\") " pod="kube-system/kube-proxy-vk84t"
Apr 16 23:32:40.484128 kubelet[3456]: I0416 23:32:40.484028 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4470d839-af83-47cb-9418-fe515ffde66e-kube-proxy\") pod \"kube-proxy-vk84t\" (UID: \"4470d839-af83-47cb-9418-fe515ffde66e\") " pod="kube-system/kube-proxy-vk84t"
Apr 16 23:32:40.731657 containerd[2014]: time="2026-04-16T23:32:40.731071330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vk84t,Uid:4470d839-af83-47cb-9418-fe515ffde66e,Namespace:kube-system,Attempt:0,}"
Apr 16 23:32:40.780603 containerd[2014]: time="2026-04-16T23:32:40.780530170Z" level=info msg="connecting to shim 488ccd520ea22e1c77306c3aa135f2333e61823f437e22a25f7f7d3ae79f45a4" address="unix:///run/containerd/s/df335542dab67bfe71979ec6d251741df1d49744ff8108dafd3e2667124a074c" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:32:40.845628 systemd[1]: Started cri-containerd-488ccd520ea22e1c77306c3aa135f2333e61823f437e22a25f7f7d3ae79f45a4.scope - libcontainer container 488ccd520ea22e1c77306c3aa135f2333e61823f437e22a25f7f7d3ae79f45a4.
Apr 16 23:32:40.910417 systemd[1]: Created slice kubepods-besteffort-pod59d1f7b9_b64f_49ed_bba0_c1c172e38133.slice - libcontainer container kubepods-besteffort-pod59d1f7b9_b64f_49ed_bba0_c1c172e38133.slice.
Apr 16 23:32:40.980372 containerd[2014]: time="2026-04-16T23:32:40.980215847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vk84t,Uid:4470d839-af83-47cb-9418-fe515ffde66e,Namespace:kube-system,Attempt:0,} returns sandbox id \"488ccd520ea22e1c77306c3aa135f2333e61823f437e22a25f7f7d3ae79f45a4\""
Apr 16 23:32:40.987119 kubelet[3456]: I0416 23:32:40.986856 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9gqn\" (UniqueName: \"kubernetes.io/projected/59d1f7b9-b64f-49ed-bba0-c1c172e38133-kube-api-access-p9gqn\") pod \"tigera-operator-6bf85f8dd-zl4cx\" (UID: \"59d1f7b9-b64f-49ed-bba0-c1c172e38133\") " pod="tigera-operator/tigera-operator-6bf85f8dd-zl4cx"
Apr 16 23:32:40.987119 kubelet[3456]: I0416 23:32:40.986933 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/59d1f7b9-b64f-49ed-bba0-c1c172e38133-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-zl4cx\" (UID: \"59d1f7b9-b64f-49ed-bba0-c1c172e38133\") " pod="tigera-operator/tigera-operator-6bf85f8dd-zl4cx"
Apr 16 23:32:40.992664 containerd[2014]: time="2026-04-16T23:32:40.992602775Z" level=info msg="CreateContainer within sandbox \"488ccd520ea22e1c77306c3aa135f2333e61823f437e22a25f7f7d3ae79f45a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 16 23:32:41.011923 containerd[2014]: time="2026-04-16T23:32:41.010455007Z" level=info msg="Container 2393d37f0c6ff1565a6e80a5ed7c2240a74644e9643ed6c0923d3c2f74c9f179: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:32:41.029092 containerd[2014]: time="2026-04-16T23:32:41.029031535Z" level=info msg="CreateContainer within sandbox \"488ccd520ea22e1c77306c3aa135f2333e61823f437e22a25f7f7d3ae79f45a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2393d37f0c6ff1565a6e80a5ed7c2240a74644e9643ed6c0923d3c2f74c9f179\""
Apr 16 23:32:41.030215 containerd[2014]: time="2026-04-16T23:32:41.030158600Z" level=info msg="StartContainer for \"2393d37f0c6ff1565a6e80a5ed7c2240a74644e9643ed6c0923d3c2f74c9f179\""
Apr 16 23:32:41.033334 containerd[2014]: time="2026-04-16T23:32:41.033239564Z" level=info msg="connecting to shim 2393d37f0c6ff1565a6e80a5ed7c2240a74644e9643ed6c0923d3c2f74c9f179" address="unix:///run/containerd/s/df335542dab67bfe71979ec6d251741df1d49744ff8108dafd3e2667124a074c" protocol=ttrpc version=3
Apr 16 23:32:41.069581 systemd[1]: Started cri-containerd-2393d37f0c6ff1565a6e80a5ed7c2240a74644e9643ed6c0923d3c2f74c9f179.scope - libcontainer container 2393d37f0c6ff1565a6e80a5ed7c2240a74644e9643ed6c0923d3c2f74c9f179.
Apr 16 23:32:41.189183 containerd[2014]: time="2026-04-16T23:32:41.189121184Z" level=info msg="StartContainer for \"2393d37f0c6ff1565a6e80a5ed7c2240a74644e9643ed6c0923d3c2f74c9f179\" returns successfully"
Apr 16 23:32:41.218660 containerd[2014]: time="2026-04-16T23:32:41.218472260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-zl4cx,Uid:59d1f7b9-b64f-49ed-bba0-c1c172e38133,Namespace:tigera-operator,Attempt:0,}"
Apr 16 23:32:41.257519 containerd[2014]: time="2026-04-16T23:32:41.255832257Z" level=info msg="connecting to shim 4afb4c4a236adfc7eb47e877d31fe278ff5c6e152b95a3d3a62f57fa7720fa81" address="unix:///run/containerd/s/ef3d757f800b0cb705a2927b865122b5891d6429ed7ec39a4b4e1d6558e615cb" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:32:41.305614 systemd[1]: Started cri-containerd-4afb4c4a236adfc7eb47e877d31fe278ff5c6e152b95a3d3a62f57fa7720fa81.scope - libcontainer container 4afb4c4a236adfc7eb47e877d31fe278ff5c6e152b95a3d3a62f57fa7720fa81.
Apr 16 23:32:41.413317 containerd[2014]: time="2026-04-16T23:32:41.413210421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-zl4cx,Uid:59d1f7b9-b64f-49ed-bba0-c1c172e38133,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4afb4c4a236adfc7eb47e877d31fe278ff5c6e152b95a3d3a62f57fa7720fa81\""
Apr 16 23:32:41.418165 containerd[2014]: time="2026-04-16T23:32:41.417263925Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 16 23:32:41.926118 kubelet[3456]: I0416 23:32:41.925764 3456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vk84t" podStartSLOduration=1.925745292 podStartE2EDuration="1.925745292s" podCreationTimestamp="2026-04-16 23:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:32:41.925201464 +0000 UTC m=+6.845500679" watchObservedRunningTime="2026-04-16 23:32:41.925745292 +0000 UTC m=+6.846044507"
Apr 16 23:32:42.606051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983673178.mount: Deactivated successfully.
Apr 16 23:32:43.751332 containerd[2014]: time="2026-04-16T23:32:43.750832945Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:43.752206 containerd[2014]: time="2026-04-16T23:32:43.752148733Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565"
Apr 16 23:32:43.754338 containerd[2014]: time="2026-04-16T23:32:43.753344005Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:43.758518 containerd[2014]: time="2026-04-16T23:32:43.758465773Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.34105666s"
Apr 16 23:32:43.758725 containerd[2014]: time="2026-04-16T23:32:43.758699101Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\""
Apr 16 23:32:43.758925 containerd[2014]: time="2026-04-16T23:32:43.758653813Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:43.767310 containerd[2014]: time="2026-04-16T23:32:43.767235025Z" level=info msg="CreateContainer within sandbox \"4afb4c4a236adfc7eb47e877d31fe278ff5c6e152b95a3d3a62f57fa7720fa81\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 16 23:32:43.782794 containerd[2014]: time="2026-04-16T23:32:43.782744785Z" level=info msg="Container 0ebdd1dc05885f833d4757579755df192262bbf432adf75a643624faab4954d1: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:32:43.795454 containerd[2014]: time="2026-04-16T23:32:43.795388057Z" level=info msg="CreateContainer within sandbox \"4afb4c4a236adfc7eb47e877d31fe278ff5c6e152b95a3d3a62f57fa7720fa81\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0ebdd1dc05885f833d4757579755df192262bbf432adf75a643624faab4954d1\""
Apr 16 23:32:43.797705 containerd[2014]: time="2026-04-16T23:32:43.797623201Z" level=info msg="StartContainer for \"0ebdd1dc05885f833d4757579755df192262bbf432adf75a643624faab4954d1\""
Apr 16 23:32:43.801561 containerd[2014]: time="2026-04-16T23:32:43.801493693Z" level=info msg="connecting to shim 0ebdd1dc05885f833d4757579755df192262bbf432adf75a643624faab4954d1" address="unix:///run/containerd/s/ef3d757f800b0cb705a2927b865122b5891d6429ed7ec39a4b4e1d6558e615cb" protocol=ttrpc version=3
Apr 16 23:32:43.846603 systemd[1]: Started cri-containerd-0ebdd1dc05885f833d4757579755df192262bbf432adf75a643624faab4954d1.scope - libcontainer container 0ebdd1dc05885f833d4757579755df192262bbf432adf75a643624faab4954d1.
Apr 16 23:32:43.900430 containerd[2014]: time="2026-04-16T23:32:43.900355034Z" level=info msg="StartContainer for \"0ebdd1dc05885f833d4757579755df192262bbf432adf75a643624faab4954d1\" returns successfully"
Apr 16 23:32:44.941278 kubelet[3456]: I0416 23:32:44.940958 3456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-zl4cx" podStartSLOduration=2.594986327 podStartE2EDuration="4.940936131s" podCreationTimestamp="2026-04-16 23:32:40 +0000 UTC" firstStartedPulling="2026-04-16 23:32:41.415546365 +0000 UTC m=+6.335845580" lastFinishedPulling="2026-04-16 23:32:43.761496169 +0000 UTC m=+8.681795384" observedRunningTime="2026-04-16 23:32:43.93464411 +0000 UTC m=+8.854943361" watchObservedRunningTime="2026-04-16 23:32:44.940936131 +0000 UTC m=+9.861235346"
Apr 16 23:32:52.384967 sudo[2375]: pam_unix(sudo:session): session closed for user root
Apr 16 23:32:52.551311 sshd[2374]: Connection closed by 20.229.252.112 port 44824
Apr 16 23:32:52.553363 sshd-session[2371]: pam_unix(sshd:session): session closed for user core
Apr 16 23:32:52.561770 systemd[1]: sshd@6-172.31.16.254:22-20.229.252.112:44824.service: Deactivated successfully.
Apr 16 23:32:52.571246 systemd[1]: session-7.scope: Deactivated successfully.
Apr 16 23:32:52.573819 systemd[1]: session-7.scope: Consumed 9.921s CPU time, 222.5M memory peak.
Apr 16 23:32:52.580665 systemd-logind[1991]: Session 7 logged out. Waiting for processes to exit.
Apr 16 23:32:52.587022 systemd-logind[1991]: Removed session 7.
Apr 16 23:33:04.008431 systemd[1]: Created slice kubepods-besteffort-pod7580353e_c2d7_4810_a006_372a352c18f0.slice - libcontainer container kubepods-besteffort-pod7580353e_c2d7_4810_a006_372a352c18f0.slice.
Apr 16 23:33:04.055218 kubelet[3456]: I0416 23:33:04.055171 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlg8t\" (UniqueName: \"kubernetes.io/projected/7580353e-c2d7-4810-a006-372a352c18f0-kube-api-access-nlg8t\") pod \"calico-typha-6798c79bb-vnpcg\" (UID: \"7580353e-c2d7-4810-a006-372a352c18f0\") " pod="calico-system/calico-typha-6798c79bb-vnpcg"
Apr 16 23:33:04.056049 kubelet[3456]: I0416 23:33:04.056013 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7580353e-c2d7-4810-a006-372a352c18f0-tigera-ca-bundle\") pod \"calico-typha-6798c79bb-vnpcg\" (UID: \"7580353e-c2d7-4810-a006-372a352c18f0\") " pod="calico-system/calico-typha-6798c79bb-vnpcg"
Apr 16 23:33:04.057515 kubelet[3456]: I0416 23:33:04.057463 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7580353e-c2d7-4810-a006-372a352c18f0-typha-certs\") pod \"calico-typha-6798c79bb-vnpcg\" (UID: \"7580353e-c2d7-4810-a006-372a352c18f0\") " pod="calico-system/calico-typha-6798c79bb-vnpcg"
Apr 16 23:33:04.211110 systemd[1]: Created slice kubepods-besteffort-pod7bbddefd_573a_4831_b08b_73619ae1ca48.slice - libcontainer container kubepods-besteffort-pod7bbddefd_573a_4831_b08b_73619ae1ca48.slice.
Apr 16 23:33:04.259823 kubelet[3456]: I0416 23:33:04.259758 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7bbddefd-573a-4831-b08b-73619ae1ca48-node-certs\") pod \"calico-node-nk69b\" (UID: \"7bbddefd-573a-4831-b08b-73619ae1ca48\") " pod="calico-system/calico-node-nk69b"
Apr 16 23:33:04.259983 kubelet[3456]: I0416 23:33:04.259832 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bbddefd-573a-4831-b08b-73619ae1ca48-tigera-ca-bundle\") pod \"calico-node-nk69b\" (UID: \"7bbddefd-573a-4831-b08b-73619ae1ca48\") " pod="calico-system/calico-node-nk69b"
Apr 16 23:33:04.259983 kubelet[3456]: I0416 23:33:04.259874 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7bbddefd-573a-4831-b08b-73619ae1ca48-var-run-calico\") pod \"calico-node-nk69b\" (UID: \"7bbddefd-573a-4831-b08b-73619ae1ca48\") " pod="calico-system/calico-node-nk69b"
Apr 16 23:33:04.259983 kubelet[3456]: I0416 23:33:04.259912 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7bbddefd-573a-4831-b08b-73619ae1ca48-cni-net-dir\") pod \"calico-node-nk69b\" (UID: \"7bbddefd-573a-4831-b08b-73619ae1ca48\") " pod="calico-system/calico-node-nk69b"
Apr 16 23:33:04.259983 kubelet[3456]: I0416 23:33:04.259952 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7bbddefd-573a-4831-b08b-73619ae1ca48-var-lib-calico\") pod \"calico-node-nk69b\" (UID: \"7bbddefd-573a-4831-b08b-73619ae1ca48\") " pod="calico-system/calico-node-nk69b"
Apr 16 23:33:04.260202 kubelet[3456]: I0416 23:33:04.259991 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bbddefd-573a-4831-b08b-73619ae1ca48-lib-modules\") pod \"calico-node-nk69b\" (UID: \"7bbddefd-573a-4831-b08b-73619ae1ca48\") " pod="calico-system/calico-node-nk69b"
Apr 16 23:33:04.260202 kubelet[3456]: I0416 23:33:04.260024 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bbddefd-573a-4831-b08b-73619ae1ca48-xtables-lock\") pod \"calico-node-nk69b\" (UID: \"7bbddefd-573a-4831-b08b-73619ae1ca48\") " pod="calico-system/calico-node-nk69b"
Apr 16 23:33:04.260202 kubelet[3456]: I0416 23:33:04.260064 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n847t\" (UniqueName: \"kubernetes.io/projected/7bbddefd-573a-4831-b08b-73619ae1ca48-kube-api-access-n847t\") pod \"calico-node-nk69b\" (UID: \"7bbddefd-573a-4831-b08b-73619ae1ca48\") " pod="calico-system/calico-node-nk69b"
Apr 16 23:33:04.260202 kubelet[3456]: I0416 23:33:04.260106 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7bbddefd-573a-4831-b08b-73619ae1ca48-cni-bin-dir\") pod \"calico-node-nk69b\" (UID: \"7bbddefd-573a-4831-b08b-73619ae1ca48\") " pod="calico-system/calico-node-nk69b"
Apr 16 23:33:04.260202 kubelet[3456]: I0416 23:33:04.260141 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7bbddefd-573a-4831-b08b-73619ae1ca48-flexvol-driver-host\") pod \"calico-node-nk69b\" (UID: \"7bbddefd-573a-4831-b08b-73619ae1ca48\") " pod="calico-system/calico-node-nk69b"
Apr 16 23:33:04.260481 kubelet[3456]: I0416 23:33:04.260175 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/7bbddefd-573a-4831-b08b-73619ae1ca48-sys-fs\") pod \"calico-node-nk69b\" (UID: \"7bbddefd-573a-4831-b08b-73619ae1ca48\") " pod="calico-system/calico-node-nk69b"
Apr 16 23:33:04.260481 kubelet[3456]: I0416 23:33:04.260216 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/7bbddefd-573a-4831-b08b-73619ae1ca48-bpffs\") pod \"calico-node-nk69b\" (UID: \"7bbddefd-573a-4831-b08b-73619ae1ca48\") " pod="calico-system/calico-node-nk69b"
Apr 16 23:33:04.260481 kubelet[3456]: I0416 23:33:04.260255 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7bbddefd-573a-4831-b08b-73619ae1ca48-cni-log-dir\") pod \"calico-node-nk69b\" (UID: \"7bbddefd-573a-4831-b08b-73619ae1ca48\") " pod="calico-system/calico-node-nk69b"
Apr 16 23:33:04.260481 kubelet[3456]: I0416 23:33:04.260318 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/7bbddefd-573a-4831-b08b-73619ae1ca48-nodeproc\") pod \"calico-node-nk69b\" (UID: \"7bbddefd-573a-4831-b08b-73619ae1ca48\") " pod="calico-system/calico-node-nk69b"
Apr 16 23:33:04.260481 kubelet[3456]: I0416 23:33:04.260357 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7bbddefd-573a-4831-b08b-73619ae1ca48-policysync\") pod \"calico-node-nk69b\" (UID: \"7bbddefd-573a-4831-b08b-73619ae1ca48\") " pod="calico-system/calico-node-nk69b"
Apr 16 23:33:04.291712 kubelet[3456]: E0416 23:33:04.290711 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krskz" podUID="5e5719b3-71c2-46db-8619-93cea73547a5"
Apr 16 23:33:04.330855 containerd[2014]: time="2026-04-16T23:33:04.330702667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6798c79bb-vnpcg,Uid:7580353e-c2d7-4810-a006-372a352c18f0,Namespace:calico-system,Attempt:0,}"
Apr 16 23:33:04.364561 kubelet[3456]: I0416 23:33:04.361568 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5e5719b3-71c2-46db-8619-93cea73547a5-varrun\") pod \"csi-node-driver-krskz\" (UID: \"5e5719b3-71c2-46db-8619-93cea73547a5\") " pod="calico-system/csi-node-driver-krskz"
Apr 16 23:33:04.364561 kubelet[3456]: I0416 23:33:04.361723 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5e5719b3-71c2-46db-8619-93cea73547a5-socket-dir\") pod \"csi-node-driver-krskz\" (UID: \"5e5719b3-71c2-46db-8619-93cea73547a5\") " pod="calico-system/csi-node-driver-krskz"
Apr 16 23:33:04.364561 kubelet[3456]: I0416 23:33:04.361820 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5e5719b3-71c2-46db-8619-93cea73547a5-registration-dir\") pod \"csi-node-driver-krskz\" (UID: \"5e5719b3-71c2-46db-8619-93cea73547a5\") " pod="calico-system/csi-node-driver-krskz"
Apr 16 23:33:04.364561 kubelet[3456]: I0416 23:33:04.361856 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcv7g\" (UniqueName: \"kubernetes.io/projected/5e5719b3-71c2-46db-8619-93cea73547a5-kube-api-access-lcv7g\") pod \"csi-node-driver-krskz\" (UID: \"5e5719b3-71c2-46db-8619-93cea73547a5\") " pod="calico-system/csi-node-driver-krskz"
Apr 16 23:33:04.364561 kubelet[3456]: I0416 23:33:04.362003 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5e5719b3-71c2-46db-8619-93cea73547a5-kubelet-dir\") pod \"csi-node-driver-krskz\" (UID: \"5e5719b3-71c2-46db-8619-93cea73547a5\") " pod="calico-system/csi-node-driver-krskz"
Apr 16 23:33:04.375227 kubelet[3456]: E0416 23:33:04.374554 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:04.375940 kubelet[3456]: W0416 23:33:04.375880 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:04.376519 kubelet[3456]: E0416 23:33:04.376470 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:04.381492 kubelet[3456]: E0416 23:33:04.380784 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:04.381492 kubelet[3456]: W0416 23:33:04.381452 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:04.382139 kubelet[3456]: E0416 23:33:04.381997 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:04.391778 kubelet[3456]: E0416 23:33:04.390766 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:04.391778 kubelet[3456]: W0416 23:33:04.391005 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:04.395888 kubelet[3456]: E0416 23:33:04.394161 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:04.415713 kubelet[3456]: E0416 23:33:04.400718 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:04.415713 kubelet[3456]: W0416 23:33:04.402114 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:04.415713 kubelet[3456]: E0416 23:33:04.402277 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:04.415713 kubelet[3456]: E0416 23:33:04.407638 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:04.415713 kubelet[3456]: W0416 23:33:04.408003 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:04.415713 kubelet[3456]: E0416 23:33:04.408490 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:04.415713 kubelet[3456]: E0416 23:33:04.412127 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:04.415713 kubelet[3456]: W0416 23:33:04.412180 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:04.415713 kubelet[3456]: E0416 23:33:04.412213 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:04.416453 kubelet[3456]: E0416 23:33:04.416012 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:04.416453 kubelet[3456]: W0416 23:33:04.416037 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:04.416453 kubelet[3456]: E0416 23:33:04.416281 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:04.421035 kubelet[3456]: E0416 23:33:04.420100 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:04.421449 kubelet[3456]: W0416 23:33:04.421399 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:04.422115 kubelet[3456]: E0416 23:33:04.421783 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:04.426378 kubelet[3456]: E0416 23:33:04.424954 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:04.426378 kubelet[3456]: W0416 23:33:04.425007 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:04.426378 kubelet[3456]: E0416 23:33:04.425041 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:04.426613 kubelet[3456]: E0416 23:33:04.426508 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:04.426613 kubelet[3456]: W0416 23:33:04.426559 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:04.426613 kubelet[3456]: E0416 23:33:04.426589 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:04.429417 kubelet[3456]: E0416 23:33:04.427591 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:04.429567 kubelet[3456]: W0416 23:33:04.429435 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:04.429567 kubelet[3456]: E0416 23:33:04.429473 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:04.436370 kubelet[3456]: E0416 23:33:04.429932 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:04.436370 kubelet[3456]: W0416 23:33:04.429964 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:04.436370 kubelet[3456]: E0416 23:33:04.429989 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:04.436370 kubelet[3456]: E0416 23:33:04.432445 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:04.436370 kubelet[3456]: W0416 23:33:04.432475 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:04.436370 kubelet[3456]: E0416 23:33:04.432506 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:04.436370 kubelet[3456]: E0416 23:33:04.432996 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:04.436370 kubelet[3456]: W0416 23:33:04.433015 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:04.436370 kubelet[3456]: E0416 23:33:04.433062 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:04.436370 kubelet[3456]: E0416 23:33:04.434514 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.436981 kubelet[3456]: W0416 23:33:04.434540 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.436981 kubelet[3456]: E0416 23:33:04.434591 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:04.449758 containerd[2014]: time="2026-04-16T23:33:04.449690048Z" level=info msg="connecting to shim 121cca85a1eb512ed923e1cc44dd98d561ad5e7d3a5faff1f807f4c0714b5dcf" address="unix:///run/containerd/s/e4d8f523d17aefc4e0a5311b9c0e54188c104cfde4b1ccf7219ceb9dae7134e0" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:33:04.465488 kubelet[3456]: E0416 23:33:04.465424 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.465488 kubelet[3456]: W0416 23:33:04.465459 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.466342 kubelet[3456]: E0416 23:33:04.465494 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:04.470532 kubelet[3456]: E0416 23:33:04.470470 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.471033 kubelet[3456]: W0416 23:33:04.470770 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.471033 kubelet[3456]: E0416 23:33:04.470819 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:04.473590 kubelet[3456]: E0416 23:33:04.473539 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.473590 kubelet[3456]: W0416 23:33:04.473576 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.473784 kubelet[3456]: E0416 23:33:04.473609 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:04.477438 kubelet[3456]: E0416 23:33:04.477391 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.477438 kubelet[3456]: W0416 23:33:04.477433 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.477656 kubelet[3456]: E0416 23:33:04.477466 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:04.479553 kubelet[3456]: E0416 23:33:04.479496 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.479553 kubelet[3456]: W0416 23:33:04.479534 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.479822 kubelet[3456]: E0416 23:33:04.479566 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:04.485720 kubelet[3456]: E0416 23:33:04.485631 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.485720 kubelet[3456]: W0416 23:33:04.485670 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.485720 kubelet[3456]: E0416 23:33:04.485707 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:04.487502 kubelet[3456]: E0416 23:33:04.487409 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.487502 kubelet[3456]: W0416 23:33:04.487447 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.487502 kubelet[3456]: E0416 23:33:04.487478 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:04.488901 kubelet[3456]: E0416 23:33:04.488738 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.488901 kubelet[3456]: W0416 23:33:04.488794 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.488901 kubelet[3456]: E0416 23:33:04.488826 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:04.491342 kubelet[3456]: E0416 23:33:04.491251 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.491342 kubelet[3456]: W0416 23:33:04.491310 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.491342 kubelet[3456]: E0416 23:33:04.491346 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:04.494423 kubelet[3456]: E0416 23:33:04.493628 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.494423 kubelet[3456]: W0416 23:33:04.493670 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.494423 kubelet[3456]: E0416 23:33:04.493705 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:04.494890 kubelet[3456]: E0416 23:33:04.494469 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.494890 kubelet[3456]: W0416 23:33:04.494495 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.494890 kubelet[3456]: E0416 23:33:04.494524 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:04.496957 kubelet[3456]: E0416 23:33:04.496487 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.496957 kubelet[3456]: W0416 23:33:04.496529 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.496957 kubelet[3456]: E0416 23:33:04.496564 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:04.497696 kubelet[3456]: E0416 23:33:04.497654 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.497696 kubelet[3456]: W0416 23:33:04.497691 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.497878 kubelet[3456]: E0416 23:33:04.497723 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:04.498805 kubelet[3456]: E0416 23:33:04.498759 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.498805 kubelet[3456]: W0416 23:33:04.498796 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.499341 kubelet[3456]: E0416 23:33:04.498825 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:04.500373 kubelet[3456]: E0416 23:33:04.500071 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.500373 kubelet[3456]: W0416 23:33:04.500360 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.500551 kubelet[3456]: E0416 23:33:04.500394 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:04.501679 kubelet[3456]: E0416 23:33:04.501623 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.501679 kubelet[3456]: W0416 23:33:04.501662 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.502430 kubelet[3456]: E0416 23:33:04.501695 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:04.502950 kubelet[3456]: E0416 23:33:04.502869 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.502950 kubelet[3456]: W0416 23:33:04.502907 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.502950 kubelet[3456]: E0416 23:33:04.502940 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:04.504529 kubelet[3456]: E0416 23:33:04.504460 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.504529 kubelet[3456]: W0416 23:33:04.504499 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.504529 kubelet[3456]: E0416 23:33:04.504532 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:04.505957 kubelet[3456]: E0416 23:33:04.505904 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.505957 kubelet[3456]: W0416 23:33:04.505943 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.506105 kubelet[3456]: E0416 23:33:04.505976 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:04.509320 kubelet[3456]: E0416 23:33:04.508684 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.509320 kubelet[3456]: W0416 23:33:04.508723 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.509320 kubelet[3456]: E0416 23:33:04.508770 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:04.511134 kubelet[3456]: E0416 23:33:04.511082 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.511134 kubelet[3456]: W0416 23:33:04.511123 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.511421 kubelet[3456]: E0416 23:33:04.511156 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:04.514536 kubelet[3456]: E0416 23:33:04.514474 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.514536 kubelet[3456]: W0416 23:33:04.514513 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.514739 kubelet[3456]: E0416 23:33:04.514546 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:04.516233 kubelet[3456]: E0416 23:33:04.516102 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.516233 kubelet[3456]: W0416 23:33:04.516145 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.516233 kubelet[3456]: E0416 23:33:04.516178 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:04.517576 kubelet[3456]: E0416 23:33:04.517516 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.517576 kubelet[3456]: W0416 23:33:04.517554 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.517758 kubelet[3456]: E0416 23:33:04.517587 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:04.519340 kubelet[3456]: E0416 23:33:04.519029 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.519340 kubelet[3456]: W0416 23:33:04.519070 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.519340 kubelet[3456]: E0416 23:33:04.519104 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:04.536984 kubelet[3456]: E0416 23:33:04.536722 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:04.536984 kubelet[3456]: W0416 23:33:04.536772 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:04.536984 kubelet[3456]: E0416 23:33:04.536805 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:04.538655 containerd[2014]: time="2026-04-16T23:33:04.538586192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nk69b,Uid:7bbddefd-573a-4831-b08b-73619ae1ca48,Namespace:calico-system,Attempt:0,}" Apr 16 23:33:04.541875 systemd[1]: Started cri-containerd-121cca85a1eb512ed923e1cc44dd98d561ad5e7d3a5faff1f807f4c0714b5dcf.scope - libcontainer container 121cca85a1eb512ed923e1cc44dd98d561ad5e7d3a5faff1f807f4c0714b5dcf. Apr 16 23:33:04.577834 containerd[2014]: time="2026-04-16T23:33:04.577620716Z" level=info msg="connecting to shim cd4fb82ad3701bee5602b575030a266f5e53ca55289ed135ba7c6c2fa808e501" address="unix:///run/containerd/s/591a6c5431c07c122ff5b5aa682892927de93d799cb40a7d25cbd2ecf4b25897" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:33:04.634696 systemd[1]: Started cri-containerd-cd4fb82ad3701bee5602b575030a266f5e53ca55289ed135ba7c6c2fa808e501.scope - libcontainer container cd4fb82ad3701bee5602b575030a266f5e53ca55289ed135ba7c6c2fa808e501. 
Apr 16 23:33:04.665587 containerd[2014]: time="2026-04-16T23:33:04.665470017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6798c79bb-vnpcg,Uid:7580353e-c2d7-4810-a006-372a352c18f0,Namespace:calico-system,Attempt:0,} returns sandbox id \"121cca85a1eb512ed923e1cc44dd98d561ad5e7d3a5faff1f807f4c0714b5dcf\"" Apr 16 23:33:04.669665 containerd[2014]: time="2026-04-16T23:33:04.669596673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 16 23:33:04.722542 containerd[2014]: time="2026-04-16T23:33:04.722467665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nk69b,Uid:7bbddefd-573a-4831-b08b-73619ae1ca48,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd4fb82ad3701bee5602b575030a266f5e53ca55289ed135ba7c6c2fa808e501\"" Apr 16 23:33:05.774767 kubelet[3456]: E0416 23:33:05.771016 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krskz" podUID="5e5719b3-71c2-46db-8619-93cea73547a5" Apr 16 23:33:05.909148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4209973157.mount: Deactivated successfully. 
Apr 16 23:33:06.801471 containerd[2014]: time="2026-04-16T23:33:06.801400140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:06.803147 containerd[2014]: time="2026-04-16T23:33:06.803084052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Apr 16 23:33:06.803421 containerd[2014]: time="2026-04-16T23:33:06.803385576Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:06.806875 containerd[2014]: time="2026-04-16T23:33:06.806790516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:06.808528 containerd[2014]: time="2026-04-16T23:33:06.808487868Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.138831831s" Apr 16 23:33:06.808685 containerd[2014]: time="2026-04-16T23:33:06.808655556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 16 23:33:06.811595 containerd[2014]: time="2026-04-16T23:33:06.811437012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 16 23:33:06.839947 containerd[2014]: time="2026-04-16T23:33:06.839897796Z" level=info msg="CreateContainer within sandbox \"121cca85a1eb512ed923e1cc44dd98d561ad5e7d3a5faff1f807f4c0714b5dcf\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 16 23:33:06.853579 containerd[2014]: time="2026-04-16T23:33:06.853522872Z" level=info msg="Container 452167be73bb085d1499e414b0422493ad5e7e5db0d14ea3127afe3822f25505: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:06.866347 containerd[2014]: time="2026-04-16T23:33:06.866262684Z" level=info msg="CreateContainer within sandbox \"121cca85a1eb512ed923e1cc44dd98d561ad5e7d3a5faff1f807f4c0714b5dcf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"452167be73bb085d1499e414b0422493ad5e7e5db0d14ea3127afe3822f25505\"" Apr 16 23:33:06.867696 containerd[2014]: time="2026-04-16T23:33:06.867560676Z" level=info msg="StartContainer for \"452167be73bb085d1499e414b0422493ad5e7e5db0d14ea3127afe3822f25505\"" Apr 16 23:33:06.870931 containerd[2014]: time="2026-04-16T23:33:06.870752508Z" level=info msg="connecting to shim 452167be73bb085d1499e414b0422493ad5e7e5db0d14ea3127afe3822f25505" address="unix:///run/containerd/s/e4d8f523d17aefc4e0a5311b9c0e54188c104cfde4b1ccf7219ceb9dae7134e0" protocol=ttrpc version=3 Apr 16 23:33:06.916691 systemd[1]: Started cri-containerd-452167be73bb085d1499e414b0422493ad5e7e5db0d14ea3127afe3822f25505.scope - libcontainer container 452167be73bb085d1499e414b0422493ad5e7e5db0d14ea3127afe3822f25505. 
Apr 16 23:33:07.014030 containerd[2014]: time="2026-04-16T23:33:07.013896957Z" level=info msg="StartContainer for \"452167be73bb085d1499e414b0422493ad5e7e5db0d14ea3127afe3822f25505\" returns successfully" Apr 16 23:33:07.771214 kubelet[3456]: E0416 23:33:07.769872 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krskz" podUID="5e5719b3-71c2-46db-8619-93cea73547a5" Apr 16 23:33:07.995164 containerd[2014]: time="2026-04-16T23:33:07.995085061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:07.997261 containerd[2014]: time="2026-04-16T23:33:07.996938497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Apr 16 23:33:07.998564 containerd[2014]: time="2026-04-16T23:33:07.998281201Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:08.011332 containerd[2014]: time="2026-04-16T23:33:08.010409014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:08.014599 containerd[2014]: time="2026-04-16T23:33:08.014520646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.202020254s" Apr 16 23:33:08.014750 containerd[2014]: time="2026-04-16T23:33:08.014700826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 16 23:33:08.024641 containerd[2014]: time="2026-04-16T23:33:08.024462430Z" level=info msg="CreateContainer within sandbox \"cd4fb82ad3701bee5602b575030a266f5e53ca55289ed135ba7c6c2fa808e501\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 16 23:33:08.028773 kubelet[3456]: I0416 23:33:08.028663 3456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6798c79bb-vnpcg" podStartSLOduration=2.887483219 podStartE2EDuration="5.028638322s" podCreationTimestamp="2026-04-16 23:33:03 +0000 UTC" firstStartedPulling="2026-04-16 23:33:04.668891337 +0000 UTC m=+29.589190552" lastFinishedPulling="2026-04-16 23:33:06.81004644 +0000 UTC m=+31.730345655" observedRunningTime="2026-04-16 23:33:08.025119238 +0000 UTC m=+32.945418537" watchObservedRunningTime="2026-04-16 23:33:08.028638322 +0000 UTC m=+32.948937537" Apr 16 23:33:08.047847 kubelet[3456]: E0416 23:33:08.047796 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.047847 kubelet[3456]: W0416 23:33:08.047835 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.048078 kubelet[3456]: E0416 23:33:08.047868 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.050616 kubelet[3456]: E0416 23:33:08.050567 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.050766 kubelet[3456]: W0416 23:33:08.050606 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.050766 kubelet[3456]: E0416 23:33:08.050682 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:08.050955 containerd[2014]: time="2026-04-16T23:33:08.050891302Z" level=info msg="Container 8bfe624bf97a0bcbe2532bf11708520a38a38e33589787ab99e4eb7d60f5a0f1: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:08.054743 kubelet[3456]: E0416 23:33:08.054684 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.054743 kubelet[3456]: W0416 23:33:08.054722 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.054944 kubelet[3456]: E0416 23:33:08.054755 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.056214 kubelet[3456]: E0416 23:33:08.055144 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.056214 kubelet[3456]: W0416 23:33:08.055171 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.056214 kubelet[3456]: E0416 23:33:08.055194 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:08.057779 kubelet[3456]: E0416 23:33:08.057662 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.057779 kubelet[3456]: W0416 23:33:08.057698 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.057779 kubelet[3456]: E0416 23:33:08.057731 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.058386 kubelet[3456]: E0416 23:33:08.058061 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.058386 kubelet[3456]: W0416 23:33:08.058091 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.058386 kubelet[3456]: E0416 23:33:08.058114 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:08.058594 kubelet[3456]: E0416 23:33:08.058445 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.058594 kubelet[3456]: W0416 23:33:08.058462 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.058594 kubelet[3456]: E0416 23:33:08.058482 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.059856 kubelet[3456]: E0416 23:33:08.059791 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.059856 kubelet[3456]: W0416 23:33:08.059824 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.059856 kubelet[3456]: E0416 23:33:08.059854 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:08.060341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1287345117.mount: Deactivated successfully. Apr 16 23:33:08.062095 kubelet[3456]: E0416 23:33:08.061825 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.062095 kubelet[3456]: W0416 23:33:08.061862 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.062095 kubelet[3456]: E0416 23:33:08.061921 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.062807 kubelet[3456]: E0416 23:33:08.062762 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.062902 kubelet[3456]: W0416 23:33:08.062801 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.062902 kubelet[3456]: E0416 23:33:08.062866 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:08.064019 kubelet[3456]: E0416 23:33:08.063345 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.064019 kubelet[3456]: W0416 23:33:08.063388 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.064019 kubelet[3456]: E0416 23:33:08.063415 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.064019 kubelet[3456]: E0416 23:33:08.063868 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.064019 kubelet[3456]: W0416 23:33:08.063889 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.064019 kubelet[3456]: E0416 23:33:08.063909 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:08.064493 kubelet[3456]: E0416 23:33:08.064368 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.064493 kubelet[3456]: W0416 23:33:08.064413 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.064493 kubelet[3456]: E0416 23:33:08.064441 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.065372 kubelet[3456]: E0416 23:33:08.064888 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.065372 kubelet[3456]: W0416 23:33:08.064944 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.065372 kubelet[3456]: E0416 23:33:08.064969 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:08.065924 kubelet[3456]: E0416 23:33:08.065472 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.065924 kubelet[3456]: W0416 23:33:08.065517 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.065924 kubelet[3456]: E0416 23:33:08.065545 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.076548 containerd[2014]: time="2026-04-16T23:33:08.076497862Z" level=info msg="CreateContainer within sandbox \"cd4fb82ad3701bee5602b575030a266f5e53ca55289ed135ba7c6c2fa808e501\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8bfe624bf97a0bcbe2532bf11708520a38a38e33589787ab99e4eb7d60f5a0f1\"" Apr 16 23:33:08.077399 containerd[2014]: time="2026-04-16T23:33:08.077361862Z" level=info msg="StartContainer for \"8bfe624bf97a0bcbe2532bf11708520a38a38e33589787ab99e4eb7d60f5a0f1\"" Apr 16 23:33:08.082481 containerd[2014]: time="2026-04-16T23:33:08.082399210Z" level=info msg="connecting to shim 8bfe624bf97a0bcbe2532bf11708520a38a38e33589787ab99e4eb7d60f5a0f1" address="unix:///run/containerd/s/591a6c5431c07c122ff5b5aa682892927de93d799cb40a7d25cbd2ecf4b25897" protocol=ttrpc version=3 Apr 16 23:33:08.121622 systemd[1]: Started cri-containerd-8bfe624bf97a0bcbe2532bf11708520a38a38e33589787ab99e4eb7d60f5a0f1.scope - libcontainer container 8bfe624bf97a0bcbe2532bf11708520a38a38e33589787ab99e4eb7d60f5a0f1. Apr 16 23:33:08.130278 kubelet[3456]: E0416 23:33:08.130236 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.130278 kubelet[3456]: W0416 23:33:08.130272 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.130278 kubelet[3456]: E0416 23:33:08.130319 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.130968 kubelet[3456]: E0416 23:33:08.130934 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.130968 kubelet[3456]: W0416 23:33:08.130965 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.131343 kubelet[3456]: E0416 23:33:08.130995 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:08.131625 kubelet[3456]: E0416 23:33:08.131574 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.131828 kubelet[3456]: W0416 23:33:08.131710 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.131828 kubelet[3456]: E0416 23:33:08.131742 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.132242 kubelet[3456]: E0416 23:33:08.132206 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.132242 kubelet[3456]: W0416 23:33:08.132236 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.132536 kubelet[3456]: E0416 23:33:08.132259 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:08.133241 kubelet[3456]: E0416 23:33:08.133200 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.133241 kubelet[3456]: W0416 23:33:08.133234 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.133606 kubelet[3456]: E0416 23:33:08.133264 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.134105 kubelet[3456]: E0416 23:33:08.134045 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.134105 kubelet[3456]: W0416 23:33:08.134074 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.134429 kubelet[3456]: E0416 23:33:08.134277 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:08.134910 kubelet[3456]: E0416 23:33:08.134851 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.134910 kubelet[3456]: W0416 23:33:08.134874 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.135144 kubelet[3456]: E0416 23:33:08.135068 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.135704 kubelet[3456]: E0416 23:33:08.135609 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.135704 kubelet[3456]: W0416 23:33:08.135632 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.135704 kubelet[3456]: E0416 23:33:08.135655 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:08.136263 kubelet[3456]: E0416 23:33:08.136237 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.136491 kubelet[3456]: W0416 23:33:08.136380 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.136491 kubelet[3456]: E0416 23:33:08.136408 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.137136 kubelet[3456]: E0416 23:33:08.137007 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.137136 kubelet[3456]: W0416 23:33:08.137029 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.137136 kubelet[3456]: E0416 23:33:08.137051 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:08.137674 kubelet[3456]: E0416 23:33:08.137653 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.137979 kubelet[3456]: W0416 23:33:08.137773 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.137979 kubelet[3456]: E0416 23:33:08.137805 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.138233 kubelet[3456]: E0416 23:33:08.138213 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.138418 kubelet[3456]: W0416 23:33:08.138394 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.138556 kubelet[3456]: E0416 23:33:08.138533 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:08.139279 kubelet[3456]: E0416 23:33:08.139109 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.139279 kubelet[3456]: W0416 23:33:08.139136 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.139279 kubelet[3456]: E0416 23:33:08.139161 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.140251 kubelet[3456]: E0416 23:33:08.140197 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.140550 kubelet[3456]: W0416 23:33:08.140359 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.140550 kubelet[3456]: E0416 23:33:08.140393 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:08.141317 kubelet[3456]: E0416 23:33:08.141060 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.141317 kubelet[3456]: W0416 23:33:08.141094 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.141317 kubelet[3456]: E0416 23:33:08.141122 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.141862 kubelet[3456]: E0416 23:33:08.141792 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.141862 kubelet[3456]: W0416 23:33:08.141825 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.141862 kubelet[3456]: E0416 23:33:08.141854 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:08.142754 kubelet[3456]: E0416 23:33:08.142453 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.142754 kubelet[3456]: W0416 23:33:08.142476 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.142754 kubelet[3456]: E0416 23:33:08.142502 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:08.143847 kubelet[3456]: E0416 23:33:08.143784 3456 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:08.144104 kubelet[3456]: W0416 23:33:08.144017 3456 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:08.144104 kubelet[3456]: E0416 23:33:08.144056 3456 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:08.231020 containerd[2014]: time="2026-04-16T23:33:08.230865815Z" level=info msg="StartContainer for \"8bfe624bf97a0bcbe2532bf11708520a38a38e33589787ab99e4eb7d60f5a0f1\" returns successfully" Apr 16 23:33:08.260647 systemd[1]: cri-containerd-8bfe624bf97a0bcbe2532bf11708520a38a38e33589787ab99e4eb7d60f5a0f1.scope: Deactivated successfully. Apr 16 23:33:08.267850 containerd[2014]: time="2026-04-16T23:33:08.267759503Z" level=info msg="received container exit event container_id:\"8bfe624bf97a0bcbe2532bf11708520a38a38e33589787ab99e4eb7d60f5a0f1\" id:\"8bfe624bf97a0bcbe2532bf11708520a38a38e33589787ab99e4eb7d60f5a0f1\" pid:4166 exited_at:{seconds:1776382388 nanos:266953067}" Apr 16 23:33:08.317158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bfe624bf97a0bcbe2532bf11708520a38a38e33589787ab99e4eb7d60f5a0f1-rootfs.mount: Deactivated successfully. 
Apr 16 23:33:09.012323 kubelet[3456]: I0416 23:33:09.011186 3456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:33:09.017255 containerd[2014]: time="2026-04-16T23:33:09.017179955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 16 23:33:09.770375 kubelet[3456]: E0416 23:33:09.769964 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krskz" podUID="5e5719b3-71c2-46db-8619-93cea73547a5" Apr 16 23:33:11.770313 kubelet[3456]: E0416 23:33:11.770184 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krskz" podUID="5e5719b3-71c2-46db-8619-93cea73547a5" Apr 16 23:33:13.769941 kubelet[3456]: E0416 23:33:13.769868 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krskz" podUID="5e5719b3-71c2-46db-8619-93cea73547a5" Apr 16 23:33:15.082628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191121053.mount: Deactivated successfully. 
Apr 16 23:33:15.140320 containerd[2014]: time="2026-04-16T23:33:15.139953149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:15.141330 containerd[2014]: time="2026-04-16T23:33:15.141111929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 16 23:33:15.142830 containerd[2014]: time="2026-04-16T23:33:15.142771133Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:15.148072 containerd[2014]: time="2026-04-16T23:33:15.147987497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:15.149222 containerd[2014]: time="2026-04-16T23:33:15.149168105Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.131632578s" Apr 16 23:33:15.149347 containerd[2014]: time="2026-04-16T23:33:15.149222369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 16 23:33:15.158326 containerd[2014]: time="2026-04-16T23:33:15.157528985Z" level=info msg="CreateContainer within sandbox \"cd4fb82ad3701bee5602b575030a266f5e53ca55289ed135ba7c6c2fa808e501\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 16 23:33:15.171876 containerd[2014]: time="2026-04-16T23:33:15.171825449Z" level=info msg="Container 
9d5de2397bf55180471f92993ee4e4fb3922757e7a60a7e4f4def81218fb837b: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:15.182733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3373696925.mount: Deactivated successfully. Apr 16 23:33:15.192938 containerd[2014]: time="2026-04-16T23:33:15.192888017Z" level=info msg="CreateContainer within sandbox \"cd4fb82ad3701bee5602b575030a266f5e53ca55289ed135ba7c6c2fa808e501\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"9d5de2397bf55180471f92993ee4e4fb3922757e7a60a7e4f4def81218fb837b\"" Apr 16 23:33:15.194145 containerd[2014]: time="2026-04-16T23:33:15.194066585Z" level=info msg="StartContainer for \"9d5de2397bf55180471f92993ee4e4fb3922757e7a60a7e4f4def81218fb837b\"" Apr 16 23:33:15.198268 containerd[2014]: time="2026-04-16T23:33:15.198204053Z" level=info msg="connecting to shim 9d5de2397bf55180471f92993ee4e4fb3922757e7a60a7e4f4def81218fb837b" address="unix:///run/containerd/s/591a6c5431c07c122ff5b5aa682892927de93d799cb40a7d25cbd2ecf4b25897" protocol=ttrpc version=3 Apr 16 23:33:15.236595 systemd[1]: Started cri-containerd-9d5de2397bf55180471f92993ee4e4fb3922757e7a60a7e4f4def81218fb837b.scope - libcontainer container 9d5de2397bf55180471f92993ee4e4fb3922757e7a60a7e4f4def81218fb837b. Apr 16 23:33:15.351130 containerd[2014]: time="2026-04-16T23:33:15.350807490Z" level=info msg="StartContainer for \"9d5de2397bf55180471f92993ee4e4fb3922757e7a60a7e4f4def81218fb837b\" returns successfully" Apr 16 23:33:15.555526 systemd[1]: cri-containerd-9d5de2397bf55180471f92993ee4e4fb3922757e7a60a7e4f4def81218fb837b.scope: Deactivated successfully. 
Apr 16 23:33:15.558796 containerd[2014]: time="2026-04-16T23:33:15.558736435Z" level=info msg="received container exit event container_id:\"9d5de2397bf55180471f92993ee4e4fb3922757e7a60a7e4f4def81218fb837b\" id:\"9d5de2397bf55180471f92993ee4e4fb3922757e7a60a7e4f4def81218fb837b\" pid:4244 exited_at:{seconds:1776382395 nanos:558132199}" Apr 16 23:33:15.770161 kubelet[3456]: E0416 23:33:15.769666 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krskz" podUID="5e5719b3-71c2-46db-8619-93cea73547a5" Apr 16 23:33:16.081580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d5de2397bf55180471f92993ee4e4fb3922757e7a60a7e4f4def81218fb837b-rootfs.mount: Deactivated successfully. Apr 16 23:33:17.050864 containerd[2014]: time="2026-04-16T23:33:17.050794254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 16 23:33:17.775130 kubelet[3456]: E0416 23:33:17.774522 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krskz" podUID="5e5719b3-71c2-46db-8619-93cea73547a5" Apr 16 23:33:19.771845 kubelet[3456]: E0416 23:33:19.770904 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krskz" podUID="5e5719b3-71c2-46db-8619-93cea73547a5" Apr 16 23:33:19.829317 containerd[2014]: time="2026-04-16T23:33:19.828939036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:33:19.831603 containerd[2014]: time="2026-04-16T23:33:19.831485232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216"
Apr 16 23:33:19.832949 containerd[2014]: time="2026-04-16T23:33:19.832729992Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:33:19.838316 containerd[2014]: time="2026-04-16T23:33:19.837118752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:33:19.838616 containerd[2014]: time="2026-04-16T23:33:19.838568088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.78771555s"
Apr 16 23:33:19.838737 containerd[2014]: time="2026-04-16T23:33:19.838710012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\""
Apr 16 23:33:19.845494 containerd[2014]: time="2026-04-16T23:33:19.845443548Z" level=info msg="CreateContainer within sandbox \"cd4fb82ad3701bee5602b575030a266f5e53ca55289ed135ba7c6c2fa808e501\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 16 23:33:19.858744 containerd[2014]: time="2026-04-16T23:33:19.858677784Z" level=info msg="Container 6426aa14fe095178a31508d6b4dd5cc69b5407aef850b73299bb0824e7c5fb66: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:33:19.868629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3419040659.mount: Deactivated successfully.
Apr 16 23:33:19.880669 containerd[2014]: time="2026-04-16T23:33:19.880517568Z" level=info msg="CreateContainer within sandbox \"cd4fb82ad3701bee5602b575030a266f5e53ca55289ed135ba7c6c2fa808e501\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6426aa14fe095178a31508d6b4dd5cc69b5407aef850b73299bb0824e7c5fb66\""
Apr 16 23:33:19.881869 containerd[2014]: time="2026-04-16T23:33:19.881801832Z" level=info msg="StartContainer for \"6426aa14fe095178a31508d6b4dd5cc69b5407aef850b73299bb0824e7c5fb66\""
Apr 16 23:33:19.885325 containerd[2014]: time="2026-04-16T23:33:19.885230460Z" level=info msg="connecting to shim 6426aa14fe095178a31508d6b4dd5cc69b5407aef850b73299bb0824e7c5fb66" address="unix:///run/containerd/s/591a6c5431c07c122ff5b5aa682892927de93d799cb40a7d25cbd2ecf4b25897" protocol=ttrpc version=3
Apr 16 23:33:19.939608 systemd[1]: Started cri-containerd-6426aa14fe095178a31508d6b4dd5cc69b5407aef850b73299bb0824e7c5fb66.scope - libcontainer container 6426aa14fe095178a31508d6b4dd5cc69b5407aef850b73299bb0824e7c5fb66.
Apr 16 23:33:20.050054 containerd[2014]: time="2026-04-16T23:33:20.049915209Z" level=info msg="StartContainer for \"6426aa14fe095178a31508d6b4dd5cc69b5407aef850b73299bb0824e7c5fb66\" returns successfully"
Apr 16 23:33:21.770845 kubelet[3456]: E0416 23:33:21.770772 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krskz" podUID="5e5719b3-71c2-46db-8619-93cea73547a5"
Apr 16 23:33:21.800588 containerd[2014]: time="2026-04-16T23:33:21.800516858Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 16 23:33:21.805847 systemd[1]: cri-containerd-6426aa14fe095178a31508d6b4dd5cc69b5407aef850b73299bb0824e7c5fb66.scope: Deactivated successfully.
Apr 16 23:33:21.806665 systemd[1]: cri-containerd-6426aa14fe095178a31508d6b4dd5cc69b5407aef850b73299bb0824e7c5fb66.scope: Consumed 946ms CPU time, 181.6M memory peak, 1.8M read from disk, 171.3M written to disk.
Apr 16 23:33:21.812328 containerd[2014]: time="2026-04-16T23:33:21.812127878Z" level=info msg="received container exit event container_id:\"6426aa14fe095178a31508d6b4dd5cc69b5407aef850b73299bb0824e7c5fb66\" id:\"6426aa14fe095178a31508d6b4dd5cc69b5407aef850b73299bb0824e7c5fb66\" pid:4301 exited_at:{seconds:1776382401 nanos:811781978}"
Apr 16 23:33:21.854948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6426aa14fe095178a31508d6b4dd5cc69b5407aef850b73299bb0824e7c5fb66-rootfs.mount: Deactivated successfully.
Apr 16 23:33:21.876179 kubelet[3456]: I0416 23:33:21.876118 3456 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 16 23:33:21.960657 systemd[1]: Created slice kubepods-burstable-pod74ea04a2_eb40_4e5c_a7e9_f65a5f7a4bed.slice - libcontainer container kubepods-burstable-pod74ea04a2_eb40_4e5c_a7e9_f65a5f7a4bed.slice.
Apr 16 23:33:21.989383 systemd[1]: Created slice kubepods-burstable-pod12d12a87_eb98_402f_ac50_574c7d1f3b7f.slice - libcontainer container kubepods-burstable-pod12d12a87_eb98_402f_ac50_574c7d1f3b7f.slice.
Apr 16 23:33:22.008041 systemd[1]: Created slice kubepods-besteffort-poddd66b20b_02d7_4907_8e47_757f4d8364ff.slice - libcontainer container kubepods-besteffort-poddd66b20b_02d7_4907_8e47_757f4d8364ff.slice.
Apr 16 23:33:22.024451 systemd[1]: Created slice kubepods-besteffort-podc6915744_c93f_4fa3_b9f6_d711cdb2d534.slice - libcontainer container kubepods-besteffort-podc6915744_c93f_4fa3_b9f6_d711cdb2d534.slice.
Apr 16 23:33:22.048990 kubelet[3456]: I0416 23:33:22.048933 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9bc6d820-19b1-4a25-9507-b10429f10481-calico-apiserver-certs\") pod \"calico-apiserver-744c4c5668-n7rcp\" (UID: \"9bc6d820-19b1-4a25-9507-b10429f10481\") " pod="calico-system/calico-apiserver-744c4c5668-n7rcp"
Apr 16 23:33:22.049157 kubelet[3456]: I0416 23:33:22.049022 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqnw6\" (UniqueName: \"kubernetes.io/projected/74ea04a2-eb40-4e5c-a7e9-f65a5f7a4bed-kube-api-access-sqnw6\") pod \"coredns-674b8bbfcf-p2qcr\" (UID: \"74ea04a2-eb40-4e5c-a7e9-f65a5f7a4bed\") " pod="kube-system/coredns-674b8bbfcf-p2qcr"
Apr 16 23:33:22.049157 kubelet[3456]: I0416 23:33:22.049073 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwl4m\" (UniqueName: \"kubernetes.io/projected/dd66b20b-02d7-4907-8e47-757f4d8364ff-kube-api-access-nwl4m\") pod \"calico-kube-controllers-569c4bbfd5-xdx25\" (UID: \"dd66b20b-02d7-4907-8e47-757f4d8364ff\") " pod="calico-system/calico-kube-controllers-569c4bbfd5-xdx25"
Apr 16 23:33:22.049157 kubelet[3456]: I0416 23:33:22.049119 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd66b20b-02d7-4907-8e47-757f4d8364ff-tigera-ca-bundle\") pod \"calico-kube-controllers-569c4bbfd5-xdx25\" (UID: \"dd66b20b-02d7-4907-8e47-757f4d8364ff\") " pod="calico-system/calico-kube-controllers-569c4bbfd5-xdx25"
Apr 16 23:33:22.050381 kubelet[3456]: I0416 23:33:22.049156 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74ea04a2-eb40-4e5c-a7e9-f65a5f7a4bed-config-volume\") pod \"coredns-674b8bbfcf-p2qcr\" (UID: \"74ea04a2-eb40-4e5c-a7e9-f65a5f7a4bed\") " pod="kube-system/coredns-674b8bbfcf-p2qcr"
Apr 16 23:33:22.050381 kubelet[3456]: I0416 23:33:22.049192 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfrqn\" (UniqueName: \"kubernetes.io/projected/12d12a87-eb98-402f-ac50-574c7d1f3b7f-kube-api-access-nfrqn\") pod \"coredns-674b8bbfcf-gvwh7\" (UID: \"12d12a87-eb98-402f-ac50-574c7d1f3b7f\") " pod="kube-system/coredns-674b8bbfcf-gvwh7"
Apr 16 23:33:22.050381 kubelet[3456]: I0416 23:33:22.049230 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r89fv\" (UniqueName: \"kubernetes.io/projected/9bc6d820-19b1-4a25-9507-b10429f10481-kube-api-access-r89fv\") pod \"calico-apiserver-744c4c5668-n7rcp\" (UID: \"9bc6d820-19b1-4a25-9507-b10429f10481\") " pod="calico-system/calico-apiserver-744c4c5668-n7rcp"
Apr 16 23:33:22.050381 kubelet[3456]: I0416 23:33:22.049273 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12d12a87-eb98-402f-ac50-574c7d1f3b7f-config-volume\") pod \"coredns-674b8bbfcf-gvwh7\" (UID: \"12d12a87-eb98-402f-ac50-574c7d1f3b7f\") " pod="kube-system/coredns-674b8bbfcf-gvwh7"
Apr 16 23:33:22.052526 systemd[1]: Created slice kubepods-besteffort-pod9bc6d820_19b1_4a25_9507_b10429f10481.slice - libcontainer container kubepods-besteffort-pod9bc6d820_19b1_4a25_9507_b10429f10481.slice.
Apr 16 23:33:22.066064 systemd[1]: Created slice kubepods-besteffort-podee424a99_5ee7_4660_9b0b_b14d2676c736.slice - libcontainer container kubepods-besteffort-podee424a99_5ee7_4660_9b0b_b14d2676c736.slice.
Apr 16 23:33:22.086671 systemd[1]: Created slice kubepods-besteffort-pod04b00041_5135_47c6_80cb_115e4c7b10f3.slice - libcontainer container kubepods-besteffort-pod04b00041_5135_47c6_80cb_115e4c7b10f3.slice.
Apr 16 23:33:22.144490 containerd[2014]: time="2026-04-16T23:33:22.144261720Z" level=info msg="CreateContainer within sandbox \"cd4fb82ad3701bee5602b575030a266f5e53ca55289ed135ba7c6c2fa808e501\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 16 23:33:22.155386 kubelet[3456]: I0416 23:33:22.154545 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kppj9\" (UniqueName: \"kubernetes.io/projected/c6915744-c93f-4fa3-b9f6-d711cdb2d534-kube-api-access-kppj9\") pod \"goldmane-5b85766d88-xlvp6\" (UID: \"c6915744-c93f-4fa3-b9f6-d711cdb2d534\") " pod="calico-system/goldmane-5b85766d88-xlvp6"
Apr 16 23:33:22.155386 kubelet[3456]: I0416 23:33:22.154621 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/04b00041-5135-47c6-80cb-115e4c7b10f3-whisker-backend-key-pair\") pod \"whisker-bdb7f4766-fv4jl\" (UID: \"04b00041-5135-47c6-80cb-115e4c7b10f3\") " pod="calico-system/whisker-bdb7f4766-fv4jl"
Apr 16 23:33:22.155386 kubelet[3456]: I0416 23:33:22.154671 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04b00041-5135-47c6-80cb-115e4c7b10f3-whisker-ca-bundle\") pod \"whisker-bdb7f4766-fv4jl\" (UID: \"04b00041-5135-47c6-80cb-115e4c7b10f3\") " pod="calico-system/whisker-bdb7f4766-fv4jl"
Apr 16 23:33:22.155656 kubelet[3456]: I0416 23:33:22.155266 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ee424a99-5ee7-4660-9b0b-b14d2676c736-calico-apiserver-certs\") pod \"calico-apiserver-744c4c5668-pgmzq\" (UID: \"ee424a99-5ee7-4660-9b0b-b14d2676c736\") " pod="calico-system/calico-apiserver-744c4c5668-pgmzq"
Apr 16 23:33:22.155656 kubelet[3456]: I0416 23:33:22.155635 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pbd8\" (UniqueName: \"kubernetes.io/projected/ee424a99-5ee7-4660-9b0b-b14d2676c736-kube-api-access-7pbd8\") pod \"calico-apiserver-744c4c5668-pgmzq\" (UID: \"ee424a99-5ee7-4660-9b0b-b14d2676c736\") " pod="calico-system/calico-apiserver-744c4c5668-pgmzq"
Apr 16 23:33:22.156339 kubelet[3456]: I0416 23:33:22.155893 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/04b00041-5135-47c6-80cb-115e4c7b10f3-nginx-config\") pod \"whisker-bdb7f4766-fv4jl\" (UID: \"04b00041-5135-47c6-80cb-115e4c7b10f3\") " pod="calico-system/whisker-bdb7f4766-fv4jl"
Apr 16 23:33:22.156339 kubelet[3456]: I0416 23:33:22.156072 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6915744-c93f-4fa3-b9f6-d711cdb2d534-config\") pod \"goldmane-5b85766d88-xlvp6\" (UID: \"c6915744-c93f-4fa3-b9f6-d711cdb2d534\") " pod="calico-system/goldmane-5b85766d88-xlvp6"
Apr 16 23:33:22.156517 kubelet[3456]: I0416 23:33:22.156344 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-877lg\" (UniqueName: \"kubernetes.io/projected/04b00041-5135-47c6-80cb-115e4c7b10f3-kube-api-access-877lg\") pod \"whisker-bdb7f4766-fv4jl\" (UID: \"04b00041-5135-47c6-80cb-115e4c7b10f3\") " pod="calico-system/whisker-bdb7f4766-fv4jl"
Apr 16 23:33:22.157324 kubelet[3456]: I0416 23:33:22.156679 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6915744-c93f-4fa3-b9f6-d711cdb2d534-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-xlvp6\" (UID: \"c6915744-c93f-4fa3-b9f6-d711cdb2d534\") " pod="calico-system/goldmane-5b85766d88-xlvp6"
Apr 16 23:33:22.157324 kubelet[3456]: I0416 23:33:22.156798 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c6915744-c93f-4fa3-b9f6-d711cdb2d534-goldmane-key-pair\") pod \"goldmane-5b85766d88-xlvp6\" (UID: \"c6915744-c93f-4fa3-b9f6-d711cdb2d534\") " pod="calico-system/goldmane-5b85766d88-xlvp6"
Apr 16 23:33:22.186337 containerd[2014]: time="2026-04-16T23:33:22.185596068Z" level=info msg="Container a165d182152f6f7c38ae3549403c64572a2fb4e6748511adb490c6c67053661a: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:33:22.230009 containerd[2014]: time="2026-04-16T23:33:22.229254060Z" level=info msg="CreateContainer within sandbox \"cd4fb82ad3701bee5602b575030a266f5e53ca55289ed135ba7c6c2fa808e501\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a165d182152f6f7c38ae3549403c64572a2fb4e6748511adb490c6c67053661a\""
Apr 16 23:33:22.236795 containerd[2014]: time="2026-04-16T23:33:22.236549988Z" level=info msg="StartContainer for \"a165d182152f6f7c38ae3549403c64572a2fb4e6748511adb490c6c67053661a\""
Apr 16 23:33:22.249594 containerd[2014]: time="2026-04-16T23:33:22.249510072Z" level=info msg="connecting to shim a165d182152f6f7c38ae3549403c64572a2fb4e6748511adb490c6c67053661a" address="unix:///run/containerd/s/591a6c5431c07c122ff5b5aa682892927de93d799cb40a7d25cbd2ecf4b25897" protocol=ttrpc version=3
Apr 16 23:33:22.303821 containerd[2014]: time="2026-04-16T23:33:22.302589505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p2qcr,Uid:74ea04a2-eb40-4e5c-a7e9-f65a5f7a4bed,Namespace:kube-system,Attempt:0,}"
Apr 16 23:33:22.307866 systemd[1]: Started cri-containerd-a165d182152f6f7c38ae3549403c64572a2fb4e6748511adb490c6c67053661a.scope - libcontainer container a165d182152f6f7c38ae3549403c64572a2fb4e6748511adb490c6c67053661a.
Apr 16 23:33:22.314194 containerd[2014]: time="2026-04-16T23:33:22.314130049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gvwh7,Uid:12d12a87-eb98-402f-ac50-574c7d1f3b7f,Namespace:kube-system,Attempt:0,}"
Apr 16 23:33:22.319898 containerd[2014]: time="2026-04-16T23:33:22.319479145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-569c4bbfd5-xdx25,Uid:dd66b20b-02d7-4907-8e47-757f4d8364ff,Namespace:calico-system,Attempt:0,}"
Apr 16 23:33:22.370635 containerd[2014]: time="2026-04-16T23:33:22.370283377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-744c4c5668-n7rcp,Uid:9bc6d820-19b1-4a25-9507-b10429f10481,Namespace:calico-system,Attempt:0,}"
Apr 16 23:33:22.385996 containerd[2014]: time="2026-04-16T23:33:22.385419697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-744c4c5668-pgmzq,Uid:ee424a99-5ee7-4660-9b0b-b14d2676c736,Namespace:calico-system,Attempt:0,}"
Apr 16 23:33:22.398410 containerd[2014]: time="2026-04-16T23:33:22.395786329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bdb7f4766-fv4jl,Uid:04b00041-5135-47c6-80cb-115e4c7b10f3,Namespace:calico-system,Attempt:0,}"
Apr 16 23:33:22.641885 containerd[2014]: time="2026-04-16T23:33:22.640839770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xlvp6,Uid:c6915744-c93f-4fa3-b9f6-d711cdb2d534,Namespace:calico-system,Attempt:0,}"
Apr 16 23:33:22.731016 containerd[2014]: time="2026-04-16T23:33:22.730845711Z" level=info msg="StartContainer for \"a165d182152f6f7c38ae3549403c64572a2fb4e6748511adb490c6c67053661a\" returns successfully"
Apr 16 23:33:23.092492 containerd[2014]: time="2026-04-16T23:33:23.092344284Z" level=error msg="Failed to destroy network for sandbox \"8eb1118ab528b594c743ec3ac572ba4d912f9b439c658d3a2af27f151f8d5e66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:23.100690 containerd[2014]: time="2026-04-16T23:33:23.098886588Z" level=error msg="Failed to destroy network for sandbox \"5abe1d7ee663b1d716f224401dbd04b65be7855ed7e78ebc4089a992f3f80d82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:23.104199 systemd[1]: run-netns-cni\x2d56aed71c\x2d9c7c\x2d3056\x2d415d\x2dcd4ea4c7eec9.mount: Deactivated successfully.
Apr 16 23:33:23.105219 systemd[1]: run-netns-cni\x2d3c1f96b6\x2dd5a5\x2df0da\x2dcc97\x2d8d6d6cb35773.mount: Deactivated successfully.
Apr 16 23:33:23.118975 containerd[2014]: time="2026-04-16T23:33:23.118700857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-744c4c5668-n7rcp,Uid:9bc6d820-19b1-4a25-9507-b10429f10481,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5abe1d7ee663b1d716f224401dbd04b65be7855ed7e78ebc4089a992f3f80d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:23.121319 containerd[2014]: time="2026-04-16T23:33:23.121100521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bdb7f4766-fv4jl,Uid:04b00041-5135-47c6-80cb-115e4c7b10f3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb1118ab528b594c743ec3ac572ba4d912f9b439c658d3a2af27f151f8d5e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:23.124413 kubelet[3456]: E0416 23:33:23.124354 3456 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb1118ab528b594c743ec3ac572ba4d912f9b439c658d3a2af27f151f8d5e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:23.127883 kubelet[3456]: E0416 23:33:23.125079 3456 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb1118ab528b594c743ec3ac572ba4d912f9b439c658d3a2af27f151f8d5e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bdb7f4766-fv4jl"
Apr 16 23:33:23.127883 kubelet[3456]: E0416 23:33:23.127215 3456 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb1118ab528b594c743ec3ac572ba4d912f9b439c658d3a2af27f151f8d5e66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bdb7f4766-fv4jl"
Apr 16 23:33:23.127883 kubelet[3456]: E0416 23:33:23.127331 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bdb7f4766-fv4jl_calico-system(04b00041-5135-47c6-80cb-115e4c7b10f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bdb7f4766-fv4jl_calico-system(04b00041-5135-47c6-80cb-115e4c7b10f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8eb1118ab528b594c743ec3ac572ba4d912f9b439c658d3a2af27f151f8d5e66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bdb7f4766-fv4jl" podUID="04b00041-5135-47c6-80cb-115e4c7b10f3"
Apr 16 23:33:23.128231 kubelet[3456]: E0416 23:33:23.124373 3456 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5abe1d7ee663b1d716f224401dbd04b65be7855ed7e78ebc4089a992f3f80d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:23.128231 kubelet[3456]: E0416 23:33:23.127732 3456 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5abe1d7ee663b1d716f224401dbd04b65be7855ed7e78ebc4089a992f3f80d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-744c4c5668-n7rcp"
Apr 16 23:33:23.128231 kubelet[3456]: E0416 23:33:23.127773 3456 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5abe1d7ee663b1d716f224401dbd04b65be7855ed7e78ebc4089a992f3f80d82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-744c4c5668-n7rcp"
Apr 16 23:33:23.130577 kubelet[3456]: E0416 23:33:23.129082 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-744c4c5668-n7rcp_calico-system(9bc6d820-19b1-4a25-9507-b10429f10481)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-744c4c5668-n7rcp_calico-system(9bc6d820-19b1-4a25-9507-b10429f10481)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5abe1d7ee663b1d716f224401dbd04b65be7855ed7e78ebc4089a992f3f80d82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-744c4c5668-n7rcp" podUID="9bc6d820-19b1-4a25-9507-b10429f10481"
Apr 16 23:33:23.130787 containerd[2014]: time="2026-04-16T23:33:23.125811073Z" level=error msg="Failed to destroy network for sandbox \"4c5582e70afa18f841bb24e7267d4a6e0285cdc2143e682d14e84c0344bd64f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:23.135240 systemd[1]: run-netns-cni\x2de4a5790f\x2d071f\x2dcb34\x2d32be\x2d577c826028db.mount: Deactivated successfully.
Apr 16 23:33:23.145619 containerd[2014]: time="2026-04-16T23:33:23.144645853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-744c4c5668-pgmzq,Uid:ee424a99-5ee7-4660-9b0b-b14d2676c736,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c5582e70afa18f841bb24e7267d4a6e0285cdc2143e682d14e84c0344bd64f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:23.145954 kubelet[3456]: E0416 23:33:23.145499 3456 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c5582e70afa18f841bb24e7267d4a6e0285cdc2143e682d14e84c0344bd64f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:23.146240 kubelet[3456]: E0416 23:33:23.146203 3456 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c5582e70afa18f841bb24e7267d4a6e0285cdc2143e682d14e84c0344bd64f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-744c4c5668-pgmzq"
Apr 16 23:33:23.146850 kubelet[3456]: E0416 23:33:23.146667 3456 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c5582e70afa18f841bb24e7267d4a6e0285cdc2143e682d14e84c0344bd64f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-744c4c5668-pgmzq"
Apr 16 23:33:23.146850 kubelet[3456]: E0416 23:33:23.146763 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-744c4c5668-pgmzq_calico-system(ee424a99-5ee7-4660-9b0b-b14d2676c736)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-744c4c5668-pgmzq_calico-system(ee424a99-5ee7-4660-9b0b-b14d2676c736)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c5582e70afa18f841bb24e7267d4a6e0285cdc2143e682d14e84c0344bd64f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-744c4c5668-pgmzq" podUID="ee424a99-5ee7-4660-9b0b-b14d2676c736"
Apr 16 23:33:23.176123 containerd[2014]: time="2026-04-16T23:33:23.175113385Z" level=error msg="Failed to destroy network for sandbox \"6957107d4a132f249e2ee0af2e2c7942b69524430e0e88e972f7d7ea5830ea97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:23.183445 systemd[1]: run-netns-cni\x2d0a71833f\x2d70cc\x2d2eac\x2d9023\x2d71b58c8b96d1.mount: Deactivated successfully.
Apr 16 23:33:23.195446 containerd[2014]: time="2026-04-16T23:33:23.195366961Z" level=error msg="Failed to destroy network for sandbox \"0d64cca838322809c46421dc39b5ad232f49762c87ce5ffc8319d0abc6829f24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:23.196897 containerd[2014]: time="2026-04-16T23:33:23.196809649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p2qcr,Uid:74ea04a2-eb40-4e5c-a7e9-f65a5f7a4bed,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6957107d4a132f249e2ee0af2e2c7942b69524430e0e88e972f7d7ea5830ea97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:23.198762 kubelet[3456]: E0416 23:33:23.198678 3456 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6957107d4a132f249e2ee0af2e2c7942b69524430e0e88e972f7d7ea5830ea97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:23.198957 kubelet[3456]: E0416 23:33:23.198789 3456 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6957107d4a132f249e2ee0af2e2c7942b69524430e0e88e972f7d7ea5830ea97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p2qcr"
Apr 16 23:33:23.198957 kubelet[3456]: E0416 23:33:23.198829 3456 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6957107d4a132f249e2ee0af2e2c7942b69524430e0e88e972f7d7ea5830ea97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p2qcr"
Apr 16 23:33:23.198957 kubelet[3456]: E0416 23:33:23.198916 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p2qcr_kube-system(74ea04a2-eb40-4e5c-a7e9-f65a5f7a4bed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p2qcr_kube-system(74ea04a2-eb40-4e5c-a7e9-f65a5f7a4bed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6957107d4a132f249e2ee0af2e2c7942b69524430e0e88e972f7d7ea5830ea97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p2qcr" podUID="74ea04a2-eb40-4e5c-a7e9-f65a5f7a4bed"
Apr 16 23:33:23.205019 containerd[2014]: time="2026-04-16T23:33:23.201814309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gvwh7,Uid:12d12a87-eb98-402f-ac50-574c7d1f3b7f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d64cca838322809c46421dc39b5ad232f49762c87ce5ffc8319d0abc6829f24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:23.209784 kubelet[3456]: E0416 23:33:23.209449 3456 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d64cca838322809c46421dc39b5ad232f49762c87ce5ffc8319d0abc6829f24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:23.209784 kubelet[3456]: E0416 23:33:23.209544 3456 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d64cca838322809c46421dc39b5ad232f49762c87ce5ffc8319d0abc6829f24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gvwh7"
Apr 16 23:33:23.209784 kubelet[3456]: E0416 23:33:23.209582 3456 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d64cca838322809c46421dc39b5ad232f49762c87ce5ffc8319d0abc6829f24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gvwh7"
Apr 16 23:33:23.210086 kubelet[3456]: E0416 23:33:23.209689 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gvwh7_kube-system(12d12a87-eb98-402f-ac50-574c7d1f3b7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gvwh7_kube-system(12d12a87-eb98-402f-ac50-574c7d1f3b7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d64cca838322809c46421dc39b5ad232f49762c87ce5ffc8319d0abc6829f24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gvwh7" podUID="12d12a87-eb98-402f-ac50-574c7d1f3b7f"
Apr 16 23:33:23.307828 kubelet[3456]: I0416 23:33:23.306709 3456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nk69b" podStartSLOduration=4.190931014 podStartE2EDuration="19.306685825s" podCreationTimestamp="2026-04-16 23:33:04 +0000 UTC" firstStartedPulling="2026-04-16 23:33:04.723988845 +0000 UTC m=+29.644288060" lastFinishedPulling="2026-04-16 23:33:19.839743656 +0000 UTC m=+44.760042871" observedRunningTime="2026-04-16 23:33:23.300477517 +0000 UTC m=+48.220776792" watchObservedRunningTime="2026-04-16 23:33:23.306685825 +0000 UTC m=+48.226985052"
Apr 16 23:33:23.509268 containerd[2014]: 2026-04-16 23:33:23.331 [INFO][4519] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0"
Apr 16 23:33:23.509268 containerd[2014]: 2026-04-16 23:33:23.332 [INFO][4519] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0" iface="eth0" netns="/var/run/netns/cni-48986870-1abf-0b23-4f2e-d281fa229f4d"
Apr 16 23:33:23.509268 containerd[2014]: 2026-04-16 23:33:23.333 [INFO][4519] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0" iface="eth0" netns="/var/run/netns/cni-48986870-1abf-0b23-4f2e-d281fa229f4d"
Apr 16 23:33:23.509268 containerd[2014]: 2026-04-16 23:33:23.335 [INFO][4519] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0" iface="eth0" netns="/var/run/netns/cni-48986870-1abf-0b23-4f2e-d281fa229f4d"
Apr 16 23:33:23.509268 containerd[2014]: 2026-04-16 23:33:23.335 [INFO][4519] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0"
Apr 16 23:33:23.509268 containerd[2014]: 2026-04-16 23:33:23.335 [INFO][4519] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0"
Apr 16 23:33:23.509268 containerd[2014]: 2026-04-16 23:33:23.466 [INFO][4574] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0" HandleID="k8s-pod-network.b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0" Workload="ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-eth0"
Apr 16 23:33:23.509268 containerd[2014]: 2026-04-16 23:33:23.466 [INFO][4574] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 16 23:33:23.509268 containerd[2014]: 2026-04-16 23:33:23.467 [INFO][4574] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 16 23:33:23.509822 containerd[2014]: 2026-04-16 23:33:23.484 [WARNING][4574] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0" HandleID="k8s-pod-network.b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0" Workload="ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-eth0"
Apr 16 23:33:23.509822 containerd[2014]: 2026-04-16 23:33:23.484 [INFO][4574] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0" HandleID="k8s-pod-network.b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0" Workload="ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-eth0"
Apr 16 23:33:23.509822 containerd[2014]: 2026-04-16 23:33:23.487 [INFO][4574] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 16 23:33:23.509822 containerd[2014]: 2026-04-16 23:33:23.499 [INFO][4519] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0"
Apr 16 23:33:23.512502 containerd[2014]: time="2026-04-16T23:33:23.512418999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-569c4bbfd5-xdx25,Uid:dd66b20b-02d7-4907-8e47-757f4d8364ff,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:23.513661 kubelet[3456]: E0416 23:33:23.513570 3456 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is
running and has mounted /var/lib/calico/" Apr 16 23:33:23.513897 kubelet[3456]: E0416 23:33:23.513659 3456 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-569c4bbfd5-xdx25" Apr 16 23:33:23.513897 kubelet[3456]: E0416 23:33:23.513696 3456 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-569c4bbfd5-xdx25" Apr 16 23:33:23.514736 kubelet[3456]: E0416 23:33:23.513790 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-569c4bbfd5-xdx25_calico-system(dd66b20b-02d7-4907-8e47-757f4d8364ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-569c4bbfd5-xdx25_calico-system(dd66b20b-02d7-4907-8e47-757f4d8364ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4409b74bd4e4d8f4f3b26c3d169acc8105c9291d22066928bbcfb197bb964e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-569c4bbfd5-xdx25" podUID="dd66b20b-02d7-4907-8e47-757f4d8364ff" Apr 16 23:33:23.526164 containerd[2014]: 2026-04-16 23:33:23.337 [INFO][4543] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a" Apr 16 23:33:23.526164 containerd[2014]: 2026-04-16 23:33:23.338 [INFO][4543] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a" iface="eth0" netns="/var/run/netns/cni-55927bce-deb6-2efd-28a2-5dfe6e77680f" Apr 16 23:33:23.526164 containerd[2014]: 2026-04-16 23:33:23.338 [INFO][4543] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a" iface="eth0" netns="/var/run/netns/cni-55927bce-deb6-2efd-28a2-5dfe6e77680f" Apr 16 23:33:23.526164 containerd[2014]: 2026-04-16 23:33:23.340 [INFO][4543] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a" iface="eth0" netns="/var/run/netns/cni-55927bce-deb6-2efd-28a2-5dfe6e77680f" Apr 16 23:33:23.526164 containerd[2014]: 2026-04-16 23:33:23.340 [INFO][4543] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a" Apr 16 23:33:23.526164 containerd[2014]: 2026-04-16 23:33:23.340 [INFO][4543] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a" Apr 16 23:33:23.526164 containerd[2014]: 2026-04-16 23:33:23.476 [INFO][4576] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a" HandleID="k8s-pod-network.ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a" Workload="ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-eth0" Apr 16 23:33:23.526164 containerd[2014]: 2026-04-16 23:33:23.476 [INFO][4576] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 16 23:33:23.526164 containerd[2014]: 2026-04-16 23:33:23.487 [INFO][4576] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:33:23.528004 containerd[2014]: 2026-04-16 23:33:23.507 [WARNING][4576] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a" HandleID="k8s-pod-network.ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a" Workload="ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-eth0" Apr 16 23:33:23.528004 containerd[2014]: 2026-04-16 23:33:23.508 [INFO][4576] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a" HandleID="k8s-pod-network.ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a" Workload="ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-eth0" Apr 16 23:33:23.528004 containerd[2014]: 2026-04-16 23:33:23.512 [INFO][4576] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:33:23.528004 containerd[2014]: 2026-04-16 23:33:23.520 [INFO][4543] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a" Apr 16 23:33:23.529944 containerd[2014]: time="2026-04-16T23:33:23.529859499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xlvp6,Uid:c6915744-c93f-4fa3-b9f6-d711cdb2d534,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:23.531654 kubelet[3456]: E0416 23:33:23.530387 3456 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:23.531654 kubelet[3456]: E0416 23:33:23.531423 3456 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-xlvp6" Apr 16 23:33:23.531654 kubelet[3456]: E0416 23:33:23.531470 3456 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-xlvp6" Apr 16 23:33:23.532007 kubelet[3456]: E0416 23:33:23.531564 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-xlvp6_calico-system(c6915744-c93f-4fa3-b9f6-d711cdb2d534)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-xlvp6_calico-system(c6915744-c93f-4fa3-b9f6-d711cdb2d534)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab3eef2cefdddadd738679504f9ec812f9d5213430c8f566e159a5606d3aac9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-xlvp6" podUID="c6915744-c93f-4fa3-b9f6-d711cdb2d534" Apr 16 23:33:23.785609 systemd[1]: Created slice kubepods-besteffort-pod5e5719b3_71c2_46db_8619_93cea73547a5.slice - libcontainer container kubepods-besteffort-pod5e5719b3_71c2_46db_8619_93cea73547a5.slice. Apr 16 23:33:23.793109 containerd[2014]: time="2026-04-16T23:33:23.792955576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krskz,Uid:5e5719b3-71c2-46db-8619-93cea73547a5,Namespace:calico-system,Attempt:0,}" Apr 16 23:33:23.855904 systemd[1]: run-netns-cni\x2d55927bce\x2ddeb6\x2d2efd\x2d28a2\x2d5dfe6e77680f.mount: Deactivated successfully. Apr 16 23:33:23.856057 systemd[1]: run-netns-cni\x2db7d1999b\x2d33d0\x2d33ce\x2d9112\x2d07f1938fe040.mount: Deactivated successfully. Apr 16 23:33:23.856175 systemd[1]: run-netns-cni\x2d48986870\x2d1abf\x2d0b23\x2d4f2e\x2dd281fa229f4d.mount: Deactivated successfully. 
Apr 16 23:33:23.997274 systemd-networkd[1887]: caliac66c494f69: Link UP Apr 16 23:33:23.999863 systemd-networkd[1887]: caliac66c494f69: Gained carrier Apr 16 23:33:24.005263 (udev-worker)[4637]: Network interface NamePolicy= disabled on kernel command line. Apr 16 23:33:24.032909 containerd[2014]: 2026-04-16 23:33:23.842 [ERROR][4613] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:33:24.032909 containerd[2014]: 2026-04-16 23:33:23.873 [INFO][4613] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--254-k8s-csi--node--driver--krskz-eth0 csi-node-driver- calico-system 5e5719b3-71c2-46db-8619-93cea73547a5 759 0 2026-04-16 23:33:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-16-254 csi-node-driver-krskz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliac66c494f69 [] [] }} ContainerID="29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" Namespace="calico-system" Pod="csi-node-driver-krskz" WorkloadEndpoint="ip--172--31--16--254-k8s-csi--node--driver--krskz-" Apr 16 23:33:24.032909 containerd[2014]: 2026-04-16 23:33:23.874 [INFO][4613] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" Namespace="calico-system" Pod="csi-node-driver-krskz" WorkloadEndpoint="ip--172--31--16--254-k8s-csi--node--driver--krskz-eth0" Apr 16 23:33:24.032909 containerd[2014]: 2026-04-16 23:33:23.919 [INFO][4628] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" HandleID="k8s-pod-network.29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" Workload="ip--172--31--16--254-k8s-csi--node--driver--krskz-eth0" Apr 16 23:33:24.033634 containerd[2014]: 2026-04-16 23:33:23.935 [INFO][4628] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" HandleID="k8s-pod-network.29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" Workload="ip--172--31--16--254-k8s-csi--node--driver--krskz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000380150), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-254", "pod":"csi-node-driver-krskz", "timestamp":"2026-04-16 23:33:23.919472357 +0000 UTC"}, Hostname:"ip-172-31-16-254", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002e8000)} Apr 16 23:33:24.033634 containerd[2014]: 2026-04-16 23:33:23.935 [INFO][4628] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:33:24.033634 containerd[2014]: 2026-04-16 23:33:23.935 [INFO][4628] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:33:24.033634 containerd[2014]: 2026-04-16 23:33:23.935 [INFO][4628] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-254' Apr 16 23:33:24.033634 containerd[2014]: 2026-04-16 23:33:23.938 [INFO][4628] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" host="ip-172-31-16-254" Apr 16 23:33:24.033634 containerd[2014]: 2026-04-16 23:33:23.945 [INFO][4628] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-254" Apr 16 23:33:24.033634 containerd[2014]: 2026-04-16 23:33:23.952 [INFO][4628] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:24.033634 containerd[2014]: 2026-04-16 23:33:23.955 [INFO][4628] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:24.033634 containerd[2014]: 2026-04-16 23:33:23.959 [INFO][4628] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:24.034126 containerd[2014]: 2026-04-16 23:33:23.959 [INFO][4628] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" host="ip-172-31-16-254" Apr 16 23:33:24.034126 containerd[2014]: 2026-04-16 23:33:23.961 [INFO][4628] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f Apr 16 23:33:24.034126 containerd[2014]: 2026-04-16 23:33:23.967 [INFO][4628] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" host="ip-172-31-16-254" Apr 16 23:33:24.034126 containerd[2014]: 2026-04-16 23:33:23.978 [INFO][4628] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.193/26] block=192.168.58.192/26 
handle="k8s-pod-network.29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" host="ip-172-31-16-254" Apr 16 23:33:24.034126 containerd[2014]: 2026-04-16 23:33:23.978 [INFO][4628] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.193/26] handle="k8s-pod-network.29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" host="ip-172-31-16-254" Apr 16 23:33:24.034126 containerd[2014]: 2026-04-16 23:33:23.978 [INFO][4628] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:33:24.034126 containerd[2014]: 2026-04-16 23:33:23.978 [INFO][4628] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.193/26] IPv6=[] ContainerID="29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" HandleID="k8s-pod-network.29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" Workload="ip--172--31--16--254-k8s-csi--node--driver--krskz-eth0" Apr 16 23:33:24.034538 containerd[2014]: 2026-04-16 23:33:23.982 [INFO][4613] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" Namespace="calico-system" Pod="csi-node-driver-krskz" WorkloadEndpoint="ip--172--31--16--254-k8s-csi--node--driver--krskz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--254-k8s-csi--node--driver--krskz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e5719b3-71c2-46db-8619-93cea73547a5", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-254", ContainerID:"", Pod:"csi-node-driver-krskz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliac66c494f69", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:24.034676 containerd[2014]: 2026-04-16 23:33:23.982 [INFO][4613] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.193/32] ContainerID="29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" Namespace="calico-system" Pod="csi-node-driver-krskz" WorkloadEndpoint="ip--172--31--16--254-k8s-csi--node--driver--krskz-eth0" Apr 16 23:33:24.034676 containerd[2014]: 2026-04-16 23:33:23.982 [INFO][4613] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac66c494f69 ContainerID="29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" Namespace="calico-system" Pod="csi-node-driver-krskz" WorkloadEndpoint="ip--172--31--16--254-k8s-csi--node--driver--krskz-eth0" Apr 16 23:33:24.034676 containerd[2014]: 2026-04-16 23:33:24.000 [INFO][4613] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" Namespace="calico-system" Pod="csi-node-driver-krskz" WorkloadEndpoint="ip--172--31--16--254-k8s-csi--node--driver--krskz-eth0" Apr 16 23:33:24.034801 containerd[2014]: 2026-04-16 23:33:24.001 [INFO][4613] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" Namespace="calico-system" Pod="csi-node-driver-krskz" WorkloadEndpoint="ip--172--31--16--254-k8s-csi--node--driver--krskz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--254-k8s-csi--node--driver--krskz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5e5719b3-71c2-46db-8619-93cea73547a5", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-254", ContainerID:"29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f", Pod:"csi-node-driver-krskz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.58.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliac66c494f69", MAC:"5e:c3:7d:1f:ff:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:24.034919 containerd[2014]: 2026-04-16 23:33:24.028 [INFO][4613] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" Namespace="calico-system" Pod="csi-node-driver-krskz" WorkloadEndpoint="ip--172--31--16--254-k8s-csi--node--driver--krskz-eth0" Apr 16 23:33:24.079496 containerd[2014]: time="2026-04-16T23:33:24.078169381Z" level=info msg="connecting to shim 29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f" address="unix:///run/containerd/s/e25e250238b3346e8402f02f0aaa96a4f05d7a48317d07e98724f3e6a6afb529" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:33:24.124604 systemd[1]: Started cri-containerd-29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f.scope - libcontainer container 29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f. Apr 16 23:33:24.187271 containerd[2014]: time="2026-04-16T23:33:24.187209110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krskz,Uid:5e5719b3-71c2-46db-8619-93cea73547a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f\"" Apr 16 23:33:24.190177 containerd[2014]: time="2026-04-16T23:33:24.189819158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 16 23:33:24.214709 containerd[2014]: time="2026-04-16T23:33:24.214638110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xlvp6,Uid:c6915744-c93f-4fa3-b9f6-d711cdb2d534,Namespace:calico-system,Attempt:0,}" Apr 16 23:33:24.225684 containerd[2014]: time="2026-04-16T23:33:24.225384290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-569c4bbfd5-xdx25,Uid:dd66b20b-02d7-4907-8e47-757f4d8364ff,Namespace:calico-system,Attempt:0,}" Apr 16 23:33:24.378102 kubelet[3456]: I0416 23:33:24.377950 3456 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/04b00041-5135-47c6-80cb-115e4c7b10f3-nginx-config\") pod 
\"04b00041-5135-47c6-80cb-115e4c7b10f3\" (UID: \"04b00041-5135-47c6-80cb-115e4c7b10f3\") " Apr 16 23:33:24.378102 kubelet[3456]: I0416 23:33:24.378024 3456 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04b00041-5135-47c6-80cb-115e4c7b10f3-whisker-ca-bundle\") pod \"04b00041-5135-47c6-80cb-115e4c7b10f3\" (UID: \"04b00041-5135-47c6-80cb-115e4c7b10f3\") " Apr 16 23:33:24.378102 kubelet[3456]: I0416 23:33:24.378085 3456 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/04b00041-5135-47c6-80cb-115e4c7b10f3-whisker-backend-key-pair\") pod \"04b00041-5135-47c6-80cb-115e4c7b10f3\" (UID: \"04b00041-5135-47c6-80cb-115e4c7b10f3\") " Apr 16 23:33:24.380845 kubelet[3456]: I0416 23:33:24.378157 3456 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-877lg\" (UniqueName: \"kubernetes.io/projected/04b00041-5135-47c6-80cb-115e4c7b10f3-kube-api-access-877lg\") pod \"04b00041-5135-47c6-80cb-115e4c7b10f3\" (UID: \"04b00041-5135-47c6-80cb-115e4c7b10f3\") " Apr 16 23:33:24.381810 kubelet[3456]: I0416 23:33:24.381473 3456 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04b00041-5135-47c6-80cb-115e4c7b10f3-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "04b00041-5135-47c6-80cb-115e4c7b10f3" (UID: "04b00041-5135-47c6-80cb-115e4c7b10f3"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 23:33:24.382768 kubelet[3456]: I0416 23:33:24.382722 3456 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04b00041-5135-47c6-80cb-115e4c7b10f3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "04b00041-5135-47c6-80cb-115e4c7b10f3" (UID: "04b00041-5135-47c6-80cb-115e4c7b10f3"). 
InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 16 23:33:24.403601 kubelet[3456]: I0416 23:33:24.403415 3456 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04b00041-5135-47c6-80cb-115e4c7b10f3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "04b00041-5135-47c6-80cb-115e4c7b10f3" (UID: "04b00041-5135-47c6-80cb-115e4c7b10f3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 16 23:33:24.404236 kubelet[3456]: I0416 23:33:24.403741 3456 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b00041-5135-47c6-80cb-115e4c7b10f3-kube-api-access-877lg" (OuterVolumeSpecName: "kube-api-access-877lg") pod "04b00041-5135-47c6-80cb-115e4c7b10f3" (UID: "04b00041-5135-47c6-80cb-115e4c7b10f3"). InnerVolumeSpecName "kube-api-access-877lg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 16 23:33:24.480641 kubelet[3456]: I0416 23:33:24.480069 3456 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/04b00041-5135-47c6-80cb-115e4c7b10f3-nginx-config\") on node \"ip-172-31-16-254\" DevicePath \"\"" Apr 16 23:33:24.480641 kubelet[3456]: I0416 23:33:24.480362 3456 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04b00041-5135-47c6-80cb-115e4c7b10f3-whisker-ca-bundle\") on node \"ip-172-31-16-254\" DevicePath \"\"" Apr 16 23:33:24.480641 kubelet[3456]: I0416 23:33:24.480390 3456 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/04b00041-5135-47c6-80cb-115e4c7b10f3-whisker-backend-key-pair\") on node \"ip-172-31-16-254\" DevicePath \"\"" Apr 16 23:33:24.481253 kubelet[3456]: I0416 23:33:24.480573 3456 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-877lg\" (UniqueName: \"kubernetes.io/projected/04b00041-5135-47c6-80cb-115e4c7b10f3-kube-api-access-877lg\") on node \"ip-172-31-16-254\" DevicePath \"\"" Apr 16 23:33:24.562563 systemd-networkd[1887]: calic9c3cc994bb: Link UP Apr 16 23:33:24.563485 systemd-networkd[1887]: calic9c3cc994bb: Gained carrier Apr 16 23:33:24.606651 containerd[2014]: 2026-04-16 23:33:24.295 [ERROR][4689] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:33:24.606651 containerd[2014]: 2026-04-16 23:33:24.331 [INFO][4689] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-eth0 goldmane-5b85766d88- calico-system c6915744-c93f-4fa3-b9f6-d711cdb2d534 923 0 2026-04-16 23:33:00 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-16-254 goldmane-5b85766d88-xlvp6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic9c3cc994bb [] [] }} ContainerID="b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" Namespace="calico-system" Pod="goldmane-5b85766d88-xlvp6" WorkloadEndpoint="ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-" Apr 16 23:33:24.606651 containerd[2014]: 2026-04-16 23:33:24.331 [INFO][4689] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" Namespace="calico-system" Pod="goldmane-5b85766d88-xlvp6" WorkloadEndpoint="ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-eth0" Apr 16 23:33:24.606651 containerd[2014]: 2026-04-16 23:33:24.443 [INFO][4729] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" HandleID="k8s-pod-network.b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" Workload="ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-eth0" Apr 16 23:33:24.607446 containerd[2014]: 2026-04-16 23:33:24.481 [INFO][4729] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" HandleID="k8s-pod-network.b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" Workload="ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004deb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-254", "pod":"goldmane-5b85766d88-xlvp6", "timestamp":"2026-04-16 23:33:24.443193375 +0000 UTC"}, Hostname:"ip-172-31-16-254", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000186000)} Apr 16 23:33:24.607446 containerd[2014]: 2026-04-16 23:33:24.482 [INFO][4729] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:33:24.607446 containerd[2014]: 2026-04-16 23:33:24.482 [INFO][4729] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:33:24.607446 containerd[2014]: 2026-04-16 23:33:24.482 [INFO][4729] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-254' Apr 16 23:33:24.607446 containerd[2014]: 2026-04-16 23:33:24.489 [INFO][4729] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" host="ip-172-31-16-254" Apr 16 23:33:24.607446 containerd[2014]: 2026-04-16 23:33:24.502 [INFO][4729] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-254" Apr 16 23:33:24.607446 containerd[2014]: 2026-04-16 23:33:24.513 [INFO][4729] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:24.607446 containerd[2014]: 2026-04-16 23:33:24.517 [INFO][4729] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:24.607446 containerd[2014]: 2026-04-16 23:33:24.522 [INFO][4729] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:24.607896 containerd[2014]: 2026-04-16 23:33:24.522 [INFO][4729] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" host="ip-172-31-16-254" Apr 16 23:33:24.607896 containerd[2014]: 2026-04-16 23:33:24.525 [INFO][4729] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f Apr 16 23:33:24.607896 containerd[2014]: 2026-04-16 23:33:24.533 [INFO][4729] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" host="ip-172-31-16-254" Apr 16 23:33:24.607896 containerd[2014]: 2026-04-16 23:33:24.545 [INFO][4729] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.194/26] block=192.168.58.192/26 
handle="k8s-pod-network.b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" host="ip-172-31-16-254" Apr 16 23:33:24.607896 containerd[2014]: 2026-04-16 23:33:24.546 [INFO][4729] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.194/26] handle="k8s-pod-network.b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" host="ip-172-31-16-254" Apr 16 23:33:24.607896 containerd[2014]: 2026-04-16 23:33:24.546 [INFO][4729] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:33:24.607896 containerd[2014]: 2026-04-16 23:33:24.546 [INFO][4729] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.194/26] IPv6=[] ContainerID="b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" HandleID="k8s-pod-network.b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" Workload="ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-eth0" Apr 16 23:33:24.608358 containerd[2014]: 2026-04-16 23:33:24.551 [INFO][4689] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" Namespace="calico-system" Pod="goldmane-5b85766d88-xlvp6" WorkloadEndpoint="ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"c6915744-c93f-4fa3-b9f6-d711cdb2d534", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-254", ContainerID:"", Pod:"goldmane-5b85766d88-xlvp6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic9c3cc994bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:24.608358 containerd[2014]: 2026-04-16 23:33:24.551 [INFO][4689] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.194/32] ContainerID="b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" Namespace="calico-system" Pod="goldmane-5b85766d88-xlvp6" WorkloadEndpoint="ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-eth0" Apr 16 23:33:24.608558 containerd[2014]: 2026-04-16 23:33:24.552 [INFO][4689] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9c3cc994bb ContainerID="b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" Namespace="calico-system" Pod="goldmane-5b85766d88-xlvp6" WorkloadEndpoint="ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-eth0" Apr 16 23:33:24.608558 containerd[2014]: 2026-04-16 23:33:24.565 [INFO][4689] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" Namespace="calico-system" Pod="goldmane-5b85766d88-xlvp6" WorkloadEndpoint="ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-eth0" Apr 16 23:33:24.608647 containerd[2014]: 2026-04-16 23:33:24.568 [INFO][4689] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" Namespace="calico-system" Pod="goldmane-5b85766d88-xlvp6" WorkloadEndpoint="ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"c6915744-c93f-4fa3-b9f6-d711cdb2d534", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-254", ContainerID:"b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f", Pod:"goldmane-5b85766d88-xlvp6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.58.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic9c3cc994bb", MAC:"3e:f6:53:78:02:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:24.608787 containerd[2014]: 2026-04-16 23:33:24.599 [INFO][4689] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" Namespace="calico-system" Pod="goldmane-5b85766d88-xlvp6" 
WorkloadEndpoint="ip--172--31--16--254-k8s-goldmane--5b85766d88--xlvp6-eth0" Apr 16 23:33:24.629861 kubelet[3456]: I0416 23:33:24.629580 3456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:33:24.666638 containerd[2014]: time="2026-04-16T23:33:24.666565876Z" level=info msg="connecting to shim b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f" address="unix:///run/containerd/s/bec0569ec347400a8881394a38f3dfd97d7dd7432312e7bcc53beafe735e6de2" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:33:24.694877 systemd-networkd[1887]: cali047cf9f4220: Link UP Apr 16 23:33:24.699585 systemd-networkd[1887]: cali047cf9f4220: Gained carrier Apr 16 23:33:24.784540 containerd[2014]: 2026-04-16 23:33:24.356 [ERROR][4699] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:33:24.784540 containerd[2014]: 2026-04-16 23:33:24.400 [INFO][4699] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-eth0 calico-kube-controllers-569c4bbfd5- calico-system dd66b20b-02d7-4907-8e47-757f4d8364ff 922 0 2026-04-16 23:33:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:569c4bbfd5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-254 calico-kube-controllers-569c4bbfd5-xdx25 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali047cf9f4220 [] [] }} ContainerID="284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" Namespace="calico-system" Pod="calico-kube-controllers-569c4bbfd5-xdx25" 
WorkloadEndpoint="ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-" Apr 16 23:33:24.784540 containerd[2014]: 2026-04-16 23:33:24.404 [INFO][4699] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" Namespace="calico-system" Pod="calico-kube-controllers-569c4bbfd5-xdx25" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-eth0" Apr 16 23:33:24.784540 containerd[2014]: 2026-04-16 23:33:24.501 [INFO][4742] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" HandleID="k8s-pod-network.284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" Workload="ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-eth0" Apr 16 23:33:24.784943 containerd[2014]: 2026-04-16 23:33:24.522 [INFO][4742] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" HandleID="k8s-pod-network.284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" Workload="ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000380940), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-254", "pod":"calico-kube-controllers-569c4bbfd5-xdx25", "timestamp":"2026-04-16 23:33:24.501079731 +0000 UTC"}, Hostname:"ip-172-31-16-254", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000c6420)} Apr 16 23:33:24.784943 containerd[2014]: 2026-04-16 23:33:24.522 [INFO][4742] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 16 23:33:24.784943 containerd[2014]: 2026-04-16 23:33:24.546 [INFO][4742] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:33:24.784943 containerd[2014]: 2026-04-16 23:33:24.547 [INFO][4742] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-254' Apr 16 23:33:24.784943 containerd[2014]: 2026-04-16 23:33:24.601 [INFO][4742] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" host="ip-172-31-16-254" Apr 16 23:33:24.784943 containerd[2014]: 2026-04-16 23:33:24.614 [INFO][4742] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-254" Apr 16 23:33:24.784943 containerd[2014]: 2026-04-16 23:33:24.623 [INFO][4742] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:24.784943 containerd[2014]: 2026-04-16 23:33:24.626 [INFO][4742] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:24.784943 containerd[2014]: 2026-04-16 23:33:24.633 [INFO][4742] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:24.785414 containerd[2014]: 2026-04-16 23:33:24.634 [INFO][4742] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" host="ip-172-31-16-254" Apr 16 23:33:24.785414 containerd[2014]: 2026-04-16 23:33:24.637 [INFO][4742] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c Apr 16 23:33:24.785414 containerd[2014]: 2026-04-16 23:33:24.652 [INFO][4742] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" host="ip-172-31-16-254" Apr 16 23:33:24.785414 
containerd[2014]: 2026-04-16 23:33:24.670 [INFO][4742] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.195/26] block=192.168.58.192/26 handle="k8s-pod-network.284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" host="ip-172-31-16-254" Apr 16 23:33:24.785414 containerd[2014]: 2026-04-16 23:33:24.670 [INFO][4742] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.195/26] handle="k8s-pod-network.284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" host="ip-172-31-16-254" Apr 16 23:33:24.785414 containerd[2014]: 2026-04-16 23:33:24.670 [INFO][4742] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:33:24.785414 containerd[2014]: 2026-04-16 23:33:24.670 [INFO][4742] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.195/26] IPv6=[] ContainerID="284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" HandleID="k8s-pod-network.284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" Workload="ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-eth0" Apr 16 23:33:24.787416 containerd[2014]: 2026-04-16 23:33:24.683 [INFO][4699] cni-plugin/k8s.go 418: Populated endpoint ContainerID="284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" Namespace="calico-system" Pod="calico-kube-controllers-569c4bbfd5-xdx25" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-eth0", GenerateName:"calico-kube-controllers-569c4bbfd5-", Namespace:"calico-system", SelfLink:"", UID:"dd66b20b-02d7-4907-8e47-757f4d8364ff", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"569c4bbfd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-254", ContainerID:"", Pod:"calico-kube-controllers-569c4bbfd5-xdx25", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali047cf9f4220", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:24.787955 containerd[2014]: 2026-04-16 23:33:24.684 [INFO][4699] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.195/32] ContainerID="284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" Namespace="calico-system" Pod="calico-kube-controllers-569c4bbfd5-xdx25" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-eth0" Apr 16 23:33:24.787955 containerd[2014]: 2026-04-16 23:33:24.685 [INFO][4699] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali047cf9f4220 ContainerID="284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" Namespace="calico-system" Pod="calico-kube-controllers-569c4bbfd5-xdx25" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-eth0" Apr 16 23:33:24.787955 containerd[2014]: 2026-04-16 23:33:24.715 [INFO][4699] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" Namespace="calico-system" Pod="calico-kube-controllers-569c4bbfd5-xdx25" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-eth0" Apr 16 23:33:24.789906 containerd[2014]: 2026-04-16 23:33:24.718 [INFO][4699] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" Namespace="calico-system" Pod="calico-kube-controllers-569c4bbfd5-xdx25" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-eth0", GenerateName:"calico-kube-controllers-569c4bbfd5-", Namespace:"calico-system", SelfLink:"", UID:"dd66b20b-02d7-4907-8e47-757f4d8364ff", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"569c4bbfd5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-254", ContainerID:"284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c", Pod:"calico-kube-controllers-569c4bbfd5-xdx25", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.58.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali047cf9f4220", MAC:"fe:38:39:d2:d8:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:24.790420 containerd[2014]: 2026-04-16 23:33:24.776 [INFO][4699] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" Namespace="calico-system" Pod="calico-kube-controllers-569c4bbfd5-xdx25" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--kube--controllers--569c4bbfd5--xdx25-eth0" Apr 16 23:33:24.798412 systemd[1]: Started cri-containerd-b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f.scope - libcontainer container b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f. Apr 16 23:33:24.858589 containerd[2014]: time="2026-04-16T23:33:24.858511421Z" level=info msg="connecting to shim 284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c" address="unix:///run/containerd/s/3b6fa55c801d09f904fb8887c0028c5872e570ebf2f95225103fa76ea5c9af3e" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:33:24.877253 systemd[1]: var-lib-kubelet-pods-04b00041\x2d5135\x2d47c6\x2d80cb\x2d115e4c7b10f3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d877lg.mount: Deactivated successfully. Apr 16 23:33:24.877490 systemd[1]: var-lib-kubelet-pods-04b00041\x2d5135\x2d47c6\x2d80cb\x2d115e4c7b10f3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 16 23:33:24.953652 systemd[1]: Started cri-containerd-284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c.scope - libcontainer container 284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c. 
Apr 16 23:33:25.094360 containerd[2014]: time="2026-04-16T23:33:25.092957594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-xlvp6,Uid:c6915744-c93f-4fa3-b9f6-d711cdb2d534,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f\"" Apr 16 23:33:25.205020 containerd[2014]: time="2026-04-16T23:33:25.204856599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-569c4bbfd5-xdx25,Uid:dd66b20b-02d7-4907-8e47-757f4d8364ff,Namespace:calico-system,Attempt:0,} returns sandbox id \"284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c\"" Apr 16 23:33:25.252231 systemd[1]: Removed slice kubepods-besteffort-pod04b00041_5135_47c6_80cb_115e4c7b10f3.slice - libcontainer container kubepods-besteffort-pod04b00041_5135_47c6_80cb_115e4c7b10f3.slice. Apr 16 23:33:25.391401 systemd[1]: Created slice kubepods-besteffort-pod0b28eaa9_033a_41db_84d2_151cfaceff73.slice - libcontainer container kubepods-besteffort-pod0b28eaa9_033a_41db_84d2_151cfaceff73.slice. 
Apr 16 23:33:25.491398 kubelet[3456]: I0416 23:33:25.491165 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0b28eaa9-033a-41db-84d2-151cfaceff73-nginx-config\") pod \"whisker-96d84b779-hfw6f\" (UID: \"0b28eaa9-033a-41db-84d2-151cfaceff73\") " pod="calico-system/whisker-96d84b779-hfw6f" Apr 16 23:33:25.491398 kubelet[3456]: I0416 23:33:25.491254 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0b28eaa9-033a-41db-84d2-151cfaceff73-whisker-backend-key-pair\") pod \"whisker-96d84b779-hfw6f\" (UID: \"0b28eaa9-033a-41db-84d2-151cfaceff73\") " pod="calico-system/whisker-96d84b779-hfw6f" Apr 16 23:33:25.492333 kubelet[3456]: I0416 23:33:25.492177 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b28eaa9-033a-41db-84d2-151cfaceff73-whisker-ca-bundle\") pod \"whisker-96d84b779-hfw6f\" (UID: \"0b28eaa9-033a-41db-84d2-151cfaceff73\") " pod="calico-system/whisker-96d84b779-hfw6f" Apr 16 23:33:25.492437 kubelet[3456]: I0416 23:33:25.492362 3456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7nfm\" (UniqueName: \"kubernetes.io/projected/0b28eaa9-033a-41db-84d2-151cfaceff73-kube-api-access-f7nfm\") pod \"whisker-96d84b779-hfw6f\" (UID: \"0b28eaa9-033a-41db-84d2-151cfaceff73\") " pod="calico-system/whisker-96d84b779-hfw6f" Apr 16 23:33:25.708749 containerd[2014]: time="2026-04-16T23:33:25.707936525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-96d84b779-hfw6f,Uid:0b28eaa9-033a-41db-84d2-151cfaceff73,Namespace:calico-system,Attempt:0,}" Apr 16 23:33:25.796663 kubelet[3456]: I0416 23:33:25.796505 3456 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="04b00041-5135-47c6-80cb-115e4c7b10f3" path="/var/lib/kubelet/pods/04b00041-5135-47c6-80cb-115e4c7b10f3/volumes" Apr 16 23:33:25.868475 systemd-networkd[1887]: calic9c3cc994bb: Gained IPv6LL Apr 16 23:33:25.931570 systemd-networkd[1887]: caliac66c494f69: Gained IPv6LL Apr 16 23:33:26.190943 containerd[2014]: time="2026-04-16T23:33:26.190866412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:26.192967 containerd[2014]: time="2026-04-16T23:33:26.192873892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 16 23:33:26.196418 containerd[2014]: time="2026-04-16T23:33:26.196182892Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:26.223619 containerd[2014]: time="2026-04-16T23:33:26.222560404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:26.228107 containerd[2014]: time="2026-04-16T23:33:26.227899984Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 2.03798311s" Apr 16 23:33:26.228107 containerd[2014]: time="2026-04-16T23:33:26.227980012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 16 23:33:26.252531 systemd-networkd[1887]: cali047cf9f4220: Gained IPv6LL Apr 16 23:33:26.259950 
containerd[2014]: time="2026-04-16T23:33:26.259762348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 16 23:33:26.268980 containerd[2014]: time="2026-04-16T23:33:26.268643044Z" level=info msg="CreateContainer within sandbox \"29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 16 23:33:26.306210 systemd-networkd[1887]: cali0a603cc0426: Link UP Apr 16 23:33:26.308595 systemd-networkd[1887]: cali0a603cc0426: Gained carrier Apr 16 23:33:26.363802 containerd[2014]: 2026-04-16 23:33:25.916 [ERROR][4958] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 23:33:26.363802 containerd[2014]: 2026-04-16 23:33:25.970 [INFO][4958] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--254-k8s-whisker--96d84b779--hfw6f-eth0 whisker-96d84b779- calico-system 0b28eaa9-033a-41db-84d2-151cfaceff73 978 0 2026-04-16 23:33:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:96d84b779 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-16-254 whisker-96d84b779-hfw6f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0a603cc0426 [] [] }} ContainerID="42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" Namespace="calico-system" Pod="whisker-96d84b779-hfw6f" WorkloadEndpoint="ip--172--31--16--254-k8s-whisker--96d84b779--hfw6f-" Apr 16 23:33:26.363802 containerd[2014]: 2026-04-16 23:33:25.970 [INFO][4958] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" Namespace="calico-system" Pod="whisker-96d84b779-hfw6f" 
WorkloadEndpoint="ip--172--31--16--254-k8s-whisker--96d84b779--hfw6f-eth0" Apr 16 23:33:26.363802 containerd[2014]: 2026-04-16 23:33:26.134 [INFO][4976] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" HandleID="k8s-pod-network.42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" Workload="ip--172--31--16--254-k8s-whisker--96d84b779--hfw6f-eth0" Apr 16 23:33:26.364151 containerd[2014]: 2026-04-16 23:33:26.155 [INFO][4976] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" HandleID="k8s-pod-network.42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" Workload="ip--172--31--16--254-k8s-whisker--96d84b779--hfw6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003657e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-254", "pod":"whisker-96d84b779-hfw6f", "timestamp":"2026-04-16 23:33:26.134128636 +0000 UTC"}, Hostname:"ip-172-31-16-254", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002871e0)} Apr 16 23:33:26.364151 containerd[2014]: 2026-04-16 23:33:26.156 [INFO][4976] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:33:26.364151 containerd[2014]: 2026-04-16 23:33:26.158 [INFO][4976] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:33:26.364151 containerd[2014]: 2026-04-16 23:33:26.158 [INFO][4976] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-254' Apr 16 23:33:26.364151 containerd[2014]: 2026-04-16 23:33:26.164 [INFO][4976] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" host="ip-172-31-16-254" Apr 16 23:33:26.364151 containerd[2014]: 2026-04-16 23:33:26.175 [INFO][4976] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-254" Apr 16 23:33:26.364151 containerd[2014]: 2026-04-16 23:33:26.191 [INFO][4976] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:26.364151 containerd[2014]: 2026-04-16 23:33:26.199 [INFO][4976] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:26.364151 containerd[2014]: 2026-04-16 23:33:26.210 [INFO][4976] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:26.368446 containerd[2014]: 2026-04-16 23:33:26.211 [INFO][4976] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" host="ip-172-31-16-254" Apr 16 23:33:26.368446 containerd[2014]: 2026-04-16 23:33:26.216 [INFO][4976] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a Apr 16 23:33:26.368446 containerd[2014]: 2026-04-16 23:33:26.231 [INFO][4976] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" host="ip-172-31-16-254" Apr 16 23:33:26.368446 containerd[2014]: 2026-04-16 23:33:26.264 [INFO][4976] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.196/26] block=192.168.58.192/26 
handle="k8s-pod-network.42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" host="ip-172-31-16-254" Apr 16 23:33:26.368446 containerd[2014]: 2026-04-16 23:33:26.264 [INFO][4976] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.196/26] handle="k8s-pod-network.42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" host="ip-172-31-16-254" Apr 16 23:33:26.368446 containerd[2014]: 2026-04-16 23:33:26.264 [INFO][4976] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:33:26.368446 containerd[2014]: 2026-04-16 23:33:26.266 [INFO][4976] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.196/26] IPv6=[] ContainerID="42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" HandleID="k8s-pod-network.42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" Workload="ip--172--31--16--254-k8s-whisker--96d84b779--hfw6f-eth0" Apr 16 23:33:26.368813 containerd[2014]: 2026-04-16 23:33:26.283 [INFO][4958] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" Namespace="calico-system" Pod="whisker-96d84b779-hfw6f" WorkloadEndpoint="ip--172--31--16--254-k8s-whisker--96d84b779--hfw6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--254-k8s-whisker--96d84b779--hfw6f-eth0", GenerateName:"whisker-96d84b779-", Namespace:"calico-system", SelfLink:"", UID:"0b28eaa9-033a-41db-84d2-151cfaceff73", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"96d84b779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-254", ContainerID:"", Pod:"whisker-96d84b779-hfw6f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0a603cc0426", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:26.368813 containerd[2014]: 2026-04-16 23:33:26.285 [INFO][4958] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.196/32] ContainerID="42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" Namespace="calico-system" Pod="whisker-96d84b779-hfw6f" WorkloadEndpoint="ip--172--31--16--254-k8s-whisker--96d84b779--hfw6f-eth0" Apr 16 23:33:26.369002 containerd[2014]: 2026-04-16 23:33:26.286 [INFO][4958] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a603cc0426 ContainerID="42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" Namespace="calico-system" Pod="whisker-96d84b779-hfw6f" WorkloadEndpoint="ip--172--31--16--254-k8s-whisker--96d84b779--hfw6f-eth0" Apr 16 23:33:26.369002 containerd[2014]: 2026-04-16 23:33:26.306 [INFO][4958] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" Namespace="calico-system" Pod="whisker-96d84b779-hfw6f" WorkloadEndpoint="ip--172--31--16--254-k8s-whisker--96d84b779--hfw6f-eth0" Apr 16 23:33:26.369104 containerd[2014]: 2026-04-16 23:33:26.311 [INFO][4958] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" Namespace="calico-system" 
Pod="whisker-96d84b779-hfw6f" WorkloadEndpoint="ip--172--31--16--254-k8s-whisker--96d84b779--hfw6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--254-k8s-whisker--96d84b779--hfw6f-eth0", GenerateName:"whisker-96d84b779-", Namespace:"calico-system", SelfLink:"", UID:"0b28eaa9-033a-41db-84d2-151cfaceff73", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"96d84b779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-254", ContainerID:"42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a", Pod:"whisker-96d84b779-hfw6f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.58.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0a603cc0426", MAC:"82:69:ba:80:43:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:26.369215 containerd[2014]: 2026-04-16 23:33:26.354 [INFO][4958] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" Namespace="calico-system" Pod="whisker-96d84b779-hfw6f" WorkloadEndpoint="ip--172--31--16--254-k8s-whisker--96d84b779--hfw6f-eth0" Apr 16 23:33:26.376356 containerd[2014]: 
time="2026-04-16T23:33:26.372170969Z" level=info msg="Container 0ca1bdcdd8dd542d3ec46947c9ac0be9fe3613fde605c63239674d99703eaaf6: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:26.383800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4126180802.mount: Deactivated successfully. Apr 16 23:33:26.423370 containerd[2014]: time="2026-04-16T23:33:26.423197321Z" level=info msg="CreateContainer within sandbox \"29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0ca1bdcdd8dd542d3ec46947c9ac0be9fe3613fde605c63239674d99703eaaf6\"" Apr 16 23:33:26.424999 containerd[2014]: time="2026-04-16T23:33:26.424949969Z" level=info msg="StartContainer for \"0ca1bdcdd8dd542d3ec46947c9ac0be9fe3613fde605c63239674d99703eaaf6\"" Apr 16 23:33:26.433414 containerd[2014]: time="2026-04-16T23:33:26.433347653Z" level=info msg="connecting to shim 0ca1bdcdd8dd542d3ec46947c9ac0be9fe3613fde605c63239674d99703eaaf6" address="unix:///run/containerd/s/e25e250238b3346e8402f02f0aaa96a4f05d7a48317d07e98724f3e6a6afb529" protocol=ttrpc version=3 Apr 16 23:33:26.489793 containerd[2014]: time="2026-04-16T23:33:26.489601661Z" level=info msg="connecting to shim 42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a" address="unix:///run/containerd/s/aa6d1e7c971be0b3e785f66ae6e8af9688be6be8c764690bf833eeeb8052572b" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:33:26.512614 systemd[1]: Started cri-containerd-0ca1bdcdd8dd542d3ec46947c9ac0be9fe3613fde605c63239674d99703eaaf6.scope - libcontainer container 0ca1bdcdd8dd542d3ec46947c9ac0be9fe3613fde605c63239674d99703eaaf6. Apr 16 23:33:26.574520 systemd[1]: Started cri-containerd-42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a.scope - libcontainer container 42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a. 
Apr 16 23:33:26.746382 containerd[2014]: time="2026-04-16T23:33:26.746191939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-96d84b779-hfw6f,Uid:0b28eaa9-033a-41db-84d2-151cfaceff73,Namespace:calico-system,Attempt:0,} returns sandbox id \"42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a\"" Apr 16 23:33:26.851467 containerd[2014]: time="2026-04-16T23:33:26.851037523Z" level=info msg="StartContainer for \"0ca1bdcdd8dd542d3ec46947c9ac0be9fe3613fde605c63239674d99703eaaf6\" returns successfully" Apr 16 23:33:27.487210 systemd-networkd[1887]: vxlan.calico: Link UP Apr 16 23:33:27.487236 systemd-networkd[1887]: vxlan.calico: Gained carrier Apr 16 23:33:27.553760 (udev-worker)[4636]: Network interface NamePolicy= disabled on kernel command line. Apr 16 23:33:28.108553 systemd-networkd[1887]: cali0a603cc0426: Gained IPv6LL Apr 16 23:33:28.747460 systemd-networkd[1887]: vxlan.calico: Gained IPv6LL Apr 16 23:33:28.756791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834420657.mount: Deactivated successfully. 
Apr 16 23:33:29.374102 containerd[2014]: time="2026-04-16T23:33:29.374048720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:29.376871 containerd[2014]: time="2026-04-16T23:33:29.376811132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 16 23:33:29.377114 containerd[2014]: time="2026-04-16T23:33:29.377083724Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:29.382277 containerd[2014]: time="2026-04-16T23:33:29.382193084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:29.383968 containerd[2014]: time="2026-04-16T23:33:29.383595524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.12373864s" Apr 16 23:33:29.383968 containerd[2014]: time="2026-04-16T23:33:29.383652908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 16 23:33:29.385953 containerd[2014]: time="2026-04-16T23:33:29.385893980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 16 23:33:29.393283 containerd[2014]: time="2026-04-16T23:33:29.393214160Z" level=info msg="CreateContainer within sandbox 
\"b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 16 23:33:29.409431 containerd[2014]: time="2026-04-16T23:33:29.408555980Z" level=info msg="Container 66450900d7b8726412b0288b041e988642b89cd0c1fff5e8b9cf73a80edc40db: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:29.428175 containerd[2014]: time="2026-04-16T23:33:29.428056388Z" level=info msg="CreateContainer within sandbox \"b5baf69796b3ec2d8be0ce49914675d7e2a033667a071975e00e5ca7a7748e2f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"66450900d7b8726412b0288b041e988642b89cd0c1fff5e8b9cf73a80edc40db\"" Apr 16 23:33:29.430400 containerd[2014]: time="2026-04-16T23:33:29.429207908Z" level=info msg="StartContainer for \"66450900d7b8726412b0288b041e988642b89cd0c1fff5e8b9cf73a80edc40db\"" Apr 16 23:33:29.431878 containerd[2014]: time="2026-04-16T23:33:29.431826104Z" level=info msg="connecting to shim 66450900d7b8726412b0288b041e988642b89cd0c1fff5e8b9cf73a80edc40db" address="unix:///run/containerd/s/bec0569ec347400a8881394a38f3dfd97d7dd7432312e7bcc53beafe735e6de2" protocol=ttrpc version=3 Apr 16 23:33:29.477606 systemd[1]: Started cri-containerd-66450900d7b8726412b0288b041e988642b89cd0c1fff5e8b9cf73a80edc40db.scope - libcontainer container 66450900d7b8726412b0288b041e988642b89cd0c1fff5e8b9cf73a80edc40db. 
Apr 16 23:33:29.584022 containerd[2014]: time="2026-04-16T23:33:29.583974873Z" level=info msg="StartContainer for \"66450900d7b8726412b0288b041e988642b89cd0c1fff5e8b9cf73a80edc40db\" returns successfully" Apr 16 23:33:31.492928 kubelet[3456]: I0416 23:33:31.492192 3456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-xlvp6" podStartSLOduration=27.204568048 podStartE2EDuration="31.492161062s" podCreationTimestamp="2026-04-16 23:33:00 +0000 UTC" firstStartedPulling="2026-04-16 23:33:25.097944902 +0000 UTC m=+50.018244105" lastFinishedPulling="2026-04-16 23:33:29.385537904 +0000 UTC m=+54.305837119" observedRunningTime="2026-04-16 23:33:30.303993728 +0000 UTC m=+55.224292955" watchObservedRunningTime="2026-04-16 23:33:31.492161062 +0000 UTC m=+56.412460289" Apr 16 23:33:31.543171 ntpd[2217]: Listen normally on 6 vxlan.calico 192.168.58.192:123 Apr 16 23:33:31.544434 ntpd[2217]: Listen normally on 7 caliac66c494f69 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 16 23:33:31.546127 ntpd[2217]: 16 Apr 23:33:31 ntpd[2217]: Listen normally on 6 vxlan.calico 192.168.58.192:123 Apr 16 23:33:31.546127 ntpd[2217]: 16 Apr 23:33:31 ntpd[2217]: Listen normally on 7 caliac66c494f69 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 16 23:33:31.546127 ntpd[2217]: 16 Apr 23:33:31 ntpd[2217]: Listen normally on 8 calic9c3cc994bb [fe80::ecee:eeff:feee:eeee%5]:123 Apr 16 23:33:31.546127 ntpd[2217]: 16 Apr 23:33:31 ntpd[2217]: Listen normally on 9 cali047cf9f4220 [fe80::ecee:eeff:feee:eeee%6]:123 Apr 16 23:33:31.546127 ntpd[2217]: 16 Apr 23:33:31 ntpd[2217]: Listen normally on 10 cali0a603cc0426 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 16 23:33:31.546127 ntpd[2217]: 16 Apr 23:33:31 ntpd[2217]: Listen normally on 11 vxlan.calico [fe80::6408:27ff:fe2e:c38%8]:123 Apr 16 23:33:31.544483 ntpd[2217]: Listen normally on 8 calic9c3cc994bb [fe80::ecee:eeff:feee:eeee%5]:123 Apr 16 23:33:31.544527 ntpd[2217]: Listen normally on 9 cali047cf9f4220 
[fe80::ecee:eeff:feee:eeee%6]:123 Apr 16 23:33:31.544573 ntpd[2217]: Listen normally on 10 cali0a603cc0426 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 16 23:33:31.544618 ntpd[2217]: Listen normally on 11 vxlan.calico [fe80::6408:27ff:fe2e:c38%8]:123 Apr 16 23:33:31.992951 containerd[2014]: time="2026-04-16T23:33:31.992865013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:31.995152 containerd[2014]: time="2026-04-16T23:33:31.995095429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 16 23:33:31.996351 containerd[2014]: time="2026-04-16T23:33:31.996312565Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:32.000358 containerd[2014]: time="2026-04-16T23:33:32.000276837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:32.002130 containerd[2014]: time="2026-04-16T23:33:32.002076321Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.616124081s" Apr 16 23:33:32.002354 containerd[2014]: time="2026-04-16T23:33:32.002321517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 16 23:33:32.004274 
containerd[2014]: time="2026-04-16T23:33:32.004192197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 16 23:33:32.038501 containerd[2014]: time="2026-04-16T23:33:32.038437593Z" level=info msg="CreateContainer within sandbox \"284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 16 23:33:32.050392 containerd[2014]: time="2026-04-16T23:33:32.050332497Z" level=info msg="Container 5ba11206ab07d08f957abf971076b0a4b5f77d605174c2fbb762756a5ac5ad7a: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:32.063096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2168113627.mount: Deactivated successfully. Apr 16 23:33:32.070239 containerd[2014]: time="2026-04-16T23:33:32.070170441Z" level=info msg="CreateContainer within sandbox \"284ab95d85bc1dc81e22e5e1aabb040267e12509ed06f2295888bff869f62e3c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5ba11206ab07d08f957abf971076b0a4b5f77d605174c2fbb762756a5ac5ad7a\"" Apr 16 23:33:32.071350 containerd[2014]: time="2026-04-16T23:33:32.070801857Z" level=info msg="StartContainer for \"5ba11206ab07d08f957abf971076b0a4b5f77d605174c2fbb762756a5ac5ad7a\"" Apr 16 23:33:32.075104 containerd[2014]: time="2026-04-16T23:33:32.075051369Z" level=info msg="connecting to shim 5ba11206ab07d08f957abf971076b0a4b5f77d605174c2fbb762756a5ac5ad7a" address="unix:///run/containerd/s/3b6fa55c801d09f904fb8887c0028c5872e570ebf2f95225103fa76ea5c9af3e" protocol=ttrpc version=3 Apr 16 23:33:32.118588 systemd[1]: Started cri-containerd-5ba11206ab07d08f957abf971076b0a4b5f77d605174c2fbb762756a5ac5ad7a.scope - libcontainer container 5ba11206ab07d08f957abf971076b0a4b5f77d605174c2fbb762756a5ac5ad7a. 
Apr 16 23:33:32.218466 containerd[2014]: time="2026-04-16T23:33:32.218404450Z" level=info msg="StartContainer for \"5ba11206ab07d08f957abf971076b0a4b5f77d605174c2fbb762756a5ac5ad7a\" returns successfully" Apr 16 23:33:32.334579 kubelet[3456]: I0416 23:33:32.334082 3456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-569c4bbfd5-xdx25" podStartSLOduration=21.538052908 podStartE2EDuration="28.33405901s" podCreationTimestamp="2026-04-16 23:33:04 +0000 UTC" firstStartedPulling="2026-04-16 23:33:25.207759375 +0000 UTC m=+50.128058590" lastFinishedPulling="2026-04-16 23:33:32.003765489 +0000 UTC m=+56.924064692" observedRunningTime="2026-04-16 23:33:32.334003702 +0000 UTC m=+57.254302941" watchObservedRunningTime="2026-04-16 23:33:32.33405901 +0000 UTC m=+57.254358237" Apr 16 23:33:33.400792 containerd[2014]: time="2026-04-16T23:33:33.400722720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:33.403345 containerd[2014]: time="2026-04-16T23:33:33.403018752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Apr 16 23:33:33.405820 containerd[2014]: time="2026-04-16T23:33:33.405754080Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:33.412703 containerd[2014]: time="2026-04-16T23:33:33.412621116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:33.416958 containerd[2014]: time="2026-04-16T23:33:33.416142972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id 
\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.410929335s" Apr 16 23:33:33.416958 containerd[2014]: time="2026-04-16T23:33:33.416210796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Apr 16 23:33:33.422735 containerd[2014]: time="2026-04-16T23:33:33.422267016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 16 23:33:33.432843 containerd[2014]: time="2026-04-16T23:33:33.432778452Z" level=info msg="CreateContainer within sandbox \"42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 16 23:33:33.456358 containerd[2014]: time="2026-04-16T23:33:33.452535792Z" level=info msg="Container 73977a3a265f4fe66c31a9a293b997800b2fb3a1545ec2e419a971fedfd3d89f: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:33.472337 containerd[2014]: time="2026-04-16T23:33:33.471340680Z" level=info msg="CreateContainer within sandbox \"42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"73977a3a265f4fe66c31a9a293b997800b2fb3a1545ec2e419a971fedfd3d89f\"" Apr 16 23:33:33.473042 containerd[2014]: time="2026-04-16T23:33:33.472980576Z" level=info msg="StartContainer for \"73977a3a265f4fe66c31a9a293b997800b2fb3a1545ec2e419a971fedfd3d89f\"" Apr 16 23:33:33.477931 containerd[2014]: time="2026-04-16T23:33:33.477862188Z" level=info msg="connecting to shim 73977a3a265f4fe66c31a9a293b997800b2fb3a1545ec2e419a971fedfd3d89f" address="unix:///run/containerd/s/aa6d1e7c971be0b3e785f66ae6e8af9688be6be8c764690bf833eeeb8052572b" protocol=ttrpc version=3 Apr 16 
23:33:33.537610 systemd[1]: Started cri-containerd-73977a3a265f4fe66c31a9a293b997800b2fb3a1545ec2e419a971fedfd3d89f.scope - libcontainer container 73977a3a265f4fe66c31a9a293b997800b2fb3a1545ec2e419a971fedfd3d89f. Apr 16 23:33:33.621727 containerd[2014]: time="2026-04-16T23:33:33.621515881Z" level=info msg="StartContainer for \"73977a3a265f4fe66c31a9a293b997800b2fb3a1545ec2e419a971fedfd3d89f\" returns successfully" Apr 16 23:33:34.876096 containerd[2014]: time="2026-04-16T23:33:34.876032715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:34.877920 containerd[2014]: time="2026-04-16T23:33:34.877542615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 16 23:33:34.879136 containerd[2014]: time="2026-04-16T23:33:34.879081639Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:34.883099 containerd[2014]: time="2026-04-16T23:33:34.883048035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:34.884555 containerd[2014]: time="2026-04-16T23:33:34.884378583Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.461403447s" Apr 16 23:33:34.884555 containerd[2014]: time="2026-04-16T23:33:34.884429127Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 16 23:33:34.887340 containerd[2014]: time="2026-04-16T23:33:34.887202303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 16 23:33:34.891912 containerd[2014]: time="2026-04-16T23:33:34.891847779Z" level=info msg="CreateContainer within sandbox \"29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 16 23:33:34.918332 containerd[2014]: time="2026-04-16T23:33:34.916765587Z" level=info msg="Container 0db6f1d845363196c94cb389a81abadc1673189c695a490fb9985fc4dcc9bf6e: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:34.934858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1480073993.mount: Deactivated successfully. Apr 16 23:33:34.941318 containerd[2014]: time="2026-04-16T23:33:34.941234355Z" level=info msg="CreateContainer within sandbox \"29996cdbb8c5889a248b7813b86636ff4cecd50e602e5c44270b451e2ccf841f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0db6f1d845363196c94cb389a81abadc1673189c695a490fb9985fc4dcc9bf6e\"" Apr 16 23:33:34.944081 containerd[2014]: time="2026-04-16T23:33:34.943621995Z" level=info msg="StartContainer for \"0db6f1d845363196c94cb389a81abadc1673189c695a490fb9985fc4dcc9bf6e\"" Apr 16 23:33:34.948075 containerd[2014]: time="2026-04-16T23:33:34.948007539Z" level=info msg="connecting to shim 0db6f1d845363196c94cb389a81abadc1673189c695a490fb9985fc4dcc9bf6e" address="unix:///run/containerd/s/e25e250238b3346e8402f02f0aaa96a4f05d7a48317d07e98724f3e6a6afb529" protocol=ttrpc version=3 Apr 16 23:33:34.989623 systemd[1]: Started cri-containerd-0db6f1d845363196c94cb389a81abadc1673189c695a490fb9985fc4dcc9bf6e.scope - libcontainer container 
0db6f1d845363196c94cb389a81abadc1673189c695a490fb9985fc4dcc9bf6e. Apr 16 23:33:35.255846 containerd[2014]: time="2026-04-16T23:33:35.255765673Z" level=info msg="StartContainer for \"0db6f1d845363196c94cb389a81abadc1673189c695a490fb9985fc4dcc9bf6e\" returns successfully" Apr 16 23:33:35.359353 kubelet[3456]: I0416 23:33:35.359041 3456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-krskz" podStartSLOduration=20.6619159 podStartE2EDuration="31.359019073s" podCreationTimestamp="2026-04-16 23:33:04 +0000 UTC" firstStartedPulling="2026-04-16 23:33:24.188898518 +0000 UTC m=+49.109197733" lastFinishedPulling="2026-04-16 23:33:34.886001703 +0000 UTC m=+59.806300906" observedRunningTime="2026-04-16 23:33:35.358417213 +0000 UTC m=+60.278716512" watchObservedRunningTime="2026-04-16 23:33:35.359019073 +0000 UTC m=+60.279318288" Apr 16 23:33:35.772490 containerd[2014]: time="2026-04-16T23:33:35.772177707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gvwh7,Uid:12d12a87-eb98-402f-ac50-574c7d1f3b7f,Namespace:kube-system,Attempt:0,}" Apr 16 23:33:36.007014 kubelet[3456]: I0416 23:33:36.006962 3456 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 16 23:33:36.007014 kubelet[3456]: I0416 23:33:36.007029 3456 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 16 23:33:36.072369 systemd-networkd[1887]: calibd9aef1c9ff: Link UP Apr 16 23:33:36.079185 systemd-networkd[1887]: calibd9aef1c9ff: Gained carrier Apr 16 23:33:36.083936 (udev-worker)[5457]: Network interface NamePolicy= disabled on kernel command line. 
Apr 16 23:33:36.128811 containerd[2014]: 2026-04-16 23:33:35.873 [INFO][5437] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--254-k8s-coredns--674b8bbfcf--gvwh7-eth0 coredns-674b8bbfcf- kube-system 12d12a87-eb98-402f-ac50-574c7d1f3b7f 893 0 2026-04-16 23:32:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-254 coredns-674b8bbfcf-gvwh7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibd9aef1c9ff [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvwh7" WorkloadEndpoint="ip--172--31--16--254-k8s-coredns--674b8bbfcf--gvwh7-" Apr 16 23:33:36.128811 containerd[2014]: 2026-04-16 23:33:35.875 [INFO][5437] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvwh7" WorkloadEndpoint="ip--172--31--16--254-k8s-coredns--674b8bbfcf--gvwh7-eth0" Apr 16 23:33:36.128811 containerd[2014]: 2026-04-16 23:33:35.944 [INFO][5450] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" HandleID="k8s-pod-network.3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" Workload="ip--172--31--16--254-k8s-coredns--674b8bbfcf--gvwh7-eth0" Apr 16 23:33:36.130232 containerd[2014]: 2026-04-16 23:33:35.961 [INFO][5450] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" HandleID="k8s-pod-network.3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" 
Workload="ip--172--31--16--254-k8s-coredns--674b8bbfcf--gvwh7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e3c90), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-254", "pod":"coredns-674b8bbfcf-gvwh7", "timestamp":"2026-04-16 23:33:35.94424866 +0000 UTC"}, Hostname:"ip-172-31-16-254", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400032c6e0)} Apr 16 23:33:36.130232 containerd[2014]: 2026-04-16 23:33:35.961 [INFO][5450] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:33:36.130232 containerd[2014]: 2026-04-16 23:33:35.961 [INFO][5450] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:33:36.130232 containerd[2014]: 2026-04-16 23:33:35.961 [INFO][5450] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-254' Apr 16 23:33:36.130232 containerd[2014]: 2026-04-16 23:33:35.966 [INFO][5450] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" host="ip-172-31-16-254" Apr 16 23:33:36.130232 containerd[2014]: 2026-04-16 23:33:35.977 [INFO][5450] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-254" Apr 16 23:33:36.130232 containerd[2014]: 2026-04-16 23:33:35.998 [INFO][5450] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:36.130232 containerd[2014]: 2026-04-16 23:33:36.005 [INFO][5450] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:36.130232 containerd[2014]: 2026-04-16 23:33:36.013 [INFO][5450] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:36.131732 containerd[2014]: 2026-04-16 
23:33:36.014 [INFO][5450] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" host="ip-172-31-16-254" Apr 16 23:33:36.131732 containerd[2014]: 2026-04-16 23:33:36.020 [INFO][5450] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88 Apr 16 23:33:36.131732 containerd[2014]: 2026-04-16 23:33:36.029 [INFO][5450] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" host="ip-172-31-16-254" Apr 16 23:33:36.131732 containerd[2014]: 2026-04-16 23:33:36.043 [INFO][5450] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.197/26] block=192.168.58.192/26 handle="k8s-pod-network.3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" host="ip-172-31-16-254" Apr 16 23:33:36.131732 containerd[2014]: 2026-04-16 23:33:36.044 [INFO][5450] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.197/26] handle="k8s-pod-network.3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" host="ip-172-31-16-254" Apr 16 23:33:36.131732 containerd[2014]: 2026-04-16 23:33:36.046 [INFO][5450] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 23:33:36.131732 containerd[2014]: 2026-04-16 23:33:36.046 [INFO][5450] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.197/26] IPv6=[] ContainerID="3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" HandleID="k8s-pod-network.3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" Workload="ip--172--31--16--254-k8s-coredns--674b8bbfcf--gvwh7-eth0" Apr 16 23:33:36.132086 containerd[2014]: 2026-04-16 23:33:36.053 [INFO][5437] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvwh7" WorkloadEndpoint="ip--172--31--16--254-k8s-coredns--674b8bbfcf--gvwh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--254-k8s-coredns--674b8bbfcf--gvwh7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"12d12a87-eb98-402f-ac50-574c7d1f3b7f", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-254", ContainerID:"", Pod:"coredns-674b8bbfcf-gvwh7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd9aef1c9ff", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:36.132086 containerd[2014]: 2026-04-16 23:33:36.054 [INFO][5437] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.197/32] ContainerID="3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvwh7" WorkloadEndpoint="ip--172--31--16--254-k8s-coredns--674b8bbfcf--gvwh7-eth0" Apr 16 23:33:36.132086 containerd[2014]: 2026-04-16 23:33:36.054 [INFO][5437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd9aef1c9ff ContainerID="3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvwh7" WorkloadEndpoint="ip--172--31--16--254-k8s-coredns--674b8bbfcf--gvwh7-eth0" Apr 16 23:33:36.132086 containerd[2014]: 2026-04-16 23:33:36.081 [INFO][5437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvwh7" WorkloadEndpoint="ip--172--31--16--254-k8s-coredns--674b8bbfcf--gvwh7-eth0" Apr 16 23:33:36.132086 containerd[2014]: 2026-04-16 23:33:36.083 [INFO][5437] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvwh7" WorkloadEndpoint="ip--172--31--16--254-k8s-coredns--674b8bbfcf--gvwh7-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--254-k8s-coredns--674b8bbfcf--gvwh7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"12d12a87-eb98-402f-ac50-574c7d1f3b7f", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-254", ContainerID:"3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88", Pod:"coredns-674b8bbfcf-gvwh7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd9aef1c9ff", MAC:"62:e1:78:13:c1:4b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:36.132086 containerd[2014]: 2026-04-16 23:33:36.104 [INFO][5437] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" Namespace="kube-system" Pod="coredns-674b8bbfcf-gvwh7" WorkloadEndpoint="ip--172--31--16--254-k8s-coredns--674b8bbfcf--gvwh7-eth0" Apr 16 23:33:36.197645 containerd[2014]: time="2026-04-16T23:33:36.197569022Z" level=info msg="connecting to shim 3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88" address="unix:///run/containerd/s/57bcb69196c7d3a2ed552f16a5583f0a7952560df8111fac0b0e074500662fe5" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:33:36.261006 systemd[1]: Started cri-containerd-3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88.scope - libcontainer container 3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88. Apr 16 23:33:36.406946 containerd[2014]: time="2026-04-16T23:33:36.406894755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gvwh7,Uid:12d12a87-eb98-402f-ac50-574c7d1f3b7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88\"" Apr 16 23:33:36.424322 containerd[2014]: time="2026-04-16T23:33:36.423089943Z" level=info msg="CreateContainer within sandbox \"3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:33:36.442425 containerd[2014]: time="2026-04-16T23:33:36.442365003Z" level=info msg="Container b00e646a9444d216719ac7864ec7e0610be1eb0be5aa3ddcb7086e523f8fea78: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:36.455243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1110123158.mount: Deactivated successfully. 
Apr 16 23:33:36.481274 containerd[2014]: time="2026-04-16T23:33:36.481214655Z" level=info msg="CreateContainer within sandbox \"3ae6b8ccc803977306193922e8eb4fe9a6a6860873b934aafa258f2e0e7b5d88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b00e646a9444d216719ac7864ec7e0610be1eb0be5aa3ddcb7086e523f8fea78\"" Apr 16 23:33:36.485655 containerd[2014]: time="2026-04-16T23:33:36.485369283Z" level=info msg="StartContainer for \"b00e646a9444d216719ac7864ec7e0610be1eb0be5aa3ddcb7086e523f8fea78\"" Apr 16 23:33:36.507977 containerd[2014]: time="2026-04-16T23:33:36.507904299Z" level=info msg="connecting to shim b00e646a9444d216719ac7864ec7e0610be1eb0be5aa3ddcb7086e523f8fea78" address="unix:///run/containerd/s/57bcb69196c7d3a2ed552f16a5583f0a7952560df8111fac0b0e074500662fe5" protocol=ttrpc version=3 Apr 16 23:33:36.583626 systemd[1]: Started cri-containerd-b00e646a9444d216719ac7864ec7e0610be1eb0be5aa3ddcb7086e523f8fea78.scope - libcontainer container b00e646a9444d216719ac7864ec7e0610be1eb0be5aa3ddcb7086e523f8fea78. Apr 16 23:33:36.723385 containerd[2014]: time="2026-04-16T23:33:36.722337724Z" level=info msg="StartContainer for \"b00e646a9444d216719ac7864ec7e0610be1eb0be5aa3ddcb7086e523f8fea78\" returns successfully" Apr 16 23:33:36.772640 containerd[2014]: time="2026-04-16T23:33:36.772526680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p2qcr,Uid:74ea04a2-eb40-4e5c-a7e9-f65a5f7a4bed,Namespace:kube-system,Attempt:0,}" Apr 16 23:33:37.250135 (udev-worker)[5459]: Network interface NamePolicy= disabled on kernel command line. 
Apr 16 23:33:37.253463 systemd-networkd[1887]: calibece208bb9d: Link UP Apr 16 23:33:37.255682 systemd-networkd[1887]: calibece208bb9d: Gained carrier Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:36.963 [INFO][5558] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--254-k8s-coredns--674b8bbfcf--p2qcr-eth0 coredns-674b8bbfcf- kube-system 74ea04a2-eb40-4e5c-a7e9-f65a5f7a4bed 886 0 2026-04-16 23:32:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-254 coredns-674b8bbfcf-p2qcr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibece208bb9d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" Namespace="kube-system" Pod="coredns-674b8bbfcf-p2qcr" WorkloadEndpoint="ip--172--31--16--254-k8s-coredns--674b8bbfcf--p2qcr-" Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:36.963 [INFO][5558] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" Namespace="kube-system" Pod="coredns-674b8bbfcf-p2qcr" WorkloadEndpoint="ip--172--31--16--254-k8s-coredns--674b8bbfcf--p2qcr-eth0" Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.108 [INFO][5574] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" HandleID="k8s-pod-network.04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" Workload="ip--172--31--16--254-k8s-coredns--674b8bbfcf--p2qcr-eth0" Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.137 [INFO][5574] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" HandleID="k8s-pod-network.04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" Workload="ip--172--31--16--254-k8s-coredns--674b8bbfcf--p2qcr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005ee350), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-254", "pod":"coredns-674b8bbfcf-p2qcr", "timestamp":"2026-04-16 23:33:37.108361826 +0000 UTC"}, Hostname:"ip-172-31-16-254", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004db1e0)} Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.138 [INFO][5574] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.138 [INFO][5574] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.138 [INFO][5574] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-254' Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.146 [INFO][5574] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" host="ip-172-31-16-254" Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.157 [INFO][5574] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-254" Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.183 [INFO][5574] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.188 [INFO][5574] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.197 [INFO][5574] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.197 [INFO][5574] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" host="ip-172-31-16-254" Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.201 [INFO][5574] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573 Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.215 [INFO][5574] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" host="ip-172-31-16-254" Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.237 [INFO][5574] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.198/26] block=192.168.58.192/26 
handle="k8s-pod-network.04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" host="ip-172-31-16-254" Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.238 [INFO][5574] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.198/26] handle="k8s-pod-network.04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" host="ip-172-31-16-254" Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.238 [INFO][5574] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:33:37.310087 containerd[2014]: 2026-04-16 23:33:37.238 [INFO][5574] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.198/26] IPv6=[] ContainerID="04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" HandleID="k8s-pod-network.04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" Workload="ip--172--31--16--254-k8s-coredns--674b8bbfcf--p2qcr-eth0" Apr 16 23:33:37.314991 containerd[2014]: 2026-04-16 23:33:37.244 [INFO][5558] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" Namespace="kube-system" Pod="coredns-674b8bbfcf-p2qcr" WorkloadEndpoint="ip--172--31--16--254-k8s-coredns--674b8bbfcf--p2qcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--254-k8s-coredns--674b8bbfcf--p2qcr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"74ea04a2-eb40-4e5c-a7e9-f65a5f7a4bed", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-254", ContainerID:"", Pod:"coredns-674b8bbfcf-p2qcr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibece208bb9d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:37.314991 containerd[2014]: 2026-04-16 23:33:37.244 [INFO][5558] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.198/32] ContainerID="04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" Namespace="kube-system" Pod="coredns-674b8bbfcf-p2qcr" WorkloadEndpoint="ip--172--31--16--254-k8s-coredns--674b8bbfcf--p2qcr-eth0" Apr 16 23:33:37.314991 containerd[2014]: 2026-04-16 23:33:37.244 [INFO][5558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibece208bb9d ContainerID="04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" Namespace="kube-system" Pod="coredns-674b8bbfcf-p2qcr" WorkloadEndpoint="ip--172--31--16--254-k8s-coredns--674b8bbfcf--p2qcr-eth0" Apr 16 23:33:37.314991 containerd[2014]: 2026-04-16 23:33:37.257 [INFO][5558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-p2qcr" WorkloadEndpoint="ip--172--31--16--254-k8s-coredns--674b8bbfcf--p2qcr-eth0" Apr 16 23:33:37.314991 containerd[2014]: 2026-04-16 23:33:37.260 [INFO][5558] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" Namespace="kube-system" Pod="coredns-674b8bbfcf-p2qcr" WorkloadEndpoint="ip--172--31--16--254-k8s-coredns--674b8bbfcf--p2qcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--254-k8s-coredns--674b8bbfcf--p2qcr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"74ea04a2-eb40-4e5c-a7e9-f65a5f7a4bed", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 32, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-254", ContainerID:"04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573", Pod:"coredns-674b8bbfcf-p2qcr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.58.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibece208bb9d", MAC:"f2:b1:b6:cd:d9:22", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:37.314991 containerd[2014]: 2026-04-16 23:33:37.288 [INFO][5558] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" Namespace="kube-system" Pod="coredns-674b8bbfcf-p2qcr" WorkloadEndpoint="ip--172--31--16--254-k8s-coredns--674b8bbfcf--p2qcr-eth0" Apr 16 23:33:37.312743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3987701106.mount: Deactivated successfully. Apr 16 23:33:37.343566 containerd[2014]: time="2026-04-16T23:33:37.343274319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:37.349871 containerd[2014]: time="2026-04-16T23:33:37.349809687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Apr 16 23:33:37.352366 containerd[2014]: time="2026-04-16T23:33:37.352008411Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:37.386699 containerd[2014]: time="2026-04-16T23:33:37.386635887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:37.399269 kubelet[3456]: I0416 23:33:37.398715 3456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-674b8bbfcf-gvwh7" podStartSLOduration=57.398692923 podStartE2EDuration="57.398692923s" podCreationTimestamp="2026-04-16 23:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:33:37.395211231 +0000 UTC m=+62.315510470" watchObservedRunningTime="2026-04-16 23:33:37.398692923 +0000 UTC m=+62.318992150" Apr 16 23:33:37.402156 containerd[2014]: time="2026-04-16T23:33:37.402074452Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.514472573s" Apr 16 23:33:37.402156 containerd[2014]: time="2026-04-16T23:33:37.402147664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Apr 16 23:33:37.421900 containerd[2014]: time="2026-04-16T23:33:37.421759564Z" level=info msg="CreateContainer within sandbox \"42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 16 23:33:37.458768 containerd[2014]: time="2026-04-16T23:33:37.457676944Z" level=info msg="Container da367122dfbe3d0fa14da86510780684b84e02cd4556671a89f690c4a0cfe81c: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:37.503069 containerd[2014]: time="2026-04-16T23:33:37.502092772Z" level=info msg="CreateContainer within sandbox \"42a17d523dbe9542377d9f78250ce742f0acb024157e2a62910c4a3dee91a97a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"da367122dfbe3d0fa14da86510780684b84e02cd4556671a89f690c4a0cfe81c\"" Apr 16 
23:33:37.514870 containerd[2014]: time="2026-04-16T23:33:37.514701016Z" level=info msg="StartContainer for \"da367122dfbe3d0fa14da86510780684b84e02cd4556671a89f690c4a0cfe81c\"" Apr 16 23:33:37.515581 systemd-networkd[1887]: calibd9aef1c9ff: Gained IPv6LL Apr 16 23:33:37.530215 containerd[2014]: time="2026-04-16T23:33:37.529848700Z" level=info msg="connecting to shim da367122dfbe3d0fa14da86510780684b84e02cd4556671a89f690c4a0cfe81c" address="unix:///run/containerd/s/aa6d1e7c971be0b3e785f66ae6e8af9688be6be8c764690bf833eeeb8052572b" protocol=ttrpc version=3 Apr 16 23:33:37.558529 containerd[2014]: time="2026-04-16T23:33:37.556460716Z" level=info msg="connecting to shim 04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573" address="unix:///run/containerd/s/3278bc017c15f0c41ff0200fcd6db3a8576884b7c96939dbc672a72bccbb287d" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:33:37.709689 systemd[1]: Started cri-containerd-04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573.scope - libcontainer container 04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573. Apr 16 23:33:37.713103 systemd[1]: Started cri-containerd-da367122dfbe3d0fa14da86510780684b84e02cd4556671a89f690c4a0cfe81c.scope - libcontainer container da367122dfbe3d0fa14da86510780684b84e02cd4556671a89f690c4a0cfe81c. 
Apr 16 23:33:37.778792 containerd[2014]: time="2026-04-16T23:33:37.778663277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-744c4c5668-n7rcp,Uid:9bc6d820-19b1-4a25-9507-b10429f10481,Namespace:calico-system,Attempt:0,}" Apr 16 23:33:37.856153 containerd[2014]: time="2026-04-16T23:33:37.855607446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p2qcr,Uid:74ea04a2-eb40-4e5c-a7e9-f65a5f7a4bed,Namespace:kube-system,Attempt:0,} returns sandbox id \"04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573\"" Apr 16 23:33:37.867514 containerd[2014]: time="2026-04-16T23:33:37.866653302Z" level=info msg="CreateContainer within sandbox \"04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:33:37.900233 containerd[2014]: time="2026-04-16T23:33:37.900079014Z" level=info msg="StartContainer for \"da367122dfbe3d0fa14da86510780684b84e02cd4556671a89f690c4a0cfe81c\" returns successfully" Apr 16 23:33:37.905348 containerd[2014]: time="2026-04-16T23:33:37.904943982Z" level=info msg="Container 5adb7daa31bd6c359e779436358ca12ba97d9719f733d5c4a9133fe14ad39523: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:37.919553 containerd[2014]: time="2026-04-16T23:33:37.919141410Z" level=info msg="CreateContainer within sandbox \"04253b4a6c518e0a95fb5d702bada39d7c5ed1708b338c4163eed2cdacaa4573\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5adb7daa31bd6c359e779436358ca12ba97d9719f733d5c4a9133fe14ad39523\"" Apr 16 23:33:37.922320 containerd[2014]: time="2026-04-16T23:33:37.922190082Z" level=info msg="StartContainer for \"5adb7daa31bd6c359e779436358ca12ba97d9719f733d5c4a9133fe14ad39523\"" Apr 16 23:33:37.926439 containerd[2014]: time="2026-04-16T23:33:37.925936626Z" level=info msg="connecting to shim 5adb7daa31bd6c359e779436358ca12ba97d9719f733d5c4a9133fe14ad39523" 
address="unix:///run/containerd/s/3278bc017c15f0c41ff0200fcd6db3a8576884b7c96939dbc672a72bccbb287d" protocol=ttrpc version=3 Apr 16 23:33:37.990617 systemd[1]: Started cri-containerd-5adb7daa31bd6c359e779436358ca12ba97d9719f733d5c4a9133fe14ad39523.scope - libcontainer container 5adb7daa31bd6c359e779436358ca12ba97d9719f733d5c4a9133fe14ad39523. Apr 16 23:33:38.138060 containerd[2014]: time="2026-04-16T23:33:38.137995443Z" level=info msg="StartContainer for \"5adb7daa31bd6c359e779436358ca12ba97d9719f733d5c4a9133fe14ad39523\" returns successfully" Apr 16 23:33:38.206884 systemd-networkd[1887]: calic327c259ec0: Link UP Apr 16 23:33:38.209539 systemd-networkd[1887]: calic327c259ec0: Gained carrier Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:37.946 [INFO][5664] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--n7rcp-eth0 calico-apiserver-744c4c5668- calico-system 9bc6d820-19b1-4a25-9507-b10429f10481 890 0 2026-04-16 23:33:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:744c4c5668 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-254 calico-apiserver-744c4c5668-n7rcp eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calic327c259ec0 [] [] }} ContainerID="351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" Namespace="calico-system" Pod="calico-apiserver-744c4c5668-n7rcp" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--n7rcp-" Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:37.948 [INFO][5664] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" Namespace="calico-system" Pod="calico-apiserver-744c4c5668-n7rcp" 
WorkloadEndpoint="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--n7rcp-eth0" Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.044 [INFO][5705] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" HandleID="k8s-pod-network.351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" Workload="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--n7rcp-eth0" Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.081 [INFO][5705] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" HandleID="k8s-pod-network.351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" Workload="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--n7rcp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000367e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-254", "pod":"calico-apiserver-744c4c5668-n7rcp", "timestamp":"2026-04-16 23:33:38.044984139 +0000 UTC"}, Hostname:"ip-172-31-16-254", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000283080)} Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.081 [INFO][5705] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.081 [INFO][5705] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.081 [INFO][5705] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-254' Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.087 [INFO][5705] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" host="ip-172-31-16-254" Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.110 [INFO][5705] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-254" Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.129 [INFO][5705] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.135 [INFO][5705] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.145 [INFO][5705] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.145 [INFO][5705] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" host="ip-172-31-16-254" Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.152 [INFO][5705] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.172 [INFO][5705] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" host="ip-172-31-16-254" Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.194 [INFO][5705] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.199/26] block=192.168.58.192/26 
handle="k8s-pod-network.351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" host="ip-172-31-16-254" Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.194 [INFO][5705] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.199/26] handle="k8s-pod-network.351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" host="ip-172-31-16-254" Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.194 [INFO][5705] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:33:38.253329 containerd[2014]: 2026-04-16 23:33:38.194 [INFO][5705] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.199/26] IPv6=[] ContainerID="351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" HandleID="k8s-pod-network.351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" Workload="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--n7rcp-eth0" Apr 16 23:33:38.256641 containerd[2014]: 2026-04-16 23:33:38.200 [INFO][5664] cni-plugin/k8s.go 418: Populated endpoint ContainerID="351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" Namespace="calico-system" Pod="calico-apiserver-744c4c5668-n7rcp" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--n7rcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--n7rcp-eth0", GenerateName:"calico-apiserver-744c4c5668-", Namespace:"calico-system", SelfLink:"", UID:"9bc6d820-19b1-4a25-9507-b10429f10481", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"744c4c5668", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-254", ContainerID:"", Pod:"calico-apiserver-744c4c5668-n7rcp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic327c259ec0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:38.256641 containerd[2014]: 2026-04-16 23:33:38.201 [INFO][5664] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.199/32] ContainerID="351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" Namespace="calico-system" Pod="calico-apiserver-744c4c5668-n7rcp" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--n7rcp-eth0" Apr 16 23:33:38.256641 containerd[2014]: 2026-04-16 23:33:38.201 [INFO][5664] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic327c259ec0 ContainerID="351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" Namespace="calico-system" Pod="calico-apiserver-744c4c5668-n7rcp" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--n7rcp-eth0" Apr 16 23:33:38.256641 containerd[2014]: 2026-04-16 23:33:38.208 [INFO][5664] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" Namespace="calico-system" Pod="calico-apiserver-744c4c5668-n7rcp" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--n7rcp-eth0" Apr 16 23:33:38.256641 containerd[2014]: 2026-04-16 23:33:38.213 [INFO][5664] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" Namespace="calico-system" Pod="calico-apiserver-744c4c5668-n7rcp" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--n7rcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--n7rcp-eth0", GenerateName:"calico-apiserver-744c4c5668-", Namespace:"calico-system", SelfLink:"", UID:"9bc6d820-19b1-4a25-9507-b10429f10481", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"744c4c5668", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-254", ContainerID:"351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e", Pod:"calico-apiserver-744c4c5668-n7rcp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic327c259ec0", MAC:"42:95:c2:1a:b3:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:38.256641 containerd[2014]: 2026-04-16 23:33:38.244 [INFO][5664] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" Namespace="calico-system" Pod="calico-apiserver-744c4c5668-n7rcp" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--n7rcp-eth0" Apr 16 23:33:38.304915 containerd[2014]: time="2026-04-16T23:33:38.304837072Z" level=info msg="connecting to shim 351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e" address="unix:///run/containerd/s/e4009db0ed0c9beb7f3c1475ecaa9fdedb0a628dec371167f78a190938db7582" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:33:38.413340 systemd-networkd[1887]: calibece208bb9d: Gained IPv6LL Apr 16 23:33:38.502334 kubelet[3456]: I0416 23:33:38.501370 3456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-96d84b779-hfw6f" podStartSLOduration=2.84640058 podStartE2EDuration="13.501346589s" podCreationTimestamp="2026-04-16 23:33:25 +0000 UTC" firstStartedPulling="2026-04-16 23:33:26.753504415 +0000 UTC m=+51.673803630" lastFinishedPulling="2026-04-16 23:33:37.408450436 +0000 UTC m=+62.328749639" observedRunningTime="2026-04-16 23:33:38.444619565 +0000 UTC m=+63.364918792" watchObservedRunningTime="2026-04-16 23:33:38.501346589 +0000 UTC m=+63.421645840" Apr 16 23:33:38.503453 kubelet[3456]: I0416 23:33:38.503382 3456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-p2qcr" podStartSLOduration=58.503360873 podStartE2EDuration="58.503360873s" podCreationTimestamp="2026-04-16 23:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:33:38.500896937 +0000 UTC m=+63.421196224" watchObservedRunningTime="2026-04-16 23:33:38.503360873 +0000 UTC m=+63.423660112" Apr 16 23:33:38.598634 systemd[1]: Started cri-containerd-351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e.scope - libcontainer container 
351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e. Apr 16 23:33:38.755572 containerd[2014]: time="2026-04-16T23:33:38.754939986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-744c4c5668-n7rcp,Uid:9bc6d820-19b1-4a25-9507-b10429f10481,Namespace:calico-system,Attempt:0,} returns sandbox id \"351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e\"" Apr 16 23:33:38.760633 containerd[2014]: time="2026-04-16T23:33:38.760232430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 23:33:38.771397 containerd[2014]: time="2026-04-16T23:33:38.771345246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-744c4c5668-pgmzq,Uid:ee424a99-5ee7-4660-9b0b-b14d2676c736,Namespace:calico-system,Attempt:0,}" Apr 16 23:33:39.000089 systemd-networkd[1887]: cali2a799cb0a38: Link UP Apr 16 23:33:39.002200 systemd-networkd[1887]: cali2a799cb0a38: Gained carrier Apr 16 23:33:39.026110 systemd[1]: Started sshd@7-172.31.16.254:22-20.229.252.112:41398.service - OpenSSH per-connection server daemon (20.229.252.112:41398). 
Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.863 [INFO][5804] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--pgmzq-eth0 calico-apiserver-744c4c5668- calico-system ee424a99-5ee7-4660-9b0b-b14d2676c736 892 0 2026-04-16 23:33:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:744c4c5668 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-254 calico-apiserver-744c4c5668-pgmzq eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali2a799cb0a38 [] [] }} ContainerID="3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" Namespace="calico-system" Pod="calico-apiserver-744c4c5668-pgmzq" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--pgmzq-" Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.864 [INFO][5804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" Namespace="calico-system" Pod="calico-apiserver-744c4c5668-pgmzq" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--pgmzq-eth0" Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.921 [INFO][5821] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" HandleID="k8s-pod-network.3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" Workload="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--pgmzq-eth0" Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.939 [INFO][5821] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" 
HandleID="k8s-pod-network.3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" Workload="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--pgmzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002fbdd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-254", "pod":"calico-apiserver-744c4c5668-pgmzq", "timestamp":"2026-04-16 23:33:38.921899179 +0000 UTC"}, Hostname:"ip-172-31-16-254", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000301600)} Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.939 [INFO][5821] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.939 [INFO][5821] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.939 [INFO][5821] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-254' Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.943 [INFO][5821] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" host="ip-172-31-16-254" Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.950 [INFO][5821] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-254" Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.957 [INFO][5821] ipam/ipam.go 526: Trying affinity for 192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.960 [INFO][5821] ipam/ipam.go 160: Attempting to load block cidr=192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.965 [INFO][5821] ipam/ipam.go 237: Affinity is confirmed and block has 
been loaded cidr=192.168.58.192/26 host="ip-172-31-16-254" Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.965 [INFO][5821] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.58.192/26 handle="k8s-pod-network.3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" host="ip-172-31-16-254" Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.968 [INFO][5821] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27 Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.974 [INFO][5821] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.58.192/26 handle="k8s-pod-network.3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" host="ip-172-31-16-254" Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.986 [INFO][5821] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.58.200/26] block=192.168.58.192/26 handle="k8s-pod-network.3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" host="ip-172-31-16-254" Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.986 [INFO][5821] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.58.200/26] handle="k8s-pod-network.3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" host="ip-172-31-16-254" Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.986 [INFO][5821] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 23:33:39.048369 containerd[2014]: 2026-04-16 23:33:38.987 [INFO][5821] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.58.200/26] IPv6=[] ContainerID="3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" HandleID="k8s-pod-network.3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" Workload="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--pgmzq-eth0" Apr 16 23:33:39.052414 containerd[2014]: 2026-04-16 23:33:38.992 [INFO][5804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" Namespace="calico-system" Pod="calico-apiserver-744c4c5668-pgmzq" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--pgmzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--pgmzq-eth0", GenerateName:"calico-apiserver-744c4c5668-", Namespace:"calico-system", SelfLink:"", UID:"ee424a99-5ee7-4660-9b0b-b14d2676c736", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"744c4c5668", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-254", ContainerID:"", Pod:"calico-apiserver-744c4c5668-pgmzq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.200/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2a799cb0a38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:39.052414 containerd[2014]: 2026-04-16 23:33:38.992 [INFO][5804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.58.200/32] ContainerID="3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" Namespace="calico-system" Pod="calico-apiserver-744c4c5668-pgmzq" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--pgmzq-eth0" Apr 16 23:33:39.052414 containerd[2014]: 2026-04-16 23:33:38.992 [INFO][5804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a799cb0a38 ContainerID="3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" Namespace="calico-system" Pod="calico-apiserver-744c4c5668-pgmzq" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--pgmzq-eth0" Apr 16 23:33:39.052414 containerd[2014]: 2026-04-16 23:33:39.003 [INFO][5804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" Namespace="calico-system" Pod="calico-apiserver-744c4c5668-pgmzq" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--pgmzq-eth0" Apr 16 23:33:39.052414 containerd[2014]: 2026-04-16 23:33:39.005 [INFO][5804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" Namespace="calico-system" Pod="calico-apiserver-744c4c5668-pgmzq" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--pgmzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--pgmzq-eth0", GenerateName:"calico-apiserver-744c4c5668-", Namespace:"calico-system", SelfLink:"", UID:"ee424a99-5ee7-4660-9b0b-b14d2676c736", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"744c4c5668", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-254", ContainerID:"3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27", Pod:"calico-apiserver-744c4c5668-pgmzq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.58.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2a799cb0a38", MAC:"8a:73:26:fe:c7:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:39.052414 containerd[2014]: 2026-04-16 23:33:39.037 [INFO][5804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" Namespace="calico-system" Pod="calico-apiserver-744c4c5668-pgmzq" WorkloadEndpoint="ip--172--31--16--254-k8s-calico--apiserver--744c4c5668--pgmzq-eth0" Apr 16 23:33:39.149987 containerd[2014]: time="2026-04-16T23:33:39.149679844Z" level=info msg="connecting to shim 
3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27" address="unix:///run/containerd/s/93432524b7065b4be3673896e00c9d1fda962aee9fe5c538b7d45cf8eeab6427" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:33:39.222620 systemd[1]: Started cri-containerd-3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27.scope - libcontainer container 3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27. Apr 16 23:33:39.322857 containerd[2014]: time="2026-04-16T23:33:39.322673993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-744c4c5668-pgmzq,Uid:ee424a99-5ee7-4660-9b0b-b14d2676c736,Namespace:calico-system,Attempt:0,} returns sandbox id \"3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27\"" Apr 16 23:33:39.499871 systemd-networkd[1887]: calic327c259ec0: Gained IPv6LL Apr 16 23:33:40.005798 sshd[5830]: Accepted publickey for core from 20.229.252.112 port 41398 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:33:40.009258 sshd-session[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:33:40.019392 systemd-logind[1991]: New session 8 of user core. Apr 16 23:33:40.026610 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 16 23:33:40.718317 sshd[5900]: Connection closed by 20.229.252.112 port 41398 Apr 16 23:33:40.719461 sshd-session[5830]: pam_unix(sshd:session): session closed for user core Apr 16 23:33:40.726382 systemd[1]: sshd@7-172.31.16.254:22-20.229.252.112:41398.service: Deactivated successfully. Apr 16 23:33:40.731327 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 23:33:40.733750 systemd-logind[1991]: Session 8 logged out. Waiting for processes to exit. Apr 16 23:33:40.737090 systemd-logind[1991]: Removed session 8. 
Apr 16 23:33:40.779628 systemd-networkd[1887]: cali2a799cb0a38: Gained IPv6LL Apr 16 23:33:42.773010 containerd[2014]: time="2026-04-16T23:33:42.772616770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:42.775921 containerd[2014]: time="2026-04-16T23:33:42.775845442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 16 23:33:42.777407 containerd[2014]: time="2026-04-16T23:33:42.777327154Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:42.784163 containerd[2014]: time="2026-04-16T23:33:42.784112302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:42.787585 containerd[2014]: time="2026-04-16T23:33:42.787535038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 4.0272407s" Apr 16 23:33:42.788115 containerd[2014]: time="2026-04-16T23:33:42.787740286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 16 23:33:42.791734 containerd[2014]: time="2026-04-16T23:33:42.791669458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 23:33:42.798170 containerd[2014]: time="2026-04-16T23:33:42.798112966Z" level=info 
msg="CreateContainer within sandbox \"351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 23:33:42.815996 containerd[2014]: time="2026-04-16T23:33:42.815926882Z" level=info msg="Container 0cc5979c374fbf2e9039a0032b2c93408e8ccc4dd66b34140045d318eeec4b22: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:42.834225 containerd[2014]: time="2026-04-16T23:33:42.834071878Z" level=info msg="CreateContainer within sandbox \"351a079cada4f1efd94265e3dc3ba6a1a70c34144790d301f631debe29c2a06e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0cc5979c374fbf2e9039a0032b2c93408e8ccc4dd66b34140045d318eeec4b22\"" Apr 16 23:33:42.835497 containerd[2014]: time="2026-04-16T23:33:42.835418926Z" level=info msg="StartContainer for \"0cc5979c374fbf2e9039a0032b2c93408e8ccc4dd66b34140045d318eeec4b22\"" Apr 16 23:33:42.839434 containerd[2014]: time="2026-04-16T23:33:42.839345039Z" level=info msg="connecting to shim 0cc5979c374fbf2e9039a0032b2c93408e8ccc4dd66b34140045d318eeec4b22" address="unix:///run/containerd/s/e4009db0ed0c9beb7f3c1475ecaa9fdedb0a628dec371167f78a190938db7582" protocol=ttrpc version=3 Apr 16 23:33:42.878881 systemd[1]: Started cri-containerd-0cc5979c374fbf2e9039a0032b2c93408e8ccc4dd66b34140045d318eeec4b22.scope - libcontainer container 0cc5979c374fbf2e9039a0032b2c93408e8ccc4dd66b34140045d318eeec4b22. 
Apr 16 23:33:42.973913 containerd[2014]: time="2026-04-16T23:33:42.973843883Z" level=info msg="StartContainer for \"0cc5979c374fbf2e9039a0032b2c93408e8ccc4dd66b34140045d318eeec4b22\" returns successfully" Apr 16 23:33:43.259450 containerd[2014]: time="2026-04-16T23:33:43.259364745Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:43.262524 containerd[2014]: time="2026-04-16T23:33:43.262468581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 16 23:33:43.266163 containerd[2014]: time="2026-04-16T23:33:43.266079957Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 474.346899ms" Apr 16 23:33:43.266288 containerd[2014]: time="2026-04-16T23:33:43.266163801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 16 23:33:43.277366 containerd[2014]: time="2026-04-16T23:33:43.277208349Z" level=info msg="CreateContainer within sandbox \"3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 23:33:43.297125 containerd[2014]: time="2026-04-16T23:33:43.297054513Z" level=info msg="Container 6e9efddfb3888dc60755575b913b8287b79e306dc23254be34e138437ba2f2dd: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:43.320763 containerd[2014]: time="2026-04-16T23:33:43.320687097Z" level=info msg="CreateContainer within sandbox \"3775b26d62e70b60d681d3bfa9eaf279b655fdbe787edc60781cea1cc9d19d27\" 
for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6e9efddfb3888dc60755575b913b8287b79e306dc23254be34e138437ba2f2dd\"" Apr 16 23:33:43.323960 containerd[2014]: time="2026-04-16T23:33:43.323898801Z" level=info msg="StartContainer for \"6e9efddfb3888dc60755575b913b8287b79e306dc23254be34e138437ba2f2dd\"" Apr 16 23:33:43.327482 containerd[2014]: time="2026-04-16T23:33:43.327368469Z" level=info msg="connecting to shim 6e9efddfb3888dc60755575b913b8287b79e306dc23254be34e138437ba2f2dd" address="unix:///run/containerd/s/93432524b7065b4be3673896e00c9d1fda962aee9fe5c538b7d45cf8eeab6427" protocol=ttrpc version=3 Apr 16 23:33:43.379609 systemd[1]: Started cri-containerd-6e9efddfb3888dc60755575b913b8287b79e306dc23254be34e138437ba2f2dd.scope - libcontainer container 6e9efddfb3888dc60755575b913b8287b79e306dc23254be34e138437ba2f2dd. Apr 16 23:33:43.543140 ntpd[2217]: Listen normally on 12 calibd9aef1c9ff [fe80::ecee:eeff:feee:eeee%11]:123 Apr 16 23:33:43.544822 ntpd[2217]: 16 Apr 23:33:43 ntpd[2217]: Listen normally on 12 calibd9aef1c9ff [fe80::ecee:eeff:feee:eeee%11]:123 Apr 16 23:33:43.544822 ntpd[2217]: 16 Apr 23:33:43 ntpd[2217]: Listen normally on 13 calibece208bb9d [fe80::ecee:eeff:feee:eeee%12]:123 Apr 16 23:33:43.544822 ntpd[2217]: 16 Apr 23:33:43 ntpd[2217]: Listen normally on 14 calic327c259ec0 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 16 23:33:43.544822 ntpd[2217]: 16 Apr 23:33:43 ntpd[2217]: Listen normally on 15 cali2a799cb0a38 [fe80::ecee:eeff:feee:eeee%14]:123 Apr 16 23:33:43.543224 ntpd[2217]: Listen normally on 13 calibece208bb9d [fe80::ecee:eeff:feee:eeee%12]:123 Apr 16 23:33:43.543842 ntpd[2217]: Listen normally on 14 calic327c259ec0 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 16 23:33:43.543902 ntpd[2217]: Listen normally on 15 cali2a799cb0a38 [fe80::ecee:eeff:feee:eeee%14]:123 Apr 16 23:33:43.678749 containerd[2014]: time="2026-04-16T23:33:43.678655427Z" level=info msg="StartContainer for 
\"6e9efddfb3888dc60755575b913b8287b79e306dc23254be34e138437ba2f2dd\" returns successfully" Apr 16 23:33:44.448681 kubelet[3456]: I0416 23:33:44.448624 3456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:33:44.481902 kubelet[3456]: I0416 23:33:44.481678 3456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-744c4c5668-n7rcp" podStartSLOduration=39.451959955 podStartE2EDuration="43.481653479s" podCreationTimestamp="2026-04-16 23:33:01 +0000 UTC" firstStartedPulling="2026-04-16 23:33:38.759497178 +0000 UTC m=+63.679796393" lastFinishedPulling="2026-04-16 23:33:42.789190702 +0000 UTC m=+67.709489917" observedRunningTime="2026-04-16 23:33:43.47139409 +0000 UTC m=+68.391693317" watchObservedRunningTime="2026-04-16 23:33:44.481653479 +0000 UTC m=+69.401952694" Apr 16 23:33:45.456084 kubelet[3456]: I0416 23:33:45.455482 3456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:33:45.896033 systemd[1]: Started sshd@8-172.31.16.254:22-20.229.252.112:33844.service - OpenSSH per-connection server daemon (20.229.252.112:33844). Apr 16 23:33:46.785719 sshd[6033]: Accepted publickey for core from 20.229.252.112 port 33844 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:33:46.788774 sshd-session[6033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:33:46.797475 systemd-logind[1991]: New session 9 of user core. Apr 16 23:33:46.806845 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 23:33:47.424220 sshd[6040]: Connection closed by 20.229.252.112 port 33844 Apr 16 23:33:47.425108 sshd-session[6033]: pam_unix(sshd:session): session closed for user core Apr 16 23:33:47.430989 systemd-logind[1991]: Session 9 logged out. Waiting for processes to exit. Apr 16 23:33:47.432222 systemd[1]: sshd@8-172.31.16.254:22-20.229.252.112:33844.service: Deactivated successfully. 
Apr 16 23:33:47.436404 systemd[1]: session-9.scope: Deactivated successfully. Apr 16 23:33:47.441355 systemd-logind[1991]: Removed session 9. Apr 16 23:33:52.602102 systemd[1]: Started sshd@9-172.31.16.254:22-20.229.252.112:33846.service - OpenSSH per-connection server daemon (20.229.252.112:33846). Apr 16 23:33:53.492348 sshd[6068]: Accepted publickey for core from 20.229.252.112 port 33846 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:33:53.494490 sshd-session[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:33:53.503398 systemd-logind[1991]: New session 10 of user core. Apr 16 23:33:53.511574 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 16 23:33:54.122139 sshd[6071]: Connection closed by 20.229.252.112 port 33846 Apr 16 23:33:54.123149 sshd-session[6068]: pam_unix(sshd:session): session closed for user core Apr 16 23:33:54.130975 systemd[1]: sshd@9-172.31.16.254:22-20.229.252.112:33846.service: Deactivated successfully. Apr 16 23:33:54.134647 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 23:33:54.136898 systemd-logind[1991]: Session 10 logged out. Waiting for processes to exit. Apr 16 23:33:54.140197 systemd-logind[1991]: Removed session 10. 
Apr 16 23:33:54.423378 kubelet[3456]: I0416 23:33:54.423021 3456 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-744c4c5668-pgmzq" podStartSLOduration=49.481871752 podStartE2EDuration="53.422996864s" podCreationTimestamp="2026-04-16 23:33:01 +0000 UTC" firstStartedPulling="2026-04-16 23:33:39.326346281 +0000 UTC m=+64.246645484" lastFinishedPulling="2026-04-16 23:33:43.267471381 +0000 UTC m=+68.187770596" observedRunningTime="2026-04-16 23:33:44.482783183 +0000 UTC m=+69.403082410" watchObservedRunningTime="2026-04-16 23:33:54.422996864 +0000 UTC m=+79.343296091"
Apr 16 23:33:57.607383 kubelet[3456]: I0416 23:33:57.606448 3456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 23:33:59.314591 systemd[1]: Started sshd@10-172.31.16.254:22-20.229.252.112:44232.service - OpenSSH per-connection server daemon (20.229.252.112:44232).
Apr 16 23:34:00.232514 sshd[6116]: Accepted publickey for core from 20.229.252.112 port 44232 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:00.235287 sshd-session[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:00.243668 systemd-logind[1991]: New session 11 of user core.
Apr 16 23:34:00.251638 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 16 23:34:00.903578 sshd[6135]: Connection closed by 20.229.252.112 port 44232
Apr 16 23:34:00.904488 sshd-session[6116]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:00.911249 systemd[1]: sshd@10-172.31.16.254:22-20.229.252.112:44232.service: Deactivated successfully.
Apr 16 23:34:00.915283 systemd[1]: session-11.scope: Deactivated successfully.
Apr 16 23:34:00.917700 systemd-logind[1991]: Session 11 logged out. Waiting for processes to exit.
Apr 16 23:34:00.920910 systemd-logind[1991]: Removed session 11.
Apr 16 23:34:01.085417 systemd[1]: Started sshd@11-172.31.16.254:22-20.229.252.112:44236.service - OpenSSH per-connection server daemon (20.229.252.112:44236).
Apr 16 23:34:01.996450 sshd[6148]: Accepted publickey for core from 20.229.252.112 port 44236 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:01.999031 sshd-session[6148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:02.009089 systemd-logind[1991]: New session 12 of user core.
Apr 16 23:34:02.018588 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 16 23:34:02.758347 sshd[6174]: Connection closed by 20.229.252.112 port 44236
Apr 16 23:34:02.759864 sshd-session[6148]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:02.768089 systemd[1]: sshd@11-172.31.16.254:22-20.229.252.112:44236.service: Deactivated successfully.
Apr 16 23:34:02.775887 systemd[1]: session-12.scope: Deactivated successfully.
Apr 16 23:34:02.781045 systemd-logind[1991]: Session 12 logged out. Waiting for processes to exit.
Apr 16 23:34:02.783671 systemd-logind[1991]: Removed session 12.
Apr 16 23:34:02.933160 systemd[1]: Started sshd@12-172.31.16.254:22-20.229.252.112:44242.service - OpenSSH per-connection server daemon (20.229.252.112:44242).
Apr 16 23:34:03.831572 sshd[6206]: Accepted publickey for core from 20.229.252.112 port 44242 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:03.834025 sshd-session[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:03.842523 systemd-logind[1991]: New session 13 of user core.
Apr 16 23:34:03.850589 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 16 23:34:04.460609 sshd[6209]: Connection closed by 20.229.252.112 port 44242
Apr 16 23:34:04.464184 sshd-session[6206]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:04.478252 systemd[1]: sshd@12-172.31.16.254:22-20.229.252.112:44242.service: Deactivated successfully.
Apr 16 23:34:04.478602 systemd-logind[1991]: Session 13 logged out. Waiting for processes to exit.
Apr 16 23:34:04.487362 systemd[1]: session-13.scope: Deactivated successfully.
Apr 16 23:34:04.496193 systemd-logind[1991]: Removed session 13.
Apr 16 23:34:09.649256 systemd[1]: Started sshd@13-172.31.16.254:22-20.229.252.112:44710.service - OpenSSH per-connection server daemon (20.229.252.112:44710).
Apr 16 23:34:09.725320 kubelet[3456]: I0416 23:34:09.725246 3456 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 16 23:34:10.562611 sshd[6262]: Accepted publickey for core from 20.229.252.112 port 44710 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:10.565318 sshd-session[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:10.577903 systemd-logind[1991]: New session 14 of user core.
Apr 16 23:34:10.585626 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 16 23:34:11.214237 sshd[6267]: Connection closed by 20.229.252.112 port 44710
Apr 16 23:34:11.215644 sshd-session[6262]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:11.223855 systemd-logind[1991]: Session 14 logged out. Waiting for processes to exit.
Apr 16 23:34:11.224322 systemd[1]: sshd@13-172.31.16.254:22-20.229.252.112:44710.service: Deactivated successfully.
Apr 16 23:34:11.230170 systemd[1]: session-14.scope: Deactivated successfully.
Apr 16 23:34:11.233869 systemd-logind[1991]: Removed session 14.
Apr 16 23:34:16.390211 systemd[1]: Started sshd@14-172.31.16.254:22-20.229.252.112:43724.service - OpenSSH per-connection server daemon (20.229.252.112:43724).
Apr 16 23:34:17.289468 sshd[6282]: Accepted publickey for core from 20.229.252.112 port 43724 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:17.291000 sshd-session[6282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:17.299997 systemd-logind[1991]: New session 15 of user core.
Apr 16 23:34:17.305540 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 16 23:34:17.922732 sshd[6285]: Connection closed by 20.229.252.112 port 43724
Apr 16 23:34:17.924019 sshd-session[6282]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:17.931950 systemd[1]: sshd@14-172.31.16.254:22-20.229.252.112:43724.service: Deactivated successfully.
Apr 16 23:34:17.938244 systemd[1]: session-15.scope: Deactivated successfully.
Apr 16 23:34:17.940522 systemd-logind[1991]: Session 15 logged out. Waiting for processes to exit.
Apr 16 23:34:17.944609 systemd-logind[1991]: Removed session 15.
Apr 16 23:34:18.104444 systemd[1]: Started sshd@15-172.31.16.254:22-20.229.252.112:43726.service - OpenSSH per-connection server daemon (20.229.252.112:43726).
Apr 16 23:34:19.014345 sshd[6297]: Accepted publickey for core from 20.229.252.112 port 43726 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:19.016703 sshd-session[6297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:19.025175 systemd-logind[1991]: New session 16 of user core.
Apr 16 23:34:19.034591 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 16 23:34:19.956533 sshd[6300]: Connection closed by 20.229.252.112 port 43726
Apr 16 23:34:19.956975 sshd-session[6297]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:19.964990 systemd[1]: sshd@15-172.31.16.254:22-20.229.252.112:43726.service: Deactivated successfully.
Apr 16 23:34:19.971189 systemd[1]: session-16.scope: Deactivated successfully.
Apr 16 23:34:19.974134 systemd-logind[1991]: Session 16 logged out. Waiting for processes to exit.
Apr 16 23:34:19.977691 systemd-logind[1991]: Removed session 16.
Apr 16 23:34:20.131399 systemd[1]: Started sshd@16-172.31.16.254:22-20.229.252.112:43728.service - OpenSSH per-connection server daemon (20.229.252.112:43728).
Apr 16 23:34:21.017713 sshd[6316]: Accepted publickey for core from 20.229.252.112 port 43728 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:21.020339 sshd-session[6316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:21.028152 systemd-logind[1991]: New session 17 of user core.
Apr 16 23:34:21.034585 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 16 23:34:22.592722 sshd[6319]: Connection closed by 20.229.252.112 port 43728
Apr 16 23:34:22.593789 sshd-session[6316]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:22.603021 systemd[1]: sshd@16-172.31.16.254:22-20.229.252.112:43728.service: Deactivated successfully.
Apr 16 23:34:22.607801 systemd[1]: session-17.scope: Deactivated successfully.
Apr 16 23:34:22.610286 systemd-logind[1991]: Session 17 logged out. Waiting for processes to exit.
Apr 16 23:34:22.614017 systemd-logind[1991]: Removed session 17.
Apr 16 23:34:22.775860 systemd[1]: Started sshd@17-172.31.16.254:22-20.229.252.112:43736.service - OpenSSH per-connection server daemon (20.229.252.112:43736).
Apr 16 23:34:23.693104 sshd[6345]: Accepted publickey for core from 20.229.252.112 port 43736 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:23.695525 sshd-session[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:23.703384 systemd-logind[1991]: New session 18 of user core.
Apr 16 23:34:23.710586 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 16 23:34:24.660814 sshd[6348]: Connection closed by 20.229.252.112 port 43736
Apr 16 23:34:24.661341 sshd-session[6345]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:24.670679 systemd[1]: sshd@17-172.31.16.254:22-20.229.252.112:43736.service: Deactivated successfully.
Apr 16 23:34:24.674866 systemd[1]: session-18.scope: Deactivated successfully.
Apr 16 23:34:24.678166 systemd-logind[1991]: Session 18 logged out. Waiting for processes to exit.
Apr 16 23:34:24.682150 systemd-logind[1991]: Removed session 18.
Apr 16 23:34:24.845810 systemd[1]: Started sshd@18-172.31.16.254:22-20.229.252.112:43750.service - OpenSSH per-connection server daemon (20.229.252.112:43750).
Apr 16 23:34:25.755845 sshd[6383]: Accepted publickey for core from 20.229.252.112 port 43750 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:25.758411 sshd-session[6383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:25.769667 systemd-logind[1991]: New session 19 of user core.
Apr 16 23:34:25.778610 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 16 23:34:26.383397 sshd[6386]: Connection closed by 20.229.252.112 port 43750
Apr 16 23:34:26.384679 sshd-session[6383]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:26.393407 systemd-logind[1991]: Session 19 logged out. Waiting for processes to exit.
Apr 16 23:34:26.393959 systemd[1]: sshd@18-172.31.16.254:22-20.229.252.112:43750.service: Deactivated successfully.
Apr 16 23:34:26.402908 systemd[1]: session-19.scope: Deactivated successfully.
Apr 16 23:34:26.411433 systemd-logind[1991]: Removed session 19.
Apr 16 23:34:31.566779 systemd[1]: Started sshd@19-172.31.16.254:22-20.229.252.112:45046.service - OpenSSH per-connection server daemon (20.229.252.112:45046).
Apr 16 23:34:32.470750 sshd[6423]: Accepted publickey for core from 20.229.252.112 port 45046 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:32.475092 sshd-session[6423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:32.487462 systemd-logind[1991]: New session 20 of user core.
Apr 16 23:34:32.496655 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 16 23:34:33.107623 sshd[6448]: Connection closed by 20.229.252.112 port 45046
Apr 16 23:34:33.108422 sshd-session[6423]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:33.116025 systemd[1]: sshd@19-172.31.16.254:22-20.229.252.112:45046.service: Deactivated successfully.
Apr 16 23:34:33.116769 systemd-logind[1991]: Session 20 logged out. Waiting for processes to exit.
Apr 16 23:34:33.120862 systemd[1]: session-20.scope: Deactivated successfully.
Apr 16 23:34:33.125050 systemd-logind[1991]: Removed session 20.
Apr 16 23:34:38.286664 systemd[1]: Started sshd@20-172.31.16.254:22-20.229.252.112:34696.service - OpenSSH per-connection server daemon (20.229.252.112:34696).
Apr 16 23:34:39.192464 sshd[6461]: Accepted publickey for core from 20.229.252.112 port 34696 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:39.196080 sshd-session[6461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:39.208246 systemd-logind[1991]: New session 21 of user core.
Apr 16 23:34:39.214652 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 16 23:34:39.906664 sshd[6464]: Connection closed by 20.229.252.112 port 34696
Apr 16 23:34:39.908893 sshd-session[6461]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:39.916727 systemd[1]: sshd@20-172.31.16.254:22-20.229.252.112:34696.service: Deactivated successfully.
Apr 16 23:34:39.925659 systemd[1]: session-21.scope: Deactivated successfully.
Apr 16 23:34:39.931164 systemd-logind[1991]: Session 21 logged out. Waiting for processes to exit.
Apr 16 23:34:39.936466 systemd-logind[1991]: Removed session 21.
Apr 16 23:34:45.091331 systemd[1]: Started sshd@21-172.31.16.254:22-20.229.252.112:57958.service - OpenSSH per-connection server daemon (20.229.252.112:57958).
Apr 16 23:34:45.991273 sshd[6504]: Accepted publickey for core from 20.229.252.112 port 57958 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:45.993782 sshd-session[6504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:46.008967 systemd-logind[1991]: New session 22 of user core.
Apr 16 23:34:46.014622 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 16 23:34:46.626138 sshd[6507]: Connection closed by 20.229.252.112 port 57958
Apr 16 23:34:46.627638 sshd-session[6504]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:46.635770 systemd[1]: sshd@21-172.31.16.254:22-20.229.252.112:57958.service: Deactivated successfully.
Apr 16 23:34:46.643900 systemd[1]: session-22.scope: Deactivated successfully.
Apr 16 23:34:46.648778 systemd-logind[1991]: Session 22 logged out. Waiting for processes to exit.
Apr 16 23:34:46.652808 systemd-logind[1991]: Removed session 22.
Apr 16 23:34:51.815716 systemd[1]: Started sshd@22-172.31.16.254:22-20.229.252.112:57972.service - OpenSSH per-connection server daemon (20.229.252.112:57972).
Apr 16 23:34:52.726519 sshd[6529]: Accepted publickey for core from 20.229.252.112 port 57972 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:52.728974 sshd-session[6529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:52.738476 systemd-logind[1991]: New session 23 of user core.
Apr 16 23:34:52.746594 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 16 23:34:53.359461 sshd[6532]: Connection closed by 20.229.252.112 port 57972
Apr 16 23:34:53.360592 sshd-session[6529]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:53.369728 systemd[1]: sshd@22-172.31.16.254:22-20.229.252.112:57972.service: Deactivated successfully.
Apr 16 23:34:53.373546 systemd[1]: session-23.scope: Deactivated successfully.
Apr 16 23:34:53.377198 systemd-logind[1991]: Session 23 logged out. Waiting for processes to exit.
Apr 16 23:34:53.380371 systemd-logind[1991]: Removed session 23.
Apr 16 23:34:58.536672 systemd[1]: Started sshd@23-172.31.16.254:22-20.229.252.112:42462.service - OpenSSH per-connection server daemon (20.229.252.112:42462).
Apr 16 23:34:59.431705 sshd[6571]: Accepted publickey for core from 20.229.252.112 port 42462 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:59.434288 sshd-session[6571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:59.443640 systemd-logind[1991]: New session 24 of user core.
Apr 16 23:34:59.449583 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 16 23:35:00.063356 sshd[6580]: Connection closed by 20.229.252.112 port 42462
Apr 16 23:35:00.061460 sshd-session[6571]: pam_unix(sshd:session): session closed for user core
Apr 16 23:35:00.070173 systemd[1]: sshd@23-172.31.16.254:22-20.229.252.112:42462.service: Deactivated successfully.
Apr 16 23:35:00.074891 systemd[1]: session-24.scope: Deactivated successfully.
Apr 16 23:35:00.077629 systemd-logind[1991]: Session 24 logged out. Waiting for processes to exit.
Apr 16 23:35:00.081870 systemd-logind[1991]: Removed session 24.
Apr 16 23:35:15.006463 systemd[1]: cri-containerd-0ebdd1dc05885f833d4757579755df192262bbf432adf75a643624faab4954d1.scope: Deactivated successfully.
Apr 16 23:35:15.008444 systemd[1]: cri-containerd-0ebdd1dc05885f833d4757579755df192262bbf432adf75a643624faab4954d1.scope: Consumed 25.680s CPU time, 122.2M memory peak.
Apr 16 23:35:15.014852 containerd[2014]: time="2026-04-16T23:35:15.014780196Z" level=info msg="received container exit event container_id:\"0ebdd1dc05885f833d4757579755df192262bbf432adf75a643624faab4954d1\" id:\"0ebdd1dc05885f833d4757579755df192262bbf432adf75a643624faab4954d1\" pid:3864 exit_status:1 exited_at:{seconds:1776382515 nanos:14113788}"
Apr 16 23:35:15.057421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ebdd1dc05885f833d4757579755df192262bbf432adf75a643624faab4954d1-rootfs.mount: Deactivated successfully.
Apr 16 23:35:15.377664 systemd[1]: cri-containerd-f85d5df95e91cdba86e50a67e3001a86cccbbe0ae325a62113c91deca55ef70b.scope: Deactivated successfully.
Apr 16 23:35:15.380398 systemd[1]: cri-containerd-f85d5df95e91cdba86e50a67e3001a86cccbbe0ae325a62113c91deca55ef70b.scope: Consumed 5.698s CPU time, 62.4M memory peak, 64K read from disk.
Apr 16 23:35:15.385375 containerd[2014]: time="2026-04-16T23:35:15.385240130Z" level=info msg="received container exit event container_id:\"f85d5df95e91cdba86e50a67e3001a86cccbbe0ae325a62113c91deca55ef70b\" id:\"f85d5df95e91cdba86e50a67e3001a86cccbbe0ae325a62113c91deca55ef70b\" pid:3158 exit_status:1 exited_at:{seconds:1776382515 nanos:384750734}"
Apr 16 23:35:15.446449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f85d5df95e91cdba86e50a67e3001a86cccbbe0ae325a62113c91deca55ef70b-rootfs.mount: Deactivated successfully.
Apr 16 23:35:15.749238 kubelet[3456]: I0416 23:35:15.749165 3456 scope.go:117] "RemoveContainer" containerID="0ebdd1dc05885f833d4757579755df192262bbf432adf75a643624faab4954d1"
Apr 16 23:35:15.757326 kubelet[3456]: I0416 23:35:15.757142 3456 scope.go:117] "RemoveContainer" containerID="f85d5df95e91cdba86e50a67e3001a86cccbbe0ae325a62113c91deca55ef70b"
Apr 16 23:35:15.766571 containerd[2014]: time="2026-04-16T23:35:15.766492444Z" level=info msg="CreateContainer within sandbox \"4afb4c4a236adfc7eb47e877d31fe278ff5c6e152b95a3d3a62f57fa7720fa81\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 16 23:35:15.768355 containerd[2014]: time="2026-04-16T23:35:15.767545456Z" level=info msg="CreateContainer within sandbox \"b3c87053ab256896725e9611a8f6167ca098323e5063ecf05c209239b1a073cf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 16 23:35:15.795915 containerd[2014]: time="2026-04-16T23:35:15.795853288Z" level=info msg="Container 5fad369decdcb185e5144fd4cf059d66add3b1ea1200d2c03ecdc2a1e3bf815a: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:35:15.813326 containerd[2014]: time="2026-04-16T23:35:15.811142848Z" level=info msg="Container 5106e339eaad0c7ab2a899e533ec821ea07367e184c444104e527665b155244c: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:35:15.824211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1291511138.mount: Deactivated successfully.
Apr 16 23:35:15.829334 containerd[2014]: time="2026-04-16T23:35:15.829256980Z" level=info msg="CreateContainer within sandbox \"4afb4c4a236adfc7eb47e877d31fe278ff5c6e152b95a3d3a62f57fa7720fa81\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"5fad369decdcb185e5144fd4cf059d66add3b1ea1200d2c03ecdc2a1e3bf815a\""
Apr 16 23:35:15.830615 containerd[2014]: time="2026-04-16T23:35:15.830557696Z" level=info msg="StartContainer for \"5fad369decdcb185e5144fd4cf059d66add3b1ea1200d2c03ecdc2a1e3bf815a\""
Apr 16 23:35:15.833116 containerd[2014]: time="2026-04-16T23:35:15.832982836Z" level=info msg="connecting to shim 5fad369decdcb185e5144fd4cf059d66add3b1ea1200d2c03ecdc2a1e3bf815a" address="unix:///run/containerd/s/ef3d757f800b0cb705a2927b865122b5891d6429ed7ec39a4b4e1d6558e615cb" protocol=ttrpc version=3
Apr 16 23:35:15.844794 containerd[2014]: time="2026-04-16T23:35:15.844715932Z" level=info msg="CreateContainer within sandbox \"b3c87053ab256896725e9611a8f6167ca098323e5063ecf05c209239b1a073cf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"5106e339eaad0c7ab2a899e533ec821ea07367e184c444104e527665b155244c\""
Apr 16 23:35:15.845896 containerd[2014]: time="2026-04-16T23:35:15.845722312Z" level=info msg="StartContainer for \"5106e339eaad0c7ab2a899e533ec821ea07367e184c444104e527665b155244c\""
Apr 16 23:35:15.852277 containerd[2014]: time="2026-04-16T23:35:15.852079541Z" level=info msg="connecting to shim 5106e339eaad0c7ab2a899e533ec821ea07367e184c444104e527665b155244c" address="unix:///run/containerd/s/459c810279a5ca6049469fdb4b5d799521a42c8d66311d803c0b65ecf8b019ab" protocol=ttrpc version=3
Apr 16 23:35:15.882832 systemd[1]: Started cri-containerd-5fad369decdcb185e5144fd4cf059d66add3b1ea1200d2c03ecdc2a1e3bf815a.scope - libcontainer container 5fad369decdcb185e5144fd4cf059d66add3b1ea1200d2c03ecdc2a1e3bf815a.
Apr 16 23:35:15.907768 systemd[1]: Started cri-containerd-5106e339eaad0c7ab2a899e533ec821ea07367e184c444104e527665b155244c.scope - libcontainer container 5106e339eaad0c7ab2a899e533ec821ea07367e184c444104e527665b155244c.
Apr 16 23:35:16.015166 containerd[2014]: time="2026-04-16T23:35:16.012611077Z" level=info msg="StartContainer for \"5fad369decdcb185e5144fd4cf059d66add3b1ea1200d2c03ecdc2a1e3bf815a\" returns successfully"
Apr 16 23:35:16.090956 containerd[2014]: time="2026-04-16T23:35:16.090884330Z" level=info msg="StartContainer for \"5106e339eaad0c7ab2a899e533ec821ea07367e184c444104e527665b155244c\" returns successfully"
Apr 16 23:35:18.785446 kubelet[3456]: E0416 23:35:18.784266 3456 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.254:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-254?timeout=10s\": context deadline exceeded"
Apr 16 23:35:20.948789 systemd[1]: cri-containerd-e97666a69618427cef37b747f13216798c6801c8f14987c02a4fd147ca0f22fd.scope: Deactivated successfully.
Apr 16 23:35:20.951151 systemd[1]: cri-containerd-e97666a69618427cef37b747f13216798c6801c8f14987c02a4fd147ca0f22fd.scope: Consumed 3.348s CPU time, 22.5M memory peak, 176K read from disk.
Apr 16 23:35:20.954833 containerd[2014]: time="2026-04-16T23:35:20.954735142Z" level=info msg="received container exit event container_id:\"e97666a69618427cef37b747f13216798c6801c8f14987c02a4fd147ca0f22fd\" id:\"e97666a69618427cef37b747f13216798c6801c8f14987c02a4fd147ca0f22fd\" pid:3187 exit_status:1 exited_at:{seconds:1776382520 nanos:954097990}"
Apr 16 23:35:20.999285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e97666a69618427cef37b747f13216798c6801c8f14987c02a4fd147ca0f22fd-rootfs.mount: Deactivated successfully.
Apr 16 23:35:21.788121 kubelet[3456]: I0416 23:35:21.787793 3456 scope.go:117] "RemoveContainer" containerID="e97666a69618427cef37b747f13216798c6801c8f14987c02a4fd147ca0f22fd"
Apr 16 23:35:21.792125 containerd[2014]: time="2026-04-16T23:35:21.792069742Z" level=info msg="CreateContainer within sandbox \"b479b92e6fe8b811ed7b49a1ba42b333ee55b09f62bf3f3470a1715e3825a909\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 16 23:35:21.810653 containerd[2014]: time="2026-04-16T23:35:21.810588226Z" level=info msg="Container 1ef710a75b419d3dcbc8413c1a4b4eed43bd3832551f5bd157da47332e7906ba: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:35:21.832140 containerd[2014]: time="2026-04-16T23:35:21.832067842Z" level=info msg="CreateContainer within sandbox \"b479b92e6fe8b811ed7b49a1ba42b333ee55b09f62bf3f3470a1715e3825a909\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1ef710a75b419d3dcbc8413c1a4b4eed43bd3832551f5bd157da47332e7906ba\""
Apr 16 23:35:21.832903 containerd[2014]: time="2026-04-16T23:35:21.832821010Z" level=info msg="StartContainer for \"1ef710a75b419d3dcbc8413c1a4b4eed43bd3832551f5bd157da47332e7906ba\""
Apr 16 23:35:21.836377 containerd[2014]: time="2026-04-16T23:35:21.836227810Z" level=info msg="connecting to shim 1ef710a75b419d3dcbc8413c1a4b4eed43bd3832551f5bd157da47332e7906ba" address="unix:///run/containerd/s/6abd28cb5e9586cdc84eb3cf29eb602a30bb24b8ad87b20737d977d5d34b896b" protocol=ttrpc version=3
Apr 16 23:35:21.876687 systemd[1]: Started cri-containerd-1ef710a75b419d3dcbc8413c1a4b4eed43bd3832551f5bd157da47332e7906ba.scope - libcontainer container 1ef710a75b419d3dcbc8413c1a4b4eed43bd3832551f5bd157da47332e7906ba.
Apr 16 23:35:21.959320 containerd[2014]: time="2026-04-16T23:35:21.959245895Z" level=info msg="StartContainer for \"1ef710a75b419d3dcbc8413c1a4b4eed43bd3832551f5bd157da47332e7906ba\" returns successfully"
Apr 16 23:35:27.619941 systemd[1]: cri-containerd-5fad369decdcb185e5144fd4cf059d66add3b1ea1200d2c03ecdc2a1e3bf815a.scope: Deactivated successfully.
Apr 16 23:35:27.623286 containerd[2014]: time="2026-04-16T23:35:27.622980399Z" level=info msg="received container exit event container_id:\"5fad369decdcb185e5144fd4cf059d66add3b1ea1200d2c03ecdc2a1e3bf815a\" id:\"5fad369decdcb185e5144fd4cf059d66add3b1ea1200d2c03ecdc2a1e3bf815a\" pid:6743 exit_status:1 exited_at:{seconds:1776382527 nanos:622247931}"
Apr 16 23:35:27.664619 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fad369decdcb185e5144fd4cf059d66add3b1ea1200d2c03ecdc2a1e3bf815a-rootfs.mount: Deactivated successfully.
Apr 16 23:35:27.817861 kubelet[3456]: I0416 23:35:27.817817 3456 scope.go:117] "RemoveContainer" containerID="0ebdd1dc05885f833d4757579755df192262bbf432adf75a643624faab4954d1"
Apr 16 23:35:27.819680 kubelet[3456]: I0416 23:35:27.818595 3456 scope.go:117] "RemoveContainer" containerID="5fad369decdcb185e5144fd4cf059d66add3b1ea1200d2c03ecdc2a1e3bf815a"
Apr 16 23:35:27.819680 kubelet[3456]: E0416 23:35:27.819108 3456 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6bf85f8dd-zl4cx_tigera-operator(59d1f7b9-b64f-49ed-bba0-c1c172e38133)\"" pod="tigera-operator/tigera-operator-6bf85f8dd-zl4cx" podUID="59d1f7b9-b64f-49ed-bba0-c1c172e38133"
Apr 16 23:35:27.822157 containerd[2014]: time="2026-04-16T23:35:27.822103240Z" level=info msg="RemoveContainer for \"0ebdd1dc05885f833d4757579755df192262bbf432adf75a643624faab4954d1\""
Apr 16 23:35:27.833006 containerd[2014]: time="2026-04-16T23:35:27.832815100Z" level=info msg="RemoveContainer for \"0ebdd1dc05885f833d4757579755df192262bbf432adf75a643624faab4954d1\" returns successfully"