Apr 16 23:31:38.213611 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Apr 16 23:31:38.213661 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Apr 16 22:10:49 -00 2026
Apr 16 23:31:38.213688 kernel: KASLR disabled due to lack of seed
Apr 16 23:31:38.213706 kernel: efi: EFI v2.7 by EDK II
Apr 16 23:31:38.213723 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78557598
Apr 16 23:31:38.213739 kernel: secureboot: Secure boot disabled
Apr 16 23:31:38.213757 kernel: ACPI: Early table checksum verification disabled
Apr 16 23:31:38.213772 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Apr 16 23:31:38.213788 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 16 23:31:38.213804 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 16 23:31:38.213820 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 16 23:31:38.213840 kernel: ACPI: FACS 0x0000000078630000 000040
Apr 16 23:31:38.213855 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 16 23:31:38.213872 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Apr 16 23:31:38.213895 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Apr 16 23:31:38.213911 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Apr 16 23:31:38.213932 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 16 23:31:38.213949 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Apr 16 23:31:38.213966 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Apr 16 23:31:38.213983 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Apr 16 23:31:38.214000 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Apr 16 23:31:38.214016 kernel: printk: legacy bootconsole [uart0] enabled
Apr 16 23:31:38.214032 kernel: ACPI: Use ACPI SPCR as default console: Yes
Apr 16 23:31:38.214049 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 16 23:31:38.214066 kernel: NODE_DATA(0) allocated [mem 0x4b584ea00-0x4b5855fff]
Apr 16 23:31:38.214082 kernel: Zone ranges:
Apr 16 23:31:38.214098 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Apr 16 23:31:38.214120 kernel: DMA32 empty
Apr 16 23:31:38.214137 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Apr 16 23:31:38.214153 kernel: Device empty
Apr 16 23:31:38.214171 kernel: Movable zone start for each node
Apr 16 23:31:38.214188 kernel: Early memory node ranges
Apr 16 23:31:38.214205 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Apr 16 23:31:38.214222 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Apr 16 23:31:38.214239 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Apr 16 23:31:38.214255 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Apr 16 23:31:38.214271 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Apr 16 23:31:38.214287 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Apr 16 23:31:38.214303 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Apr 16 23:31:38.214324 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Apr 16 23:31:38.214377 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Apr 16 23:31:38.214405 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Apr 16 23:31:38.214423 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Apr 16 23:31:38.214442 kernel: psci: probing for conduit method from ACPI.
Apr 16 23:31:38.214466 kernel: psci: PSCIv1.0 detected in firmware.
Apr 16 23:31:38.214483 kernel: psci: Using standard PSCI v0.2 function IDs
Apr 16 23:31:38.214500 kernel: psci: Trusted OS migration not required
Apr 16 23:31:38.214517 kernel: psci: SMC Calling Convention v1.1
Apr 16 23:31:38.214534 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Apr 16 23:31:38.214551 kernel: percpu: Embedded 33 pages/cpu s97752 r8192 d29224 u135168
Apr 16 23:31:38.214568 kernel: pcpu-alloc: s97752 r8192 d29224 u135168 alloc=33*4096
Apr 16 23:31:38.214585 kernel: pcpu-alloc: [0] 0 [0] 1
Apr 16 23:31:38.214603 kernel: Detected PIPT I-cache on CPU0
Apr 16 23:31:38.214620 kernel: CPU features: detected: GIC system register CPU interface
Apr 16 23:31:38.214637 kernel: CPU features: detected: Spectre-v2
Apr 16 23:31:38.214658 kernel: CPU features: detected: Spectre-v3a
Apr 16 23:31:38.214675 kernel: CPU features: detected: Spectre-BHB
Apr 16 23:31:38.214692 kernel: CPU features: detected: ARM erratum 1742098
Apr 16 23:31:38.214709 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Apr 16 23:31:38.214726 kernel: alternatives: applying boot alternatives
Apr 16 23:31:38.214745 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c4961845f9869114226296d88644496bf9e4629823927a5e8ae22de79f1c7b59
Apr 16 23:31:38.214763 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 23:31:38.214780 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 23:31:38.214797 kernel: Fallback order for Node 0: 0
Apr 16 23:31:38.214814 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Apr 16 23:31:38.214831 kernel: Policy zone: Normal
Apr 16 23:31:38.214852 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 23:31:38.214870 kernel: software IO TLB: area num 2.
Apr 16 23:31:38.214887 kernel: software IO TLB: mapped [mem 0x0000000074557000-0x0000000078557000] (64MB)
Apr 16 23:31:38.214903 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 16 23:31:38.214920 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 23:31:38.214938 kernel: rcu: RCU event tracing is enabled.
Apr 16 23:31:38.214955 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 16 23:31:38.214973 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 23:31:38.214991 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 23:31:38.215008 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 23:31:38.215025 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 16 23:31:38.215047 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:31:38.215064 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 16 23:31:38.215081 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Apr 16 23:31:38.215098 kernel: GICv3: 96 SPIs implemented
Apr 16 23:31:38.215115 kernel: GICv3: 0 Extended SPIs implemented
Apr 16 23:31:38.215132 kernel: Root IRQ handler: gic_handle_irq
Apr 16 23:31:38.215149 kernel: GICv3: GICv3 features: 16 PPIs
Apr 16 23:31:38.215166 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Apr 16 23:31:38.215182 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Apr 16 23:31:38.215199 kernel: ITS [mem 0x10080000-0x1009ffff]
Apr 16 23:31:38.215217 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Apr 16 23:31:38.215235 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Apr 16 23:31:38.215257 kernel: GICv3: using LPI property table @0x0000000400110000
Apr 16 23:31:38.215274 kernel: ITS: Using hypervisor restricted LPI range [128]
Apr 16 23:31:38.215291 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Apr 16 23:31:38.215308 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 23:31:38.215325 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Apr 16 23:31:38.215342 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Apr 16 23:31:38.215416 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Apr 16 23:31:38.215435 kernel: Console: colour dummy device 80x25
Apr 16 23:31:38.215453 kernel: printk: legacy console [tty1] enabled
Apr 16 23:31:38.215471 kernel: ACPI: Core revision 20240827
Apr 16 23:31:38.215488 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Apr 16 23:31:38.215515 kernel: pid_max: default: 32768 minimum: 301
Apr 16 23:31:38.215533 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 16 23:31:38.215604 kernel: landlock: Up and running.
Apr 16 23:31:38.215665 kernel: SELinux: Initializing.
Apr 16 23:31:38.216056 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 23:31:38.216148 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 23:31:38.216169 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 23:31:38.216188 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 23:31:38.216214 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 16 23:31:38.216232 kernel: Remapping and enabling EFI services.
Apr 16 23:31:38.216249 kernel: smp: Bringing up secondary CPUs ...
Apr 16 23:31:38.216266 kernel: Detected PIPT I-cache on CPU1
Apr 16 23:31:38.216305 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Apr 16 23:31:38.216325 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Apr 16 23:31:38.216343 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Apr 16 23:31:38.216444 kernel: smp: Brought up 1 node, 2 CPUs
Apr 16 23:31:38.216462 kernel: SMP: Total of 2 processors activated.
Apr 16 23:31:38.216486 kernel: CPU: All CPU(s) started at EL1
Apr 16 23:31:38.216514 kernel: CPU features: detected: 32-bit EL0 Support
Apr 16 23:31:38.216533 kernel: CPU features: detected: 32-bit EL1 Support
Apr 16 23:31:38.216555 kernel: CPU features: detected: CRC32 instructions
Apr 16 23:31:38.216573 kernel: alternatives: applying system-wide alternatives
Apr 16 23:31:38.216593 kernel: Memory: 3796264K/4030464K available (11200K kernel code, 2458K rwdata, 9092K rodata, 39552K init, 1038K bss, 212848K reserved, 16384K cma-reserved)
Apr 16 23:31:38.216612 kernel: devtmpfs: initialized
Apr 16 23:31:38.216630 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 23:31:38.216653 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 16 23:31:38.216671 kernel: 16864 pages in range for non-PLT usage
Apr 16 23:31:38.216689 kernel: 508384 pages in range for PLT usage
Apr 16 23:31:38.216707 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 23:31:38.216724 kernel: SMBIOS 3.0.0 present.
Apr 16 23:31:38.216742 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Apr 16 23:31:38.216760 kernel: DMI: Memory slots populated: 0/0
Apr 16 23:31:38.216778 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 23:31:38.216796 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Apr 16 23:31:38.216818 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Apr 16 23:31:38.216837 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Apr 16 23:31:38.216855 kernel: audit: initializing netlink subsys (disabled)
Apr 16 23:31:38.216873 kernel: audit: type=2000 audit(0.227:1): state=initialized audit_enabled=0 res=1
Apr 16 23:31:38.216891 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 23:31:38.216909 kernel: cpuidle: using governor menu
Apr 16 23:31:38.216927 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Apr 16 23:31:38.216945 kernel: ASID allocator initialised with 65536 entries
Apr 16 23:31:38.216963 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 23:31:38.216986 kernel: Serial: AMBA PL011 UART driver
Apr 16 23:31:38.217004 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 23:31:38.217022 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 23:31:38.217040 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Apr 16 23:31:38.217058 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Apr 16 23:31:38.217077 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 23:31:38.217095 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 23:31:38.217114 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Apr 16 23:31:38.217132 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Apr 16 23:31:38.217154 kernel: ACPI: Added _OSI(Module Device)
Apr 16 23:31:38.217171 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 23:31:38.217189 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 23:31:38.217207 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 23:31:38.217225 kernel: ACPI: Interpreter enabled
Apr 16 23:31:38.217243 kernel: ACPI: Using GIC for interrupt routing
Apr 16 23:31:38.217261 kernel: ACPI: MCFG table detected, 1 entries
Apr 16 23:31:38.217279 kernel: ACPI: CPU0 has been hot-added
Apr 16 23:31:38.217296 kernel: ACPI: CPU1 has been hot-added
Apr 16 23:31:38.217319 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Apr 16 23:31:38.217706 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 23:31:38.217917 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Apr 16 23:31:38.218109 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Apr 16 23:31:38.218299 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Apr 16 23:31:38.218539 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Apr 16 23:31:38.218568 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Apr 16 23:31:38.218600 kernel: acpiphp: Slot [1] registered
Apr 16 23:31:38.218619 kernel: acpiphp: Slot [2] registered
Apr 16 23:31:38.218638 kernel: acpiphp: Slot [3] registered
Apr 16 23:31:38.218657 kernel: acpiphp: Slot [4] registered
Apr 16 23:31:38.218675 kernel: acpiphp: Slot [5] registered
Apr 16 23:31:38.218693 kernel: acpiphp: Slot [6] registered
Apr 16 23:31:38.218710 kernel: acpiphp: Slot [7] registered
Apr 16 23:31:38.218728 kernel: acpiphp: Slot [8] registered
Apr 16 23:31:38.218746 kernel: acpiphp: Slot [9] registered
Apr 16 23:31:38.218764 kernel: acpiphp: Slot [10] registered
Apr 16 23:31:38.218787 kernel: acpiphp: Slot [11] registered
Apr 16 23:31:38.218806 kernel: acpiphp: Slot [12] registered
Apr 16 23:31:38.218824 kernel: acpiphp: Slot [13] registered
Apr 16 23:31:38.218842 kernel: acpiphp: Slot [14] registered
Apr 16 23:31:38.218860 kernel: acpiphp: Slot [15] registered
Apr 16 23:31:38.218878 kernel: acpiphp: Slot [16] registered
Apr 16 23:31:38.218895 kernel: acpiphp: Slot [17] registered
Apr 16 23:31:38.218913 kernel: acpiphp: Slot [18] registered
Apr 16 23:31:38.218930 kernel: acpiphp: Slot [19] registered
Apr 16 23:31:38.218953 kernel: acpiphp: Slot [20] registered
Apr 16 23:31:38.218973 kernel: acpiphp: Slot [21] registered
Apr 16 23:31:38.218991 kernel: acpiphp: Slot [22] registered
Apr 16 23:31:38.219008 kernel: acpiphp: Slot [23] registered
Apr 16 23:31:38.219026 kernel: acpiphp: Slot [24] registered
Apr 16 23:31:38.219044 kernel: acpiphp: Slot [25] registered
Apr 16 23:31:38.219063 kernel: acpiphp: Slot [26] registered
Apr 16 23:31:38.219081 kernel: acpiphp: Slot [27] registered
Apr 16 23:31:38.219099 kernel: acpiphp: Slot [28] registered
Apr 16 23:31:38.219117 kernel: acpiphp: Slot [29] registered
Apr 16 23:31:38.219139 kernel: acpiphp: Slot [30] registered
Apr 16 23:31:38.219157 kernel: acpiphp: Slot [31] registered
Apr 16 23:31:38.219174 kernel: PCI host bridge to bus 0000:00
Apr 16 23:31:38.219436 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Apr 16 23:31:38.219624 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Apr 16 23:31:38.219815 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Apr 16 23:31:38.219987 kernel: pci_bus 0000:00: root bus resource [bus 00]
Apr 16 23:31:38.220223 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Apr 16 23:31:38.220584 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Apr 16 23:31:38.220817 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Apr 16 23:31:38.221035 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Apr 16 23:31:38.221248 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Apr 16 23:31:38.221508 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 16 23:31:38.221771 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Apr 16 23:31:38.221990 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Apr 16 23:31:38.222200 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Apr 16 23:31:38.222486 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Apr 16 23:31:38.222696 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Apr 16 23:31:38.222879 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Apr 16 23:31:38.223054 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Apr 16 23:31:38.223240 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Apr 16 23:31:38.223265 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Apr 16 23:31:38.223284 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Apr 16 23:31:38.223302 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Apr 16 23:31:38.223320 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Apr 16 23:31:38.223339 kernel: iommu: Default domain type: Translated
Apr 16 23:31:38.223395 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Apr 16 23:31:38.223416 kernel: efivars: Registered efivars operations
Apr 16 23:31:38.223434 kernel: vgaarb: loaded
Apr 16 23:31:38.223462 kernel: clocksource: Switched to clocksource arch_sys_counter
Apr 16 23:31:38.223480 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 23:31:38.223498 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 23:31:38.223516 kernel: pnp: PnP ACPI init
Apr 16 23:31:38.223756 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Apr 16 23:31:38.223784 kernel: pnp: PnP ACPI: found 1 devices
Apr 16 23:31:38.223802 kernel: NET: Registered PF_INET protocol family
Apr 16 23:31:38.223820 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 23:31:38.223845 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 23:31:38.223863 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 23:31:38.223882 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 23:31:38.223900 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 23:31:38.223917 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 23:31:38.223935 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 23:31:38.223953 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 23:31:38.223971 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 23:31:38.223989 kernel: PCI: CLS 0 bytes, default 64
Apr 16 23:31:38.224010 kernel: kvm [1]: HYP mode not available
Apr 16 23:31:38.224028 kernel: Initialise system trusted keyrings
Apr 16 23:31:38.224046 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 16 23:31:38.224063 kernel: Key type asymmetric registered
Apr 16 23:31:38.224081 kernel: Asymmetric key parser 'x509' registered
Apr 16 23:31:38.224099 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Apr 16 23:31:38.224117 kernel: io scheduler mq-deadline registered
Apr 16 23:31:38.224135 kernel: io scheduler kyber registered
Apr 16 23:31:38.224153 kernel: io scheduler bfq registered
Apr 16 23:31:38.224439 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Apr 16 23:31:38.224473 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Apr 16 23:31:38.224492 kernel: ACPI: button: Power Button [PWRB]
Apr 16 23:31:38.224511 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Apr 16 23:31:38.224529 kernel: ACPI: button: Sleep Button [SLPB]
Apr 16 23:31:38.224548 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 23:31:38.224567 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Apr 16 23:31:38.224807 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Apr 16 23:31:38.224853 kernel: printk: legacy console [ttyS0] disabled
Apr 16 23:31:38.224873 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Apr 16 23:31:38.224892 kernel: printk: legacy console [ttyS0] enabled
Apr 16 23:31:38.224911 kernel: printk: legacy bootconsole [uart0] disabled
Apr 16 23:31:38.224929 kernel: thunder_xcv, ver 1.0
Apr 16 23:31:38.224947 kernel: thunder_bgx, ver 1.0
Apr 16 23:31:38.224966 kernel: nicpf, ver 1.0
Apr 16 23:31:38.224985 kernel: nicvf, ver 1.0
Apr 16 23:31:38.225263 kernel: rtc-efi rtc-efi.0: registered as rtc0
Apr 16 23:31:38.225549 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-04-16T23:31:37 UTC (1776382297)
Apr 16 23:31:38.225581 kernel: hid: raw HID events driver (C) Jiri Kosina
Apr 16 23:31:38.225600 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Apr 16 23:31:38.225702 kernel: NET: Registered PF_INET6 protocol family
Apr 16 23:31:38.226039 kernel: watchdog: NMI not fully supported
Apr 16 23:31:38.226126 kernel: watchdog: Hard watchdog permanently disabled
Apr 16 23:31:38.226533 kernel: Segment Routing with IPv6
Apr 16 23:31:38.226556 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 23:31:38.226941 kernel: NET: Registered PF_PACKET protocol family
Apr 16 23:31:38.227176 kernel: Key type dns_resolver registered
Apr 16 23:31:38.227370 kernel: registered taskstats version 1
Apr 16 23:31:38.227610 kernel: Loading compiled-in X.509 certificates
Apr 16 23:31:38.227792 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 4acad53138393591155ecb80320b4c1550e344f8'
Apr 16 23:31:38.228149 kernel: Demotion targets for Node 0: null
Apr 16 23:31:38.228260 kernel: Key type .fscrypt registered
Apr 16 23:31:38.228392 kernel: Key type fscrypt-provisioning registered
Apr 16 23:31:38.228412 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 16 23:31:38.228432 kernel: ima: Allocated hash algorithm: sha1
Apr 16 23:31:38.228464 kernel: ima: No architecture policies found
Apr 16 23:31:38.228483 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Apr 16 23:31:38.228501 kernel: clk: Disabling unused clocks
Apr 16 23:31:38.228520 kernel: PM: genpd: Disabling unused power domains
Apr 16 23:31:38.228539 kernel: Warning: unable to open an initial console.
Apr 16 23:31:38.228559 kernel: Freeing unused kernel memory: 39552K
Apr 16 23:31:38.228578 kernel: Run /init as init process
Apr 16 23:31:38.228597 kernel: with arguments:
Apr 16 23:31:38.228615 kernel: /init
Apr 16 23:31:38.228640 kernel: with environment:
Apr 16 23:31:38.228659 kernel: HOME=/
Apr 16 23:31:38.228678 kernel: TERM=linux
Apr 16 23:31:38.228699 systemd[1]: Successfully made /usr/ read-only.
Apr 16 23:31:38.228724 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 23:31:38.228746 systemd[1]: Detected virtualization amazon.
Apr 16 23:31:38.228766 systemd[1]: Detected architecture arm64.
Apr 16 23:31:38.228791 systemd[1]: Running in initrd.
Apr 16 23:31:38.228811 systemd[1]: No hostname configured, using default hostname.
Apr 16 23:31:38.228831 systemd[1]: Hostname set to .
Apr 16 23:31:38.228851 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 23:31:38.228870 systemd[1]: Queued start job for default target initrd.target.
Apr 16 23:31:38.228892 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 23:31:38.228911 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 23:31:38.228933 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 23:31:38.228962 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 23:31:38.228984 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 23:31:38.229006 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 23:31:38.229028 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 23:31:38.229049 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 23:31:38.229070 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 23:31:38.229090 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 23:31:38.229114 systemd[1]: Reached target paths.target - Path Units.
Apr 16 23:31:38.229134 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 23:31:38.229153 systemd[1]: Reached target swap.target - Swaps.
Apr 16 23:31:38.229173 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 23:31:38.229192 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 23:31:38.229211 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 23:31:38.229231 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 23:31:38.229251 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 16 23:31:38.229271 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 23:31:38.229296 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 23:31:38.229317 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 23:31:38.229337 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 23:31:38.229402 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 16 23:31:38.229427 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 23:31:38.229447 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 16 23:31:38.229467 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 16 23:31:38.229487 systemd[1]: Starting systemd-fsck-usr.service...
Apr 16 23:31:38.229515 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 23:31:38.229535 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 23:31:38.229554 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:31:38.229573 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 16 23:31:38.229594 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 23:31:38.229620 systemd[1]: Finished systemd-fsck-usr.service.
Apr 16 23:31:38.229640 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 16 23:31:38.229660 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 16 23:31:38.229679 kernel: Bridge firewalling registered
Apr 16 23:31:38.229699 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 23:31:38.229788 systemd-journald[259]: Collecting audit messages is disabled.
Apr 16 23:31:38.229844 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 23:31:38.229866 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:31:38.229886 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 16 23:31:38.229907 systemd-journald[259]: Journal started
Apr 16 23:31:38.229952 systemd-journald[259]: Runtime Journal (/run/log/journal/ec277fa927ab751a584a8f9faf4b6d2b) is 8M, max 75.3M, 67.3M free.
Apr 16 23:31:38.160492 systemd-modules-load[260]: Inserted module 'overlay'
Apr 16 23:31:38.198301 systemd-modules-load[260]: Inserted module 'br_netfilter'
Apr 16 23:31:38.242397 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 23:31:38.251480 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 16 23:31:38.260770 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 23:31:38.271741 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 23:31:38.289141 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 23:31:38.315030 systemd-tmpfiles[282]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 16 23:31:38.315869 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 23:31:38.336476 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 23:31:38.337636 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 23:31:38.349934 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 16 23:31:38.373562 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 16 23:31:38.408237 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c4961845f9869114226296d88644496bf9e4629823927a5e8ae22de79f1c7b59
Apr 16 23:31:38.477205 systemd-resolved[300]: Positive Trust Anchors:
Apr 16 23:31:38.477240 systemd-resolved[300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 23:31:38.477302 systemd-resolved[300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 23:31:38.576390 kernel: SCSI subsystem initialized
Apr 16 23:31:38.584389 kernel: Loading iSCSI transport class v2.0-870.
Apr 16 23:31:38.597697 kernel: iscsi: registered transport (tcp)
Apr 16 23:31:38.620470 kernel: iscsi: registered transport (qla4xxx)
Apr 16 23:31:38.620562 kernel: QLogic iSCSI HBA Driver
Apr 16 23:31:38.658547 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 23:31:38.683789 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 23:31:38.695605 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 23:31:38.767027 kernel: random: crng init done Apr 16 23:31:38.766832 systemd-resolved[300]: Defaulting to hostname 'linux'. Apr 16 23:31:38.770687 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 23:31:38.775661 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 23:31:38.806337 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 16 23:31:38.814399 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 16 23:31:38.904414 kernel: raid6: neonx8 gen() 6496 MB/s Apr 16 23:31:38.922406 kernel: raid6: neonx4 gen() 6459 MB/s Apr 16 23:31:38.939407 kernel: raid6: neonx2 gen() 5419 MB/s Apr 16 23:31:38.957405 kernel: raid6: neonx1 gen() 3928 MB/s Apr 16 23:31:38.974421 kernel: raid6: int64x8 gen() 3583 MB/s Apr 16 23:31:38.992418 kernel: raid6: int64x4 gen() 3670 MB/s Apr 16 23:31:39.010458 kernel: raid6: int64x2 gen() 3502 MB/s Apr 16 23:31:39.028542 kernel: raid6: int64x1 gen() 2715 MB/s Apr 16 23:31:39.028626 kernel: raid6: using algorithm neonx8 gen() 6496 MB/s Apr 16 23:31:39.047860 kernel: raid6: .... xor() 4675 MB/s, rmw enabled Apr 16 23:31:39.047948 kernel: raid6: using neon recovery algorithm Apr 16 23:31:39.056409 kernel: xor: measuring software checksum speed Apr 16 23:31:39.058870 kernel: 8regs : 11569 MB/sec Apr 16 23:31:39.058940 kernel: 32regs : 12767 MB/sec Apr 16 23:31:39.060456 kernel: arm64_neon : 9152 MB/sec Apr 16 23:31:39.060536 kernel: xor: using function: 32regs (12767 MB/sec) Apr 16 23:31:39.159410 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 16 23:31:39.175449 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 16 23:31:39.189986 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 23:31:39.246838 systemd-udevd[508]: Using default interface naming scheme 'v255'. 
Apr 16 23:31:39.259499 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 23:31:39.276706 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 16 23:31:39.326140 dracut-pre-trigger[514]: rd.md=0: removing MD RAID activation Apr 16 23:31:39.383482 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 23:31:39.386522 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 23:31:39.520769 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 23:31:39.527571 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 16 23:31:39.693392 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Apr 16 23:31:39.696404 kernel: nvme nvme0: pci function 0000:00:04.0 Apr 16 23:31:39.713404 kernel: nvme nvme0: 2/0/0 default/read/poll queues Apr 16 23:31:39.739543 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 16 23:31:39.739577 kernel: GPT:9289727 != 33554431 Apr 16 23:31:39.739603 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 16 23:31:39.739628 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 16 23:31:39.739667 kernel: GPT:9289727 != 33554431 Apr 16 23:31:39.739691 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 16 23:31:39.739716 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 16 23:31:39.739740 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Apr 16 23:31:39.745015 kernel: ena 0000:00:05.0: ENA device version: 0.10 Apr 16 23:31:39.745421 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Apr 16 23:31:39.763203 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:5d:f9:ad:70:99 Apr 16 23:31:39.762743 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 23:31:39.763042 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 16 23:31:39.772629 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 23:31:39.778183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 23:31:39.785169 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 16 23:31:39.789445 (udev-worker)[578]: Network interface NamePolicy= disabled on kernel command line. Apr 16 23:31:39.816436 kernel: nvme nvme0: using unchecked data buffer Apr 16 23:31:39.848170 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 23:31:39.971049 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Apr 16 23:31:40.054874 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 16 23:31:40.061689 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 16 23:31:40.087172 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Apr 16 23:31:40.090506 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Apr 16 23:31:40.123937 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Apr 16 23:31:40.131816 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 23:31:40.135395 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 23:31:40.141975 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 23:31:40.152830 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 16 23:31:40.164816 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 16 23:31:40.192224 disk-uuid[688]: Primary Header is updated. Apr 16 23:31:40.192224 disk-uuid[688]: Secondary Entries is updated. 
Apr 16 23:31:40.192224 disk-uuid[688]: Secondary Header is updated. Apr 16 23:31:40.218397 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 16 23:31:40.227736 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 16 23:31:41.233428 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 16 23:31:41.237456 disk-uuid[689]: The operation has completed successfully. Apr 16 23:31:41.427666 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 16 23:31:41.428298 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 16 23:31:41.519453 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 16 23:31:41.546463 sh[957]: Success Apr 16 23:31:41.576413 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 16 23:31:41.576495 kernel: device-mapper: uevent: version 1.0.3 Apr 16 23:31:41.581420 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 16 23:31:41.594448 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Apr 16 23:31:41.705567 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 16 23:31:41.712543 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 16 23:31:41.739299 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 16 23:31:41.760405 kernel: BTRFS: device fsid 10cedb9e-43f1-4d98-9b55-3b84c3a61868 devid 1 transid 33 /dev/mapper/usr (254:0) scanned by mount (980) Apr 16 23:31:41.766031 kernel: BTRFS info (device dm-0): first mount of filesystem 10cedb9e-43f1-4d98-9b55-3b84c3a61868 Apr 16 23:31:41.766100 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 16 23:31:41.873578 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Apr 16 23:31:41.873658 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 16 23:31:41.875110 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 16 23:31:41.899925 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 16 23:31:41.904196 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 16 23:31:41.907443 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 16 23:31:41.909833 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 16 23:31:41.913806 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 16 23:31:41.975386 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1012) Apr 16 23:31:41.980515 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d Apr 16 23:31:41.980601 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 16 23:31:41.998868 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 16 23:31:41.998946 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Apr 16 23:31:42.008419 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d Apr 16 23:31:42.011238 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Apr 16 23:31:42.018303 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 16 23:31:42.126634 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 23:31:42.135641 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 23:31:42.221682 systemd-networkd[1149]: lo: Link UP Apr 16 23:31:42.221705 systemd-networkd[1149]: lo: Gained carrier Apr 16 23:31:42.227704 systemd-networkd[1149]: Enumeration completed Apr 16 23:31:42.228105 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 23:31:42.232469 systemd[1]: Reached target network.target - Network. Apr 16 23:31:42.240432 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 23:31:42.240456 systemd-networkd[1149]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 23:31:42.250671 systemd-networkd[1149]: eth0: Link UP Apr 16 23:31:42.250693 systemd-networkd[1149]: eth0: Gained carrier Apr 16 23:31:42.250716 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 23:31:42.274469 systemd-networkd[1149]: eth0: DHCPv4 address 172.31.18.112/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 16 23:31:42.560338 ignition[1074]: Ignition 2.22.0 Apr 16 23:31:42.560924 ignition[1074]: Stage: fetch-offline Apr 16 23:31:42.562138 ignition[1074]: no configs at "/usr/lib/ignition/base.d" Apr 16 23:31:42.562161 ignition[1074]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 16 23:31:42.562923 ignition[1074]: Ignition finished successfully Apr 16 23:31:42.573393 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 23:31:42.578564 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 16 23:31:42.634788 ignition[1160]: Ignition 2.22.0 Apr 16 23:31:42.634819 ignition[1160]: Stage: fetch Apr 16 23:31:42.635652 ignition[1160]: no configs at "/usr/lib/ignition/base.d" Apr 16 23:31:42.635866 ignition[1160]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 16 23:31:42.636114 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 16 23:31:42.660330 ignition[1160]: PUT result: OK Apr 16 23:31:42.666460 ignition[1160]: parsed url from cmdline: "" Apr 16 23:31:42.666479 ignition[1160]: no config URL provided Apr 16 23:31:42.666498 ignition[1160]: reading system config file "/usr/lib/ignition/user.ign" Apr 16 23:31:42.666525 ignition[1160]: no config at "/usr/lib/ignition/user.ign" Apr 16 23:31:42.666561 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 16 23:31:42.674172 ignition[1160]: PUT result: OK Apr 16 23:31:42.674281 ignition[1160]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Apr 16 23:31:42.679461 ignition[1160]: GET result: OK Apr 16 23:31:42.679769 ignition[1160]: parsing config with SHA512: fa1a65f2db33f59f7958672bd4f226dba54701dd4d62be07e7a5687e5c30033f847970ac0522fbc7f9a565d2148d81aa9dc54497f2ee34d72eff6d6243505168 Apr 16 23:31:42.694879 unknown[1160]: fetched base config from "system" Apr 16 23:31:42.696329 ignition[1160]: fetch: fetch complete Apr 16 23:31:42.694934 unknown[1160]: fetched base config from "system" Apr 16 23:31:42.698398 ignition[1160]: fetch: fetch passed Apr 16 23:31:42.694950 unknown[1160]: fetched user config from "aws" Apr 16 23:31:42.698613 ignition[1160]: Ignition finished successfully Apr 16 23:31:42.709094 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 16 23:31:42.717412 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Apr 16 23:31:42.775604 ignition[1166]: Ignition 2.22.0 Apr 16 23:31:42.775635 ignition[1166]: Stage: kargs Apr 16 23:31:42.776941 ignition[1166]: no configs at "/usr/lib/ignition/base.d" Apr 16 23:31:42.777200 ignition[1166]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 16 23:31:42.777574 ignition[1166]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 16 23:31:42.781437 ignition[1166]: PUT result: OK Apr 16 23:31:42.791780 ignition[1166]: kargs: kargs passed Apr 16 23:31:42.792091 ignition[1166]: Ignition finished successfully Apr 16 23:31:42.798186 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 16 23:31:42.808879 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 16 23:31:42.863558 ignition[1173]: Ignition 2.22.0 Apr 16 23:31:42.864082 ignition[1173]: Stage: disks Apr 16 23:31:42.864747 ignition[1173]: no configs at "/usr/lib/ignition/base.d" Apr 16 23:31:42.864774 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 16 23:31:42.864970 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 16 23:31:42.869740 ignition[1173]: PUT result: OK Apr 16 23:31:42.879504 ignition[1173]: disks: disks passed Apr 16 23:31:42.879853 ignition[1173]: Ignition finished successfully Apr 16 23:31:42.887007 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 16 23:31:42.891943 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 16 23:31:42.895254 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 16 23:31:42.900572 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 23:31:42.907871 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 23:31:42.912800 systemd[1]: Reached target basic.target - Basic System. Apr 16 23:31:42.918325 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Apr 16 23:31:42.970765 systemd-fsck[1182]: ROOT: clean, 15/553520 files, 52789/553472 blocks Apr 16 23:31:42.976407 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 16 23:31:42.985121 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 16 23:31:43.121392 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 717eabe0-7ee2-4bf7-a9aa-0d27bb05c125 r/w with ordered data mode. Quota mode: none. Apr 16 23:31:43.123413 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 16 23:31:43.128170 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 16 23:31:43.135202 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 23:31:43.150398 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 16 23:31:43.157425 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 16 23:31:43.157520 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 16 23:31:43.157573 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 16 23:31:43.180379 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1201) Apr 16 23:31:43.184981 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d Apr 16 23:31:43.185026 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 16 23:31:43.191750 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 16 23:31:43.196301 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 16 23:31:43.208851 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 16 23:31:43.208923 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Apr 16 23:31:43.211592 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 16 23:31:43.581714 initrd-setup-root[1226]: cut: /sysroot/etc/passwd: No such file or directory Apr 16 23:31:43.595724 initrd-setup-root[1233]: cut: /sysroot/etc/group: No such file or directory Apr 16 23:31:43.606510 initrd-setup-root[1240]: cut: /sysroot/etc/shadow: No such file or directory Apr 16 23:31:43.617322 initrd-setup-root[1247]: cut: /sysroot/etc/gshadow: No such file or directory Apr 16 23:31:44.007192 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 16 23:31:44.014327 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 16 23:31:44.018182 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 16 23:31:44.051005 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 16 23:31:44.055751 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d Apr 16 23:31:44.068876 systemd-networkd[1149]: eth0: Gained IPv6LL Apr 16 23:31:44.090446 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 16 23:31:44.112410 ignition[1315]: INFO : Ignition 2.22.0 Apr 16 23:31:44.112410 ignition[1315]: INFO : Stage: mount Apr 16 23:31:44.116784 ignition[1315]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 23:31:44.116784 ignition[1315]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 16 23:31:44.116784 ignition[1315]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 16 23:31:44.126763 ignition[1315]: INFO : PUT result: OK Apr 16 23:31:44.137643 ignition[1315]: INFO : mount: mount passed Apr 16 23:31:44.139713 ignition[1315]: INFO : Ignition finished successfully Apr 16 23:31:44.143310 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Apr 16 23:31:44.149790 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 16 23:31:44.185618 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 23:31:44.223405 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1327) Apr 16 23:31:44.228417 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 29b48a10-1a8e-4627-ab21-f0862573351d Apr 16 23:31:44.228575 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Apr 16 23:31:44.236494 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 16 23:31:44.236580 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Apr 16 23:31:44.240515 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 16 23:31:44.292297 ignition[1344]: INFO : Ignition 2.22.0 Apr 16 23:31:44.292297 ignition[1344]: INFO : Stage: files Apr 16 23:31:44.296953 ignition[1344]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 23:31:44.296953 ignition[1344]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 16 23:31:44.296953 ignition[1344]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 16 23:31:44.304791 ignition[1344]: INFO : PUT result: OK Apr 16 23:31:44.310962 ignition[1344]: DEBUG : files: compiled without relabeling support, skipping Apr 16 23:31:44.314492 ignition[1344]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 16 23:31:44.317753 ignition[1344]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 16 23:31:44.321004 ignition[1344]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 16 23:31:44.324660 ignition[1344]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 16 23:31:44.328899 unknown[1344]: wrote ssh authorized keys file for user: core Apr 16 23:31:44.331427 ignition[1344]: INFO : files: ensureUsers: op(2): [finished] adding 
ssh keys to user "core" Apr 16 23:31:44.348303 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 16 23:31:44.348303 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Apr 16 23:31:44.433715 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 16 23:31:44.599586 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Apr 16 23:31:44.603817 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 16 23:31:44.603817 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 16 23:31:44.603817 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 16 23:31:44.603817 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 16 23:31:44.603817 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 23:31:44.603817 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 23:31:44.603817 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 23:31:44.603817 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 23:31:44.603817 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file 
"/sysroot/etc/flatcar/update.conf" Apr 16 23:31:44.603817 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 23:31:44.603817 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 16 23:31:44.603817 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 16 23:31:44.603817 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 16 23:31:44.603817 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1 Apr 16 23:31:45.078684 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 16 23:31:45.478438 ignition[1344]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw" Apr 16 23:31:45.478438 ignition[1344]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 16 23:31:45.486608 ignition[1344]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 23:31:45.486608 ignition[1344]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 23:31:45.486608 ignition[1344]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 16 23:31:45.486608 ignition[1344]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 16 
23:31:45.486608 ignition[1344]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 16 23:31:45.486608 ignition[1344]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 16 23:31:45.486608 ignition[1344]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 16 23:31:45.486608 ignition[1344]: INFO : files: files passed Apr 16 23:31:45.486608 ignition[1344]: INFO : Ignition finished successfully Apr 16 23:31:45.495814 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 16 23:31:45.498147 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 16 23:31:45.533027 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 16 23:31:45.547411 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 16 23:31:45.548087 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 16 23:31:45.582435 initrd-setup-root-after-ignition[1374]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 23:31:45.586237 initrd-setup-root-after-ignition[1374]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 16 23:31:45.589996 initrd-setup-root-after-ignition[1378]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 23:31:45.596757 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 16 23:31:45.602995 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 16 23:31:45.610211 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 16 23:31:45.691553 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 16 23:31:45.693852 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Apr 16 23:31:45.700048 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 16 23:31:45.704524 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 16 23:31:45.709245 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 16 23:31:45.714239 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 16 23:31:45.759739 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 16 23:31:45.767935 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 16 23:31:45.805099 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 16 23:31:45.811222 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 23:31:45.815031 systemd[1]: Stopped target timers.target - Timer Units. Apr 16 23:31:45.822443 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 16 23:31:45.822720 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 16 23:31:45.832055 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 16 23:31:45.837926 systemd[1]: Stopped target basic.target - Basic System. Apr 16 23:31:45.841337 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 16 23:31:45.850116 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 16 23:31:45.853733 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 16 23:31:45.862065 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Apr 16 23:31:45.867813 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 16 23:31:45.871518 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 23:31:45.879130 systemd[1]: Stopped target sysinit.target - System Initialization. 
Apr 16 23:31:45.883331 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 16 23:31:45.890491 systemd[1]: Stopped target swap.target - Swaps. Apr 16 23:31:45.894847 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 16 23:31:45.895099 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 16 23:31:45.904298 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 16 23:31:45.909898 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 23:31:45.915892 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 16 23:31:45.918510 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 23:31:45.923189 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 16 23:31:45.923464 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 16 23:31:45.933860 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 16 23:31:45.934146 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 16 23:31:45.938418 systemd[1]: ignition-files.service: Deactivated successfully. Apr 16 23:31:45.938795 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 16 23:31:45.951397 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 16 23:31:45.960925 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 16 23:31:45.971031 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 16 23:31:45.976661 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 23:31:45.980706 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 16 23:31:45.980954 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 23:31:46.007807 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Apr 16 23:31:46.008030 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 16 23:31:46.035173 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 16 23:31:46.050097 ignition[1398]: INFO : Ignition 2.22.0 Apr 16 23:31:46.052423 ignition[1398]: INFO : Stage: umount Apr 16 23:31:46.052423 ignition[1398]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 23:31:46.052423 ignition[1398]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 16 23:31:46.052423 ignition[1398]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 16 23:31:46.064470 ignition[1398]: INFO : PUT result: OK Apr 16 23:31:46.069191 ignition[1398]: INFO : umount: umount passed Apr 16 23:31:46.071686 ignition[1398]: INFO : Ignition finished successfully Apr 16 23:31:46.077533 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 16 23:31:46.078058 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 16 23:31:46.087664 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 16 23:31:46.087788 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 16 23:31:46.091039 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 16 23:31:46.091140 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 16 23:31:46.096432 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 16 23:31:46.096539 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 16 23:31:46.107371 systemd[1]: Stopped target network.target - Network. Apr 16 23:31:46.109734 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 16 23:31:46.109869 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 23:31:46.113465 systemd[1]: Stopped target paths.target - Path Units. Apr 16 23:31:46.118078 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Apr 16 23:31:46.118246 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 23:31:46.124448 systemd[1]: Stopped target slices.target - Slice Units. Apr 16 23:31:46.128822 systemd[1]: Stopped target sockets.target - Socket Units. Apr 16 23:31:46.130947 systemd[1]: iscsid.socket: Deactivated successfully. Apr 16 23:31:46.131027 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 16 23:31:46.138759 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 16 23:31:46.138834 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 16 23:31:46.141605 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 16 23:31:46.141734 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 16 23:31:46.145441 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 16 23:31:46.145551 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 16 23:31:46.153683 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 16 23:31:46.158695 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 16 23:31:46.182653 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 16 23:31:46.182995 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 16 23:31:46.234635 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 16 23:31:46.241666 systemd[1]: Stopped target network-pre.target - Preparation for Network. Apr 16 23:31:46.252495 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 16 23:31:46.252579 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 16 23:31:46.259835 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 16 23:31:46.289002 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Apr 16 23:31:46.289139 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 23:31:46.295150 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 23:31:46.313795 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 16 23:31:46.316761 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 16 23:31:46.343793 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 16 23:31:46.344442 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 16 23:31:46.356205 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 23:31:46.367794 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 16 23:31:46.369149 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 16 23:31:46.377579 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 16 23:31:46.377672 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 23:31:46.380852 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 16 23:31:46.380982 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 16 23:31:46.399245 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 16 23:31:46.399995 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 16 23:31:46.409216 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 16 23:31:46.409572 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 23:31:46.424951 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 16 23:31:46.427743 systemd[1]: systemd-network-generator.service: Deactivated successfully. Apr 16 23:31:46.427892 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Apr 16 23:31:46.435198 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 16 23:31:46.435324 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 16 23:31:46.441884 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 16 23:31:46.441995 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 16 23:31:46.450135 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 16 23:31:46.450259 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 23:31:46.460246 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 16 23:31:46.460388 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 23:31:46.463819 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 16 23:31:46.463932 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 23:31:46.474180 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 16 23:31:46.474295 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 23:31:46.495590 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 23:31:46.495775 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 23:31:46.511090 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Apr 16 23:31:46.511213 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 16 23:31:46.511293 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Apr 16 23:31:46.511405 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. 
Apr 16 23:31:46.511496 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 16 23:31:46.511577 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 16 23:31:46.512827 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 16 23:31:46.513089 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 16 23:31:46.517000 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 16 23:31:46.517197 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 16 23:31:46.522588 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 16 23:31:46.522796 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 16 23:31:46.529026 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 16 23:31:46.543701 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 16 23:31:46.543849 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 16 23:31:46.551490 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 16 23:31:46.605617 systemd[1]: Switching root. Apr 16 23:31:46.667816 systemd-journald[259]: Journal stopped Apr 16 23:31:49.372342 systemd-journald[259]: Received SIGTERM from PID 1 (systemd). 
Apr 16 23:31:49.372701 kernel: SELinux: policy capability network_peer_controls=1 Apr 16 23:31:49.372750 kernel: SELinux: policy capability open_perms=1 Apr 16 23:31:49.372782 kernel: SELinux: policy capability extended_socket_class=1 Apr 16 23:31:49.372815 kernel: SELinux: policy capability always_check_network=0 Apr 16 23:31:49.372847 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 16 23:31:49.372876 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 16 23:31:49.372915 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 16 23:31:49.372945 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 16 23:31:49.372974 kernel: SELinux: policy capability userspace_initial_context=0 Apr 16 23:31:49.373004 kernel: audit: type=1403 audit(1776382307.250:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 16 23:31:49.373043 systemd[1]: Successfully loaded SELinux policy in 102.981ms. Apr 16 23:31:49.373092 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.368ms. Apr 16 23:31:49.373128 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 16 23:31:49.373160 systemd[1]: Detected virtualization amazon. Apr 16 23:31:49.373191 systemd[1]: Detected architecture arm64. Apr 16 23:31:49.373225 systemd[1]: Detected first boot. Apr 16 23:31:49.373255 systemd[1]: Initializing machine ID from VM UUID. Apr 16 23:31:49.373285 kernel: NET: Registered PF_VSOCK protocol family Apr 16 23:31:49.373316 zram_generator::config[1441]: No configuration found. Apr 16 23:31:49.373630 systemd[1]: Populated /etc with preset unit settings. Apr 16 23:31:49.374107 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
Apr 16 23:31:49.375118 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 16 23:31:49.375828 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 16 23:31:49.375871 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 16 23:31:49.375901 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 16 23:31:49.375934 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 16 23:31:49.375964 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 16 23:31:49.375992 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 16 23:31:49.376020 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 16 23:31:49.376051 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 16 23:31:49.376097 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 16 23:31:49.376127 systemd[1]: Created slice user.slice - User and Session Slice. Apr 16 23:31:49.376183 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 16 23:31:49.376233 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 16 23:31:49.376269 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 16 23:31:49.376305 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 16 23:31:49.376333 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 16 23:31:49.376382 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 16 23:31:49.376416 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Apr 16 23:31:49.376447 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 16 23:31:49.379454 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 16 23:31:49.379509 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 16 23:31:49.379541 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 16 23:31:49.379571 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 16 23:31:49.379602 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 16 23:31:49.379630 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 23:31:49.379661 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 23:31:49.379690 systemd[1]: Reached target slices.target - Slice Units. Apr 16 23:31:49.379721 systemd[1]: Reached target swap.target - Swaps. Apr 16 23:31:49.379759 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 16 23:31:49.379791 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 16 23:31:49.379823 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 16 23:31:49.379851 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 23:31:49.379882 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 23:31:49.379910 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 23:31:49.379941 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 16 23:31:49.379973 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 16 23:31:49.380002 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 16 23:31:49.380036 systemd[1]: Mounting media.mount - External Media Directory... 
Apr 16 23:31:49.380066 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 16 23:31:49.380098 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 16 23:31:49.380127 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 16 23:31:49.380159 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 16 23:31:49.380191 systemd[1]: Reached target machines.target - Containers. Apr 16 23:31:49.380270 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 16 23:31:49.380325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 23:31:49.382862 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 23:31:49.382915 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 16 23:31:49.382954 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 23:31:49.382987 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 23:31:49.383017 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 23:31:49.383046 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 16 23:31:49.383077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 23:31:49.383107 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 16 23:31:49.383146 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 16 23:31:49.383180 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 16 23:31:49.383212 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
Apr 16 23:31:49.383241 systemd[1]: Stopped systemd-fsck-usr.service. Apr 16 23:31:49.383273 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 23:31:49.383305 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 23:31:49.383334 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 23:31:49.387282 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 23:31:49.390130 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 16 23:31:49.390167 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 16 23:31:49.390199 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 23:31:49.390236 kernel: fuse: init (API version 7.41) Apr 16 23:31:49.390265 systemd[1]: verity-setup.service: Deactivated successfully. Apr 16 23:31:49.390293 systemd[1]: Stopped verity-setup.service. Apr 16 23:31:49.390321 kernel: loop: module loaded Apr 16 23:31:49.390367 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 16 23:31:49.390407 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 16 23:31:49.390439 systemd[1]: Mounted media.mount - External Media Directory. Apr 16 23:31:49.390471 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 16 23:31:49.390500 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 16 23:31:49.390536 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 16 23:31:49.390565 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Apr 16 23:31:49.390594 kernel: ACPI: bus type drm_connector registered Apr 16 23:31:49.390622 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 16 23:31:49.390655 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 16 23:31:49.390684 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 23:31:49.390713 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 23:31:49.390743 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 23:31:49.390778 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 23:31:49.390807 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 23:31:49.390841 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 23:31:49.390873 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 16 23:31:49.390901 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 16 23:31:49.390930 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 23:31:49.390959 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 23:31:49.390987 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 23:31:49.391073 systemd-journald[1531]: Collecting audit messages is disabled. Apr 16 23:31:49.391137 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 16 23:31:49.391169 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 23:31:49.391199 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 16 23:31:49.391228 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Apr 16 23:31:49.391256 systemd-journald[1531]: Journal started Apr 16 23:31:49.391304 systemd-journald[1531]: Runtime Journal (/run/log/journal/ec277fa927ab751a584a8f9faf4b6d2b) is 8M, max 75.3M, 67.3M free. Apr 16 23:31:48.655323 systemd[1]: Queued start job for default target multi-user.target. Apr 16 23:31:48.667747 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Apr 16 23:31:48.668703 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 16 23:31:49.399656 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 23:31:49.433248 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 23:31:49.444551 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 16 23:31:49.452753 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 16 23:31:49.455537 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 16 23:31:49.455620 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 23:31:49.467597 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 16 23:31:49.486667 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 16 23:31:49.492129 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 23:31:49.500583 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 16 23:31:49.512706 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 16 23:31:49.518656 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 23:31:49.522707 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Apr 16 23:31:49.529574 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 23:31:49.534811 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 23:31:49.544790 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 16 23:31:49.569804 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 16 23:31:49.577569 systemd-journald[1531]: Time spent on flushing to /var/log/journal/ec277fa927ab751a584a8f9faf4b6d2b is 118.755ms for 925 entries. Apr 16 23:31:49.577569 systemd-journald[1531]: System Journal (/var/log/journal/ec277fa927ab751a584a8f9faf4b6d2b) is 8M, max 195.6M, 187.6M free. Apr 16 23:31:49.714424 systemd-journald[1531]: Received client request to flush runtime journal. Apr 16 23:31:49.714542 kernel: loop0: detected capacity change from 0 to 119840 Apr 16 23:31:49.582482 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 16 23:31:49.589752 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 16 23:31:49.627751 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 23:31:49.685389 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 16 23:31:49.690642 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 16 23:31:49.701833 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 16 23:31:49.725590 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 16 23:31:49.772440 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 16 23:31:49.773569 systemd-tmpfiles[1576]: ACLs are not supported, ignoring. Apr 16 23:31:49.773645 systemd-tmpfiles[1576]: ACLs are not supported, ignoring. 
Apr 16 23:31:49.775534 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 16 23:31:49.791557 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 16 23:31:49.808166 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 23:31:49.818598 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 16 23:31:49.860386 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 16 23:31:49.885415 kernel: loop1: detected capacity change from 0 to 61264 Apr 16 23:31:49.918553 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 16 23:31:49.927653 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 23:31:49.937400 kernel: loop2: detected capacity change from 0 to 197488 Apr 16 23:31:49.992137 systemd-tmpfiles[1600]: ACLs are not supported, ignoring. Apr 16 23:31:49.992197 systemd-tmpfiles[1600]: ACLs are not supported, ignoring. Apr 16 23:31:50.001294 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 23:31:50.135385 kernel: loop3: detected capacity change from 0 to 100632 Apr 16 23:31:50.375414 kernel: loop4: detected capacity change from 0 to 119840 Apr 16 23:31:50.450409 kernel: loop5: detected capacity change from 0 to 61264 Apr 16 23:31:50.493411 kernel: loop6: detected capacity change from 0 to 197488 Apr 16 23:31:50.570435 kernel: loop7: detected capacity change from 0 to 100632 Apr 16 23:31:50.606491 (sd-merge)[1607]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Apr 16 23:31:50.608050 (sd-merge)[1607]: Merged extensions into '/usr'. Apr 16 23:31:50.617041 systemd[1]: Reload requested from client PID 1575 ('systemd-sysext') (unit systemd-sysext.service)... Apr 16 23:31:50.617088 systemd[1]: Reloading... 
Apr 16 23:31:50.785409 zram_generator::config[1632]: No configuration found. Apr 16 23:31:51.284820 systemd[1]: Reloading finished in 666 ms. Apr 16 23:31:51.309807 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 16 23:31:51.314503 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 16 23:31:51.335576 systemd[1]: Starting ensure-sysext.service... Apr 16 23:31:51.351717 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 23:31:51.361769 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 23:31:51.409495 systemd[1]: Reload requested from client PID 1685 ('systemctl') (unit ensure-sysext.service)... Apr 16 23:31:51.409723 systemd[1]: Reloading... Apr 16 23:31:51.420009 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 16 23:31:51.420094 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 16 23:31:51.420804 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 16 23:31:51.423422 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 16 23:31:51.427562 systemd-tmpfiles[1686]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 16 23:31:51.428251 systemd-tmpfiles[1686]: ACLs are not supported, ignoring. Apr 16 23:31:51.428443 systemd-tmpfiles[1686]: ACLs are not supported, ignoring. Apr 16 23:31:51.444862 systemd-tmpfiles[1686]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 23:31:51.444894 systemd-tmpfiles[1686]: Skipping /boot Apr 16 23:31:51.482677 systemd-udevd[1687]: Using default interface naming scheme 'v255'. 
Apr 16 23:31:51.500883 systemd-tmpfiles[1686]: Detected autofs mount point /boot during canonicalization of boot. Apr 16 23:31:51.500906 systemd-tmpfiles[1686]: Skipping /boot Apr 16 23:31:51.718509 ldconfig[1570]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 16 23:31:51.733381 zram_generator::config[1742]: No configuration found. Apr 16 23:31:52.040102 (udev-worker)[1713]: Network interface NamePolicy= disabled on kernel command line. Apr 16 23:31:52.399296 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 16 23:31:52.399861 systemd[1]: Reloading finished in 989 ms. Apr 16 23:31:52.413590 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 23:31:52.418016 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 16 23:31:52.421887 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 23:31:52.494082 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 23:31:52.500043 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 16 23:31:52.511732 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 16 23:31:52.520830 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 23:31:52.530824 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 23:31:52.537672 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 16 23:31:52.558677 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 23:31:52.562991 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 23:31:52.575940 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Apr 16 23:31:52.591623 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 23:31:52.594285 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 23:31:52.594620 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 23:31:52.605112 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 23:31:52.605594 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 23:31:52.605856 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 23:31:52.616044 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 16 23:31:52.636319 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 23:31:52.642203 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 23:31:52.644964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 23:31:52.645079 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 23:31:52.645195 systemd[1]: Reached target time-set.target - System Time Set. Apr 16 23:31:52.650462 systemd[1]: Finished ensure-sysext.service. 
Apr 16 23:31:52.653345 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 23:31:52.654035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 23:31:52.721027 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 16 23:31:52.746041 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 16 23:31:52.749633 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 23:31:52.761200 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 23:31:52.762646 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 23:31:52.808107 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 16 23:31:52.814301 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 23:31:52.814809 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 23:31:52.821086 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 23:31:52.822697 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 23:31:52.867965 augenrules[1934]: No rules Apr 16 23:31:52.866764 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 23:31:52.866928 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 16 23:31:52.873825 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 16 23:31:52.877117 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 23:31:52.878564 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Apr 16 23:31:52.934736 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 16 23:31:53.086235 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 16 23:31:53.111074 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 16 23:31:53.127617 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 16 23:31:53.199864 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 16 23:31:53.278211 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 16 23:31:53.318324 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 23:31:53.427729 systemd-networkd[1882]: lo: Link UP
Apr 16 23:31:53.428440 systemd-networkd[1882]: lo: Gained carrier
Apr 16 23:31:53.432124 systemd-networkd[1882]: Enumeration completed
Apr 16 23:31:53.432680 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 16 23:31:53.434922 systemd-networkd[1882]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:31:53.435169 systemd-networkd[1882]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 16 23:31:53.439893 systemd-resolved[1883]: Positive Trust Anchors:
Apr 16 23:31:53.439941 systemd-resolved[1883]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 16 23:31:53.440006 systemd-resolved[1883]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 16 23:31:53.441810 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 16 23:31:53.454669 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 16 23:31:53.459937 systemd-networkd[1882]: eth0: Link UP
Apr 16 23:31:53.460412 systemd-networkd[1882]: eth0: Gained carrier
Apr 16 23:31:53.460454 systemd-networkd[1882]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 16 23:31:53.469077 systemd-resolved[1883]: Defaulting to hostname 'linux'.
Apr 16 23:31:53.473443 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 16 23:31:53.476404 systemd[1]: Reached target network.target - Network.
Apr 16 23:31:53.476500 systemd-networkd[1882]: eth0: DHCPv4 address 172.31.18.112/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 16 23:31:53.479030 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 16 23:31:53.484583 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 16 23:31:53.487544 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 16 23:31:53.490759 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 16 23:31:53.496914 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 16 23:31:53.499974 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 16 23:31:53.503141 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 16 23:31:53.506757 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 16 23:31:53.506824 systemd[1]: Reached target paths.target - Path Units.
Apr 16 23:31:53.509195 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 23:31:53.513970 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 16 23:31:53.521173 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 16 23:31:53.532253 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 16 23:31:53.535734 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 16 23:31:53.539000 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 16 23:31:53.554049 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 16 23:31:53.558102 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 16 23:31:53.563218 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 16 23:31:53.567028 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 16 23:31:53.571194 systemd[1]: Reached target sockets.target - Socket Units.
Apr 16 23:31:53.574092 systemd[1]: Reached target basic.target - Basic System.
Apr 16 23:31:53.576939 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 16 23:31:53.577239 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 16 23:31:53.579844 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 16 23:31:53.585180 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 16 23:31:53.591772 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 16 23:31:53.599593 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 16 23:31:53.613275 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 16 23:31:53.620502 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 16 23:31:53.624574 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 16 23:31:53.638248 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 16 23:31:53.649753 systemd[1]: Started ntpd.service - Network Time Service.
Apr 16 23:31:53.661714 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 16 23:31:53.673840 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 16 23:31:53.680150 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 16 23:31:53.686962 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 16 23:31:53.711737 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 16 23:31:53.718245 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 16 23:31:53.719275 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 16 23:31:53.730799 systemd[1]: Starting update-engine.service - Update Engine...
Apr 16 23:31:53.739301 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 16 23:31:53.752409 jq[1975]: false
Apr 16 23:31:53.747489 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 16 23:31:53.765237 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 16 23:31:53.783032 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 16 23:31:53.786673 extend-filesystems[1976]: Found /dev/nvme0n1p6
Apr 16 23:31:53.809465 jq[1990]: true
Apr 16 23:31:53.830211 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 16 23:31:53.831582 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 16 23:31:53.836975 extend-filesystems[1976]: Found /dev/nvme0n1p9
Apr 16 23:31:53.869273 extend-filesystems[1976]: Checking size of /dev/nvme0n1p9
Apr 16 23:31:53.908895 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 16 23:31:53.908573 dbus-daemon[1973]: [system] SELinux support is enabled
Apr 16 23:31:53.916448 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 16 23:31:53.916525 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 16 23:31:53.919783 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 16 23:31:53.919826 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 16 23:31:53.935968 dbus-daemon[1973]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1882 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 16 23:31:53.923522 (ntainerd)[2004]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 16 23:31:53.948144 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 16 23:31:53.966115 systemd[1]: motdgen.service: Deactivated successfully.
Apr 16 23:31:53.966730 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 16 23:31:53.997491 extend-filesystems[1976]: Resized partition /dev/nvme0n1p9
Apr 16 23:31:54.003578 jq[2009]: true
Apr 16 23:31:54.013398 update_engine[1989]: I20260416 23:31:54.008149 1989 main.cc:92] Flatcar Update Engine starting
Apr 16 23:31:54.022389 extend-filesystems[2027]: resize2fs 1.47.3 (8-Jul-2025)
Apr 16 23:31:54.025963 tar[2000]: linux-arm64/LICENSE
Apr 16 23:31:54.025963 tar[2000]: linux-arm64/helm
Apr 16 23:31:54.028142 systemd[1]: Started update-engine.service - Update Engine.
Apr 16 23:31:54.034174 update_engine[1989]: I20260416 23:31:54.032642 1989 update_check_scheduler.cc:74] Next update check in 2m35s
Apr 16 23:31:54.063541 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 16 23:31:54.096430 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 16 23:31:54.112710 ntpd[1978]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:38:29 UTC 2026 (1): Starting
Apr 16 23:31:54.116111 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:38:29 UTC 2026 (1): Starting
Apr 16 23:31:54.116111 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 16 23:31:54.116111 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: ----------------------------------------------------
Apr 16 23:31:54.116111 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: ntp-4 is maintained by Network Time Foundation,
Apr 16 23:31:54.116111 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 16 23:31:54.116111 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: corporation. Support and training for ntp-4 are
Apr 16 23:31:54.116111 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: available at https://www.nwtime.org/support
Apr 16 23:31:54.116111 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: ----------------------------------------------------
Apr 16 23:31:54.112855 ntpd[1978]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 16 23:31:54.112875 ntpd[1978]: ----------------------------------------------------
Apr 16 23:31:54.131925 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: proto: precision = 0.096 usec (-23)
Apr 16 23:31:54.112893 ntpd[1978]: ntp-4 is maintained by Network Time Foundation,
Apr 16 23:31:54.112910 ntpd[1978]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 16 23:31:54.112927 ntpd[1978]: corporation. Support and training for ntp-4 are
Apr 16 23:31:54.112943 ntpd[1978]: available at https://www.nwtime.org/support
Apr 16 23:31:54.112958 ntpd[1978]: ----------------------------------------------------
Apr 16 23:31:54.131127 ntpd[1978]: proto: precision = 0.096 usec (-23)
Apr 16 23:31:54.137874 ntpd[1978]: basedate set to 2026-04-04
Apr 16 23:31:54.144437 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: basedate set to 2026-04-04
Apr 16 23:31:54.144437 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: gps base set to 2026-04-05 (week 2413)
Apr 16 23:31:54.144437 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: Listen and drop on 0 v6wildcard [::]:123
Apr 16 23:31:54.144437 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 16 23:31:54.144437 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: Listen normally on 2 lo 127.0.0.1:123
Apr 16 23:31:54.139558 ntpd[1978]: gps base set to 2026-04-05 (week 2413)
Apr 16 23:31:54.140712 ntpd[1978]: Listen and drop on 0 v6wildcard [::]:123
Apr 16 23:31:54.140785 ntpd[1978]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 16 23:31:54.144009 ntpd[1978]: Listen normally on 2 lo 127.0.0.1:123
Apr 16 23:31:54.153882 coreos-metadata[1972]: Apr 16 23:31:54.151 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 16 23:31:54.154497 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: Listen normally on 3 eth0 172.31.18.112:123
Apr 16 23:31:54.154497 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: Listen normally on 4 lo [::1]:123
Apr 16 23:31:54.154497 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: bind(21) AF_INET6 [fe80::45d:f9ff:fead:7099%2]:123 flags 0x811 failed: Cannot assign requested address
Apr 16 23:31:54.154497 ntpd[1978]: 16 Apr 23:31:54 ntpd[1978]: unable to create socket on eth0 (5) for [fe80::45d:f9ff:fead:7099%2]:123
Apr 16 23:31:54.152682 ntpd[1978]: Listen normally on 3 eth0 172.31.18.112:123
Apr 16 23:31:54.152804 ntpd[1978]: Listen normally on 4 lo [::1]:123
Apr 16 23:31:54.152861 ntpd[1978]: bind(21) AF_INET6 [fe80::45d:f9ff:fead:7099%2]:123 flags 0x811 failed: Cannot assign requested address
Apr 16 23:31:54.163037 coreos-metadata[1972]: Apr 16 23:31:54.159 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 16 23:31:54.152904 ntpd[1978]: unable to create socket on eth0 (5) for [fe80::45d:f9ff:fead:7099%2]:123
Apr 16 23:31:54.164581 systemd-coredump[2039]: Process 1978 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Apr 16 23:31:54.169137 coreos-metadata[1972]: Apr 16 23:31:54.166 INFO Fetch successful
Apr 16 23:31:54.169137 coreos-metadata[1972]: Apr 16 23:31:54.166 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 16 23:31:54.171107 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Apr 16 23:31:54.181125 coreos-metadata[1972]: Apr 16 23:31:54.180 INFO Fetch successful
Apr 16 23:31:54.181125 coreos-metadata[1972]: Apr 16 23:31:54.180 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 16 23:31:54.181125 coreos-metadata[1972]: Apr 16 23:31:54.181 INFO Fetch successful
Apr 16 23:31:54.181125 coreos-metadata[1972]: Apr 16 23:31:54.181 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 16 23:31:54.182248 coreos-metadata[1972]: Apr 16 23:31:54.181 INFO Fetch successful
Apr 16 23:31:54.182627 coreos-metadata[1972]: Apr 16 23:31:54.182 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 16 23:31:54.183769 coreos-metadata[1972]: Apr 16 23:31:54.183 INFO Fetch failed with 404: resource not found
Apr 16 23:31:54.183769 coreos-metadata[1972]: Apr 16 23:31:54.183 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 16 23:31:54.187180 coreos-metadata[1972]: Apr 16 23:31:54.185 INFO Fetch successful
Apr 16 23:31:54.187503 coreos-metadata[1972]: Apr 16 23:31:54.187 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 16 23:31:54.187503 coreos-metadata[1972]: Apr 16 23:31:54.187 INFO Fetch successful
Apr 16 23:31:54.187503 coreos-metadata[1972]: Apr 16 23:31:54.187 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 16 23:31:54.188333 coreos-metadata[1972]: Apr 16 23:31:54.188 INFO Fetch successful
Apr 16 23:31:54.188333 coreos-metadata[1972]: Apr 16 23:31:54.188 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 16 23:31:54.189170 systemd[1]: Started systemd-coredump@0-2039-0.service - Process Core Dump (PID 2039/UID 0).
Apr 16 23:31:54.197713 coreos-metadata[1972]: Apr 16 23:31:54.195 INFO Fetch successful
Apr 16 23:31:54.197713 coreos-metadata[1972]: Apr 16 23:31:54.196 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 16 23:31:54.197713 coreos-metadata[1972]: Apr 16 23:31:54.196 INFO Fetch successful
Apr 16 23:31:54.227298 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 16 23:31:54.277424 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 16 23:31:54.380818 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 16 23:31:54.387129 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 16 23:31:54.408522 extend-filesystems[2027]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 16 23:31:54.408522 extend-filesystems[2027]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 16 23:31:54.408522 extend-filesystems[2027]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 16 23:31:54.436827 extend-filesystems[1976]: Resized filesystem in /dev/nvme0n1p9
Apr 16 23:31:54.417404 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 16 23:31:54.421496 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 16 23:31:54.471043 bash[2069]: Updated "/home/core/.ssh/authorized_keys"
Apr 16 23:31:54.481975 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 16 23:31:54.497307 systemd[1]: Starting sshkeys.service...
Apr 16 23:31:54.597057 systemd-logind[1986]: Watching system buttons on /dev/input/event0 (Power Button)
Apr 16 23:31:54.597123 systemd-logind[1986]: Watching system buttons on /dev/input/event1 (Sleep Button)
Apr 16 23:31:54.611570 systemd-logind[1986]: New seat seat0.
Apr 16 23:31:54.624391 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 16 23:31:54.697487 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 16 23:31:54.711929 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 16 23:31:54.720130 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 16 23:31:54.815296 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 16 23:31:54.821571 dbus-daemon[1973]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 16 23:31:54.828228 dbus-daemon[1973]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2017 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 16 23:31:54.842158 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 16 23:31:54.942559 locksmithd[2029]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 16 23:31:55.097855 sshd_keygen[2021]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 16 23:31:55.205494 containerd[2004]: time="2026-04-16T23:31:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 16 23:31:55.215422 containerd[2004]: time="2026-04-16T23:31:55.213953207Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 16 23:31:55.268484 systemd-networkd[1882]: eth0: Gained IPv6LL
Apr 16 23:31:55.282934 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 16 23:31:55.287742 systemd[1]: Reached target network-online.target - Network is Online.
Apr 16 23:31:55.300394 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 16 23:31:55.321041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:31:55.333670 containerd[2004]: time="2026-04-16T23:31:55.328616892Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="16.452µs"
Apr 16 23:31:55.330002 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 16 23:31:55.336397 containerd[2004]: time="2026-04-16T23:31:55.333952428Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 16 23:31:55.336397 containerd[2004]: time="2026-04-16T23:31:55.334026588Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 16 23:31:55.336553 coreos-metadata[2125]: Apr 16 23:31:55.335 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 16 23:31:55.340325 containerd[2004]: time="2026-04-16T23:31:55.334329996Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 16 23:31:55.340325 containerd[2004]: time="2026-04-16T23:31:55.337537092Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 16 23:31:55.340325 containerd[2004]: time="2026-04-16T23:31:55.337626612Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 16 23:31:55.340325 containerd[2004]: time="2026-04-16T23:31:55.337816236Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 16 23:31:55.340325 containerd[2004]: time="2026-04-16T23:31:55.337851984Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 16 23:31:55.340325 containerd[2004]: time="2026-04-16T23:31:55.338600796Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 16 23:31:55.340325 containerd[2004]: time="2026-04-16T23:31:55.338656200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 16 23:31:55.340325 containerd[2004]: time="2026-04-16T23:31:55.338689776Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 16 23:31:55.340325 containerd[2004]: time="2026-04-16T23:31:55.338715768Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 16 23:31:55.351394 coreos-metadata[2125]: Apr 16 23:31:55.341 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 16 23:31:55.351394 coreos-metadata[2125]: Apr 16 23:31:55.343 INFO Fetch successful
Apr 16 23:31:55.351394 coreos-metadata[2125]: Apr 16 23:31:55.346 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 16 23:31:55.355674 coreos-metadata[2125]: Apr 16 23:31:55.353 INFO Fetch successful
Apr 16 23:31:55.357536 unknown[2125]: wrote ssh authorized keys file for user: core
Apr 16 23:31:55.361646 containerd[2004]: time="2026-04-16T23:31:55.360021900Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 16 23:31:55.361646 containerd[2004]: time="2026-04-16T23:31:55.360632628Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 16 23:31:55.361646 containerd[2004]: time="2026-04-16T23:31:55.360731700Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 16 23:31:55.361646 containerd[2004]: time="2026-04-16T23:31:55.360763920Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 16 23:31:55.361646 containerd[2004]: time="2026-04-16T23:31:55.360828828Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 16 23:31:55.361646 containerd[2004]: time="2026-04-16T23:31:55.361279284Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 16 23:31:55.370486 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 16 23:31:55.379413 containerd[2004]: time="2026-04-16T23:31:55.378403908Z" level=info msg="metadata content store policy set" policy=shared
Apr 16 23:31:55.381174 systemd-coredump[2041]: Process 1978 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1978: #0 0x0000aaaab0560b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaab050fe60 n/a (ntpd + 0xfe60) #2 0x0000aaaab0510240 n/a (ntpd + 0x10240) #3 0x0000aaaab050be14 n/a (ntpd + 0xbe14) #4 0x0000aaaab050d3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaab0515a38 n/a (ntpd + 0x15a38) #6 0x0000aaaab050738c n/a (ntpd + 0x738c) #7 0x0000ffffb3232034 n/a (libc.so.6 + 0x22034) #8 0x0000ffffb3232118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaab05073f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64
Apr 16 23:31:55.381188 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 16 23:31:55.394149 systemd[1]: Started sshd@0-172.31.18.112:22-20.229.252.112:55634.service - OpenSSH per-connection server daemon (20.229.252.112:55634).
Apr 16 23:31:55.413408 containerd[2004]: time="2026-04-16T23:31:55.409145664Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 16 23:31:55.413408 containerd[2004]: time="2026-04-16T23:31:55.409290432Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 16 23:31:55.413408 containerd[2004]: time="2026-04-16T23:31:55.409340184Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 16 23:31:55.413408 containerd[2004]: time="2026-04-16T23:31:55.409401972Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 16 23:31:55.413408 containerd[2004]: time="2026-04-16T23:31:55.409435884Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 16 23:31:55.413408 containerd[2004]: time="2026-04-16T23:31:55.409465740Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 16 23:31:55.413408 containerd[2004]: time="2026-04-16T23:31:55.409497636Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 16 23:31:55.413408 containerd[2004]: time="2026-04-16T23:31:55.409527660Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 16 23:31:55.413408 containerd[2004]: time="2026-04-16T23:31:55.409557360Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 16 23:31:55.413408 containerd[2004]: time="2026-04-16T23:31:55.409596948Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 16 23:31:55.413408 containerd[2004]: time="2026-04-16T23:31:55.409634124Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 16 23:31:55.413408 containerd[2004]: time="2026-04-16T23:31:55.409670784Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 16 23:31:55.413408 containerd[2004]: time="2026-04-16T23:31:55.409945176Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 16 23:31:55.413408 containerd[2004]: time="2026-04-16T23:31:55.410010336Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 16 23:31:55.414059 containerd[2004]: time="2026-04-16T23:31:55.410049456Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 16 23:31:55.414059 containerd[2004]: time="2026-04-16T23:31:55.410078328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 16 23:31:55.414059 containerd[2004]: time="2026-04-16T23:31:55.410106204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 16 23:31:55.414059 containerd[2004]: time="2026-04-16T23:31:55.410134104Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 16 23:31:55.414059 containerd[2004]: time="2026-04-16T23:31:55.410163024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 16 23:31:55.414059 containerd[2004]: time="2026-04-16T23:31:55.410194812Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 16 23:31:55.414059 containerd[2004]: time="2026-04-16T23:31:55.410223936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 16 23:31:55.414059 containerd[2004]: time="2026-04-16T23:31:55.410251116Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 16 23:31:55.414059 containerd[2004]: time="2026-04-16T23:31:55.410278416Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 16 23:31:55.419761 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Apr 16 23:31:55.424722 containerd[2004]: time="2026-04-16T23:31:55.421163701Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 16 23:31:55.424722 containerd[2004]: time="2026-04-16T23:31:55.421237477Z" level=info msg="Start snapshots syncer"
Apr 16 23:31:55.424722 containerd[2004]: time="2026-04-16T23:31:55.421291765Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 16 23:31:55.420098 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Apr 16 23:31:55.433701 containerd[2004]: time="2026-04-16T23:31:55.428788429Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 16 23:31:55.441012 systemd[1]: systemd-coredump@0-2039-0.service: Deactivated successfully.
Apr 16 23:31:55.444605 containerd[2004]: time="2026-04-16T23:31:55.437401597Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 16 23:31:55.444605 containerd[2004]: time="2026-04-16T23:31:55.437602273Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 16 23:31:55.444605 containerd[2004]: time="2026-04-16T23:31:55.441944413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 16 23:31:55.444605 containerd[2004]: time="2026-04-16T23:31:55.442024357Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 16 23:31:55.444605 containerd[2004]: time="2026-04-16T23:31:55.442056097Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 16 23:31:55.444605 containerd[2004]: time="2026-04-16T23:31:55.442083805Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 16 23:31:55.444605 containerd[2004]: time="2026-04-16T23:31:55.442116997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 16 23:31:55.444605 containerd[2004]: time="2026-04-16T23:31:55.442145797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 16 23:31:55.444605 containerd[2004]: time="2026-04-16T23:31:55.442174753Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 16 23:31:55.444605 containerd[2004]: time="2026-04-16T23:31:55.442250077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 16 23:31:55.444605 containerd[2004]: time="2026-04-16T23:31:55.442279837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 16 23:31:55.444605 containerd[2004]: time="2026-04-16T23:31:55.442310269Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 16 23:31:55.444605 containerd[2004]: time="2026-04-16T23:31:55.442391425Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 16 23:31:55.444605 containerd[2004]: time="2026-04-16T23:31:55.442429681Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 16 23:31:55.445231 containerd[2004]: time="2026-04-16T23:31:55.442458133Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 16 23:31:55.445231 containerd[2004]: time="2026-04-16T23:31:55.442484893Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 16 23:31:55.445231 containerd[2004]: time="2026-04-16T23:31:55.442506829Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 16 23:31:55.445231 containerd[2004]: time="2026-04-16T23:31:55.442531717Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 16 23:31:55.445231 containerd[2004]: time="2026-04-16T23:31:55.442566349Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 16 23:31:55.445231 containerd[2004]: time="2026-04-16T23:31:55.442754389Z" level=info msg="runtime interface created" Apr 16 23:31:55.445231 containerd[2004]: time="2026-04-16T23:31:55.442775485Z" level=info msg="created NRI interface" Apr 16 23:31:55.445231 containerd[2004]: time="2026-04-16T23:31:55.442799233Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 16 23:31:55.445231 containerd[2004]: time="2026-04-16T23:31:55.442836973Z" level=info msg="Connect containerd service" Apr 16 23:31:55.445231 containerd[2004]: time="2026-04-16T23:31:55.442886857Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 23:31:55.459019 containerd[2004]: time="2026-04-16T23:31:55.458238973Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 23:31:55.521516 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Apr 16 23:31:55.531271 systemd[1]: Started ntpd.service - Network Time Service. Apr 16 23:31:55.570845 update-ssh-keys[2189]: Updated "/home/core/.ssh/authorized_keys" Apr 16 23:31:55.577214 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 16 23:31:55.586978 systemd[1]: Finished sshkeys.service. Apr 16 23:31:55.611579 systemd[1]: issuegen.service: Deactivated successfully. Apr 16 23:31:55.612251 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 16 23:31:55.624152 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Apr 16 23:31:55.694443 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 16 23:31:55.754513 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 16 23:31:55.763079 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 16 23:31:55.770595 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 16 23:31:55.773956 systemd[1]: Reached target getty.target - Login Prompts. Apr 16 23:31:55.843784 amazon-ssm-agent[2179]: Initializing new seelog logger Apr 16 23:31:55.845967 ntpd[2206]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:38:29 UTC 2026 (1): Starting Apr 16 23:31:55.847722 amazon-ssm-agent[2179]: New Seelog Logger Creation Complete Apr 16 23:31:55.847722 amazon-ssm-agent[2179]: 2026/04/16 23:31:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:55.847722 amazon-ssm-agent[2179]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:55.847722 amazon-ssm-agent[2179]: 2026/04/16 23:31:55 processing appconfig overrides Apr 16 23:31:55.855392 amazon-ssm-agent[2179]: 2026/04/16 23:31:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:55.855392 amazon-ssm-agent[2179]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:55.855392 amazon-ssm-agent[2179]: 2026/04/16 23:31:55 processing appconfig overrides Apr 16 23:31:55.855392 amazon-ssm-agent[2179]: 2026/04/16 23:31:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:55.855392 amazon-ssm-agent[2179]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Apr 16 23:31:55.855392 amazon-ssm-agent[2179]: 2026/04/16 23:31:55 processing appconfig overrides Apr 16 23:31:55.856872 amazon-ssm-agent[2179]: 2026-04-16 23:31:55.8524 INFO Proxy environment variables: Apr 16 23:31:55.859395 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: ntpd 4.2.8p18@1.4062-o Thu Apr 16 21:38:29 UTC 2026 (1): Starting Apr 16 23:31:55.859395 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 16 23:31:55.859395 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: ---------------------------------------------------- Apr 16 23:31:55.859395 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: ntp-4 is maintained by Network Time Foundation, Apr 16 23:31:55.859395 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 16 23:31:55.859395 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: corporation. Support and training for ntp-4 are Apr 16 23:31:55.859395 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: available at https://www.nwtime.org/support Apr 16 23:31:55.859395 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: ---------------------------------------------------- Apr 16 23:31:55.859395 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: proto: precision = 0.096 usec (-23) Apr 16 23:31:55.859395 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: basedate set to 2026-04-04 Apr 16 23:31:55.859395 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: gps base set to 2026-04-05 (week 2413) Apr 16 23:31:55.859395 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: Listen and drop on 0 v6wildcard [::]:123 Apr 16 23:31:55.859395 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 16 23:31:55.857191 ntpd[2206]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 16 23:31:55.857213 ntpd[2206]: ---------------------------------------------------- Apr 16 23:31:55.857232 ntpd[2206]: ntp-4 is maintained by Network Time Foundation, Apr 16 23:31:55.857249 ntpd[2206]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Apr 16 23:31:55.857266 ntpd[2206]: corporation. Support and training for ntp-4 are Apr 16 23:31:55.857289 ntpd[2206]: available at https://www.nwtime.org/support Apr 16 23:31:55.857307 ntpd[2206]: ---------------------------------------------------- Apr 16 23:31:55.858560 ntpd[2206]: proto: precision = 0.096 usec (-23) Apr 16 23:31:55.858932 ntpd[2206]: basedate set to 2026-04-04 Apr 16 23:31:55.858957 ntpd[2206]: gps base set to 2026-04-05 (week 2413) Apr 16 23:31:55.859101 ntpd[2206]: Listen and drop on 0 v6wildcard [::]:123 Apr 16 23:31:55.859148 ntpd[2206]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 16 23:31:55.871489 amazon-ssm-agent[2179]: 2026/04/16 23:31:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:55.871489 amazon-ssm-agent[2179]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:55.871489 amazon-ssm-agent[2179]: 2026/04/16 23:31:55 processing appconfig overrides Apr 16 23:31:55.874581 ntpd[2206]: Listen normally on 2 lo 127.0.0.1:123 Apr 16 23:31:55.875238 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: Listen normally on 2 lo 127.0.0.1:123 Apr 16 23:31:55.875238 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: Listen normally on 3 eth0 172.31.18.112:123 Apr 16 23:31:55.875238 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: Listen normally on 4 lo [::1]:123 Apr 16 23:31:55.875238 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: Listen normally on 5 eth0 [fe80::45d:f9ff:fead:7099%2]:123 Apr 16 23:31:55.875238 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: Listening on routing socket on fd #22 for interface updates Apr 16 23:31:55.874657 ntpd[2206]: Listen normally on 3 eth0 172.31.18.112:123 Apr 16 23:31:55.874711 ntpd[2206]: Listen normally on 4 lo [::1]:123 Apr 16 23:31:55.874761 ntpd[2206]: Listen normally on 5 eth0 [fe80::45d:f9ff:fead:7099%2]:123 Apr 16 23:31:55.874811 ntpd[2206]: Listening on routing socket on fd #22 for interface updates Apr 16 23:31:55.877714 polkitd[2138]: 
Started polkitd version 126 Apr 16 23:31:55.889003 ntpd[2206]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 16 23:31:55.889691 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 16 23:31:55.889691 ntpd[2206]: 16 Apr 23:31:55 ntpd[2206]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 16 23:31:55.889064 ntpd[2206]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 16 23:31:55.927828 polkitd[2138]: Loading rules from directory /etc/polkit-1/rules.d Apr 16 23:31:55.931944 polkitd[2138]: Loading rules from directory /run/polkit-1/rules.d Apr 16 23:31:55.932128 polkitd[2138]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Apr 16 23:31:55.932842 polkitd[2138]: Loading rules from directory /usr/local/share/polkit-1/rules.d Apr 16 23:31:55.932906 polkitd[2138]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Apr 16 23:31:55.932991 polkitd[2138]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 16 23:31:55.935938 polkitd[2138]: Finished loading, compiling and executing 2 rules Apr 16 23:31:55.937797 systemd[1]: Started polkit.service - Authorization Manager. Apr 16 23:31:55.945114 dbus-daemon[1973]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 16 23:31:55.948593 polkitd[2138]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 16 23:31:55.965266 amazon-ssm-agent[2179]: 2026-04-16 23:31:55.8525 INFO https_proxy: Apr 16 23:31:56.020752 systemd-resolved[1883]: System hostname changed to 'ip-172-31-18-112'. 
Apr 16 23:31:56.021310 systemd-hostnamed[2017]: Hostname set to (transient) Apr 16 23:31:56.064331 amazon-ssm-agent[2179]: 2026-04-16 23:31:55.8525 INFO http_proxy: Apr 16 23:31:56.129259 containerd[2004]: time="2026-04-16T23:31:56.129129612Z" level=info msg="Start subscribing containerd event" Apr 16 23:31:56.129422 containerd[2004]: time="2026-04-16T23:31:56.129277380Z" level=info msg="Start recovering state" Apr 16 23:31:56.130991 containerd[2004]: time="2026-04-16T23:31:56.130921164Z" level=info msg="Start event monitor" Apr 16 23:31:56.130991 containerd[2004]: time="2026-04-16T23:31:56.130984980Z" level=info msg="Start cni network conf syncer for default" Apr 16 23:31:56.131149 containerd[2004]: time="2026-04-16T23:31:56.131021652Z" level=info msg="Start streaming server" Apr 16 23:31:56.131149 containerd[2004]: time="2026-04-16T23:31:56.131045004Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 16 23:31:56.131149 containerd[2004]: time="2026-04-16T23:31:56.131073744Z" level=info msg="runtime interface starting up..." Apr 16 23:31:56.131149 containerd[2004]: time="2026-04-16T23:31:56.131100300Z" level=info msg="starting plugins..." Apr 16 23:31:56.131149 containerd[2004]: time="2026-04-16T23:31:56.131137416Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 16 23:31:56.140318 containerd[2004]: time="2026-04-16T23:31:56.134016960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 23:31:56.140318 containerd[2004]: time="2026-04-16T23:31:56.134160696Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 16 23:31:56.140318 containerd[2004]: time="2026-04-16T23:31:56.136259484Z" level=info msg="containerd successfully booted in 0.939543s" Apr 16 23:31:56.134428 systemd[1]: Started containerd.service - containerd container runtime. 
Apr 16 23:31:56.162888 amazon-ssm-agent[2179]: 2026-04-16 23:31:55.8525 INFO no_proxy: Apr 16 23:31:56.261682 amazon-ssm-agent[2179]: 2026-04-16 23:31:55.8528 INFO Checking if agent identity type OnPrem can be assumed Apr 16 23:31:56.361091 amazon-ssm-agent[2179]: 2026-04-16 23:31:55.8529 INFO Checking if agent identity type EC2 can be assumed Apr 16 23:31:56.399008 tar[2000]: linux-arm64/README.md Apr 16 23:31:56.429500 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 16 23:31:56.460619 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.0425 INFO Agent will take identity from EC2 Apr 16 23:31:56.559947 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.0461 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Apr 16 23:31:56.659301 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.0462 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Apr 16 23:31:56.730389 sshd[2185]: Accepted publickey for core from 20.229.252.112 port 55634 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:31:56.735247 sshd-session[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:31:56.753664 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 23:31:56.759232 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.0462 INFO [amazon-ssm-agent] Starting Core Agent Apr 16 23:31:56.760445 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 23:31:56.776072 amazon-ssm-agent[2179]: 2026/04/16 23:31:56 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:56.776072 amazon-ssm-agent[2179]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 16 23:31:56.776572 amazon-ssm-agent[2179]: 2026/04/16 23:31:56 processing appconfig overrides Apr 16 23:31:56.801097 systemd-logind[1986]: New session 1 of user core. Apr 16 23:31:56.816751 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Apr 16 23:31:56.829939 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 16 23:31:56.834651 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.0462 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Apr 16 23:31:56.834651 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.0462 INFO [Registrar] Starting registrar module Apr 16 23:31:56.834651 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.0517 INFO [EC2Identity] Checking disk for registration info Apr 16 23:31:56.834651 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.0518 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Apr 16 23:31:56.834651 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.0518 INFO [EC2Identity] Generating registration keypair Apr 16 23:31:56.834651 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.7315 INFO [EC2Identity] Checking write access before registering Apr 16 23:31:56.834651 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.7324 INFO [EC2Identity] Registering EC2 instance with Systems Manager Apr 16 23:31:56.834651 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.7754 INFO [EC2Identity] EC2 registration was successful. Apr 16 23:31:56.834651 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.7755 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Apr 16 23:31:56.834651 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.7757 INFO [CredentialRefresher] credentialRefresher has started Apr 16 23:31:56.834651 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.7758 INFO [CredentialRefresher] Starting credentials refresher loop Apr 16 23:31:56.837633 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.8339 INFO EC2RoleProvider Successfully connected with instance profile role credentials Apr 16 23:31:56.837633 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.8343 INFO [CredentialRefresher] Credentials ready Apr 16 23:31:56.856946 (systemd)[2252]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 23:31:56.859121 amazon-ssm-agent[2179]: 2026-04-16 23:31:56.8375 INFO [CredentialRefresher] Next credential rotation will be in 29.9999404037 minutes Apr 16 23:31:56.866777 systemd-logind[1986]: New session c1 of user core. Apr 16 23:31:57.181992 systemd[2252]: Queued start job for default target default.target. Apr 16 23:31:57.191946 systemd[2252]: Created slice app.slice - User Application Slice. Apr 16 23:31:57.192030 systemd[2252]: Reached target paths.target - Paths. Apr 16 23:31:57.192133 systemd[2252]: Reached target timers.target - Timers. Apr 16 23:31:57.195182 systemd[2252]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 23:31:57.232484 systemd[2252]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 23:31:57.232747 systemd[2252]: Reached target sockets.target - Sockets. Apr 16 23:31:57.232865 systemd[2252]: Reached target basic.target - Basic System. Apr 16 23:31:57.232952 systemd[2252]: Reached target default.target - Main User Target. Apr 16 23:31:57.233014 systemd[2252]: Startup finished in 347ms. Apr 16 23:31:57.233437 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 23:31:57.243758 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 16 23:31:57.763690 systemd[1]: Started sshd@1-172.31.18.112:22-20.229.252.112:41460.service - OpenSSH per-connection server daemon (20.229.252.112:41460). Apr 16 23:31:57.865772 amazon-ssm-agent[2179]: 2026-04-16 23:31:57.8654 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Apr 16 23:31:57.967153 amazon-ssm-agent[2179]: 2026-04-16 23:31:57.8689 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2269) started Apr 16 23:31:58.068545 amazon-ssm-agent[2179]: 2026-04-16 23:31:57.8689 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Apr 16 23:31:58.260126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:31:58.266162 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 23:31:58.271837 systemd[1]: Startup finished in 3.773s (kernel) + 9.514s (initrd) + 11.122s (userspace) = 24.411s. Apr 16 23:31:58.276774 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 23:31:58.652568 sshd[2264]: Accepted publickey for core from 20.229.252.112 port 41460 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:31:58.655251 sshd-session[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:31:58.667458 systemd-logind[1986]: New session 2 of user core. Apr 16 23:31:58.673673 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 16 23:31:59.146544 sshd[2294]: Connection closed by 20.229.252.112 port 41460 Apr 16 23:31:59.147472 sshd-session[2264]: pam_unix(sshd:session): session closed for user core Apr 16 23:31:59.154236 systemd[1]: sshd@1-172.31.18.112:22-20.229.252.112:41460.service: Deactivated successfully. Apr 16 23:31:59.155821 systemd-logind[1986]: Session 2 logged out. 
Waiting for processes to exit. Apr 16 23:31:59.159980 systemd[1]: session-2.scope: Deactivated successfully. Apr 16 23:31:59.168209 systemd-logind[1986]: Removed session 2. Apr 16 23:31:59.333110 kubelet[2285]: E0416 23:31:59.333048 2285 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 23:31:59.341183 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 23:31:59.341628 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 23:31:59.343514 systemd[1]: kubelet.service: Consumed 1.321s CPU time, 247.8M memory peak. Apr 16 23:31:59.348855 systemd[1]: Started sshd@2-172.31.18.112:22-20.229.252.112:41468.service - OpenSSH per-connection server daemon (20.229.252.112:41468). Apr 16 23:32:00.238548 sshd[2302]: Accepted publickey for core from 20.229.252.112 port 41468 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:32:00.241060 sshd-session[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:32:00.251111 systemd-logind[1986]: New session 3 of user core. Apr 16 23:32:00.257695 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 16 23:32:00.733398 sshd[2305]: Connection closed by 20.229.252.112 port 41468 Apr 16 23:32:00.732622 sshd-session[2302]: pam_unix(sshd:session): session closed for user core Apr 16 23:32:00.739290 systemd-logind[1986]: Session 3 logged out. Waiting for processes to exit. Apr 16 23:32:00.739577 systemd[1]: sshd@2-172.31.18.112:22-20.229.252.112:41468.service: Deactivated successfully. Apr 16 23:32:00.743248 systemd[1]: session-3.scope: Deactivated successfully. Apr 16 23:32:00.748237 systemd-logind[1986]: Removed session 3. 
Apr 16 23:32:00.911266 systemd[1]: Started sshd@3-172.31.18.112:22-20.229.252.112:41470.service - OpenSSH per-connection server daemon (20.229.252.112:41470). Apr 16 23:32:01.803430 sshd[2311]: Accepted publickey for core from 20.229.252.112 port 41470 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:32:01.805047 sshd-session[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:32:01.812146 systemd-logind[1986]: New session 4 of user core. Apr 16 23:32:01.823625 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 16 23:32:02.304427 sshd[2314]: Connection closed by 20.229.252.112 port 41470 Apr 16 23:32:02.305303 sshd-session[2311]: pam_unix(sshd:session): session closed for user core Apr 16 23:32:02.312680 systemd[1]: sshd@3-172.31.18.112:22-20.229.252.112:41470.service: Deactivated successfully. Apr 16 23:32:02.315932 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 23:32:02.317605 systemd-logind[1986]: Session 4 logged out. Waiting for processes to exit. Apr 16 23:32:02.321105 systemd-logind[1986]: Removed session 4. Apr 16 23:32:02.490285 systemd[1]: Started sshd@4-172.31.18.112:22-20.229.252.112:41472.service - OpenSSH per-connection server daemon (20.229.252.112:41472). Apr 16 23:32:02.491224 systemd-resolved[1883]: Clock change detected. Flushing caches. Apr 16 23:32:03.025206 sshd[2320]: Accepted publickey for core from 20.229.252.112 port 41472 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:32:03.027301 sshd-session[2320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:32:03.034697 systemd-logind[1986]: New session 5 of user core. Apr 16 23:32:03.046384 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 16 23:32:03.393027 sudo[2324]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 16 23:32:03.394600 sudo[2324]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:32:03.417070 sudo[2324]: pam_unix(sudo:session): session closed for user root Apr 16 23:32:03.586177 sshd[2323]: Connection closed by 20.229.252.112 port 41472 Apr 16 23:32:03.585794 sshd-session[2320]: pam_unix(sshd:session): session closed for user core Apr 16 23:32:03.592148 systemd[1]: sshd@4-172.31.18.112:22-20.229.252.112:41472.service: Deactivated successfully. Apr 16 23:32:03.596018 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 23:32:03.600941 systemd-logind[1986]: Session 5 logged out. Waiting for processes to exit. Apr 16 23:32:03.603082 systemd-logind[1986]: Removed session 5. Apr 16 23:32:03.768159 systemd[1]: Started sshd@5-172.31.18.112:22-20.229.252.112:41474.service - OpenSSH per-connection server daemon (20.229.252.112:41474). Apr 16 23:32:04.663415 sshd[2330]: Accepted publickey for core from 20.229.252.112 port 41474 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:32:04.665877 sshd-session[2330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:32:04.674722 systemd-logind[1986]: New session 6 of user core. Apr 16 23:32:04.681376 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 16 23:32:05.004895 sudo[2335]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 16 23:32:05.005598 sudo[2335]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:32:05.012280 sudo[2335]: pam_unix(sudo:session): session closed for user root Apr 16 23:32:05.022037 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 16 23:32:05.023314 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:32:05.039436 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 23:32:05.101882 augenrules[2357]: No rules Apr 16 23:32:05.104695 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 23:32:05.105511 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 23:32:05.108294 sudo[2334]: pam_unix(sudo:session): session closed for user root Apr 16 23:32:05.276667 sshd[2333]: Connection closed by 20.229.252.112 port 41474 Apr 16 23:32:05.276424 sshd-session[2330]: pam_unix(sshd:session): session closed for user core Apr 16 23:32:05.285198 systemd[1]: sshd@5-172.31.18.112:22-20.229.252.112:41474.service: Deactivated successfully. Apr 16 23:32:05.289293 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 23:32:05.291073 systemd-logind[1986]: Session 6 logged out. Waiting for processes to exit. Apr 16 23:32:05.295218 systemd-logind[1986]: Removed session 6. Apr 16 23:32:05.454592 systemd[1]: Started sshd@6-172.31.18.112:22-20.229.252.112:36662.service - OpenSSH per-connection server daemon (20.229.252.112:36662). 
Apr 16 23:32:06.338239 sshd[2366]: Accepted publickey for core from 20.229.252.112 port 36662 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:32:06.340733 sshd-session[2366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:32:06.348590 systemd-logind[1986]: New session 7 of user core. Apr 16 23:32:06.358427 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 23:32:06.675785 sudo[2370]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 23:32:06.677007 sudo[2370]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 23:32:07.651353 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 16 23:32:07.664621 (dockerd)[2387]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 23:32:08.216668 dockerd[2387]: time="2026-04-16T23:32:08.216594700Z" level=info msg="Starting up" Apr 16 23:32:08.222826 dockerd[2387]: time="2026-04-16T23:32:08.222769936Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 16 23:32:08.246286 dockerd[2387]: time="2026-04-16T23:32:08.246213952Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 16 23:32:08.275311 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1601537952-merged.mount: Deactivated successfully. Apr 16 23:32:08.315993 dockerd[2387]: time="2026-04-16T23:32:08.315933328Z" level=info msg="Loading containers: start." Apr 16 23:32:08.331174 kernel: Initializing XFRM netlink socket Apr 16 23:32:08.707374 (udev-worker)[2409]: Network interface NamePolicy= disabled on kernel command line. 
Apr 16 23:32:08.785260 systemd-networkd[1882]: docker0: Link UP Apr 16 23:32:08.791722 dockerd[2387]: time="2026-04-16T23:32:08.791674351Z" level=info msg="Loading containers: done." Apr 16 23:32:08.861680 dockerd[2387]: time="2026-04-16T23:32:08.861614407Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 16 23:32:08.861911 dockerd[2387]: time="2026-04-16T23:32:08.861731731Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 16 23:32:08.861911 dockerd[2387]: time="2026-04-16T23:32:08.861878551Z" level=info msg="Initializing buildkit" Apr 16 23:32:08.907934 dockerd[2387]: time="2026-04-16T23:32:08.907872763Z" level=info msg="Completed buildkit initialization" Apr 16 23:32:08.925142 dockerd[2387]: time="2026-04-16T23:32:08.925038511Z" level=info msg="Daemon has completed initialization" Apr 16 23:32:08.925492 dockerd[2387]: time="2026-04-16T23:32:08.925303831Z" level=info msg="API listen on /run/docker.sock" Apr 16 23:32:08.928114 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 16 23:32:09.143295 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 23:32:09.146565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:32:09.269687 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck801206236-merged.mount: Deactivated successfully. Apr 16 23:32:09.777706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 23:32:09.799008 (kubelet)[2604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 23:32:09.882118 kubelet[2604]: E0416 23:32:09.882016 2604 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 23:32:09.889569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:32:09.889886 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 23:32:09.891597 systemd[1]: kubelet.service: Consumed 321ms CPU time, 106.9M memory peak.
Apr 16 23:32:10.333415 containerd[2004]: time="2026-04-16T23:32:10.333344634Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\""
Apr 16 23:32:11.042990 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2699152701.mount: Deactivated successfully.
Apr 16 23:32:12.795255 containerd[2004]: time="2026-04-16T23:32:12.795194375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:12.798312 containerd[2004]: time="2026-04-16T23:32:12.798259847Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=24608785"
Apr 16 23:32:12.800018 containerd[2004]: time="2026-04-16T23:32:12.799929731Z" level=info msg="ImageCreate event name:\"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:12.812089 containerd[2004]: time="2026-04-16T23:32:12.811998851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:12.814081 containerd[2004]: time="2026-04-16T23:32:12.813827567Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"24605384\" in 2.480411785s"
Apr 16 23:32:12.814081 containerd[2004]: time="2026-04-16T23:32:12.813883991Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:09c946ff1743c56c0d49ef90ba95500741e0534f2f590ec98c924e4673ee3096\""
Apr 16 23:32:12.815441 containerd[2004]: time="2026-04-16T23:32:12.815022611Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\""
Apr 16 23:32:14.831971 containerd[2004]: time="2026-04-16T23:32:14.831899773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:14.833808 containerd[2004]: time="2026-04-16T23:32:14.833227873Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=19073294"
Apr 16 23:32:14.835050 containerd[2004]: time="2026-04-16T23:32:14.834976441Z" level=info msg="ImageCreate event name:\"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:14.842206 containerd[2004]: time="2026-04-16T23:32:14.842140525Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:14.844366 containerd[2004]: time="2026-04-16T23:32:14.844301281Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"20579933\" in 2.02911907s"
Apr 16 23:32:14.844602 containerd[2004]: time="2026-04-16T23:32:14.844564261Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:95ce7d322e267614405a2a0eccfc0a1bdf5664dd9ab089bdfa9ae74d5ccb05a7\""
Apr 16 23:32:14.845853 containerd[2004]: time="2026-04-16T23:32:14.845394169Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\""
Apr 16 23:32:16.234299 containerd[2004]: time="2026-04-16T23:32:16.234210252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:16.237888 containerd[2004]: time="2026-04-16T23:32:16.237802284Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=13800836"
Apr 16 23:32:16.238958 containerd[2004]: time="2026-04-16T23:32:16.238875768Z" level=info msg="ImageCreate event name:\"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:16.246880 containerd[2004]: time="2026-04-16T23:32:16.245896512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:16.248195 containerd[2004]: time="2026-04-16T23:32:16.248094444Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"15307493\" in 1.402586623s"
Apr 16 23:32:16.248195 containerd[2004]: time="2026-04-16T23:32:16.248188272Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:77d7d4cb9aa826105b6410a50df1dda7462ec663ced995347d8c171b04b0ee81\""
Apr 16 23:32:16.249096 containerd[2004]: time="2026-04-16T23:32:16.248987424Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\""
Apr 16 23:32:17.478961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1215930144.mount: Deactivated successfully.
Apr 16 23:32:17.908163 containerd[2004]: time="2026-04-16T23:32:17.908073088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:17.910780 containerd[2004]: time="2026-04-16T23:32:17.910355500Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=22340584"
Apr 16 23:32:17.911912 containerd[2004]: time="2026-04-16T23:32:17.911851876Z" level=info msg="ImageCreate event name:\"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:17.915200 containerd[2004]: time="2026-04-16T23:32:17.915140152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:17.916563 containerd[2004]: time="2026-04-16T23:32:17.916507708Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"22339603\" in 1.667455604s"
Apr 16 23:32:17.916644 containerd[2004]: time="2026-04-16T23:32:17.916562368Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:8c75fb69e773da539298848d12a0a12029818ee910a62f2abd68aa1a5805991c\""
Apr 16 23:32:17.917233 containerd[2004]: time="2026-04-16T23:32:17.917180392Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 16 23:32:18.425271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181819050.mount: Deactivated successfully.
Apr 16 23:32:19.893423 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 16 23:32:19.898341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:32:20.372070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:32:20.392035 (kubelet)[2752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 23:32:20.524189 kubelet[2752]: E0416 23:32:20.524089 2752 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 23:32:20.529600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 23:32:20.529907 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 23:32:20.530940 systemd[1]: kubelet.service: Consumed 368ms CPU time, 105M memory peak.
Apr 16 23:32:20.885358 containerd[2004]: time="2026-04-16T23:32:20.885280771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:20.918378 containerd[2004]: time="2026-04-16T23:32:20.918257515Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172211"
Apr 16 23:32:20.939432 containerd[2004]: time="2026-04-16T23:32:20.939201283Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:20.972078 containerd[2004]: time="2026-04-16T23:32:20.971994523Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:20.974692 containerd[2004]: time="2026-04-16T23:32:20.974173267Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 3.056930859s"
Apr 16 23:32:20.974692 containerd[2004]: time="2026-04-16T23:32:20.974246803Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\""
Apr 16 23:32:20.975243 containerd[2004]: time="2026-04-16T23:32:20.974912059Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 16 23:32:21.633324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983725344.mount: Deactivated successfully.
Apr 16 23:32:21.642156 containerd[2004]: time="2026-04-16T23:32:21.642011911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:21.643693 containerd[2004]: time="2026-04-16T23:32:21.643633903Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Apr 16 23:32:21.645140 containerd[2004]: time="2026-04-16T23:32:21.645052351Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:21.649827 containerd[2004]: time="2026-04-16T23:32:21.649245115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:21.650980 containerd[2004]: time="2026-04-16T23:32:21.650928859Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 675.937636ms"
Apr 16 23:32:21.651202 containerd[2004]: time="2026-04-16T23:32:21.651170431Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Apr 16 23:32:21.651942 containerd[2004]: time="2026-04-16T23:32:21.651897811Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 16 23:32:22.264850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1550644051.mount: Deactivated successfully.
Apr 16 23:32:23.662155 containerd[2004]: time="2026-04-16T23:32:23.661233093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:23.663710 containerd[2004]: time="2026-04-16T23:32:23.663650169Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21752308"
Apr 16 23:32:23.664280 containerd[2004]: time="2026-04-16T23:32:23.664230357Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:23.668995 containerd[2004]: time="2026-04-16T23:32:23.668913777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:32:23.671370 containerd[2004]: time="2026-04-16T23:32:23.671278077Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 2.019047182s"
Apr 16 23:32:23.671370 containerd[2004]: time="2026-04-16T23:32:23.671360277Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\""
Apr 16 23:32:25.693113 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 16 23:32:27.695536 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:32:27.696695 systemd[1]: kubelet.service: Consumed 368ms CPU time, 105M memory peak.
Apr 16 23:32:27.702434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:32:27.750424 systemd[1]: Reload requested from client PID 2853 ('systemctl') (unit session-7.scope)...
Apr 16 23:32:27.750667 systemd[1]: Reloading...
Apr 16 23:32:27.952161 zram_generator::config[2900]: No configuration found.
Apr 16 23:32:28.416641 systemd[1]: Reloading finished in 665 ms.
Apr 16 23:32:28.475836 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 16 23:32:28.476038 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 16 23:32:28.476763 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:32:28.476881 systemd[1]: kubelet.service: Consumed 159ms CPU time, 71.7M memory peak.
Apr 16 23:32:28.480739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 23:32:30.411330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 23:32:30.426616 (kubelet)[2957]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 23:32:30.501511 kubelet[2957]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 23:32:31.392758 kubelet[2957]: I0416 23:32:31.392384 2957 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 16 23:32:31.393059 kubelet[2957]: I0416 23:32:31.393040 2957 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 23:32:31.395675 kubelet[2957]: I0416 23:32:31.395639 2957 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 16 23:32:31.395819 kubelet[2957]: I0416 23:32:31.395797 2957 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 23:32:31.396405 kubelet[2957]: I0416 23:32:31.396383 2957 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 16 23:32:31.405041 kubelet[2957]: E0416 23:32:31.404972 2957 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.18.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.112:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 23:32:31.406243 kubelet[2957]: I0416 23:32:31.406192 2957 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 23:32:31.414690 kubelet[2957]: I0416 23:32:31.414647 2957 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 23:32:31.421159 kubelet[2957]: I0416 23:32:31.420736 2957 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 16 23:32:31.421310 kubelet[2957]: I0416 23:32:31.421271 2957 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 23:32:31.421632 kubelet[2957]: I0416 23:32:31.421315 2957 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-112","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 23:32:31.421887 kubelet[2957]: I0416 23:32:31.421640 2957 topology_manager.go:143] "Creating topology manager with none policy"
Apr 16 23:32:31.421887 kubelet[2957]: I0416 23:32:31.421658 2957 container_manager_linux.go:308] "Creating device plugin manager"
Apr 16 23:32:31.421887 kubelet[2957]: I0416 23:32:31.421811 2957 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 16 23:32:31.425300 kubelet[2957]: I0416 23:32:31.425246 2957 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 16 23:32:31.425695 kubelet[2957]: I0416 23:32:31.425669 2957 kubelet.go:482] "Attempting to sync node with API server"
Apr 16 23:32:31.425776 kubelet[2957]: I0416 23:32:31.425712 2957 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 23:32:31.425776 kubelet[2957]: I0416 23:32:31.425747 2957 kubelet.go:394] "Adding apiserver pod source"
Apr 16 23:32:31.426177 kubelet[2957]: I0416 23:32:31.425787 2957 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 23:32:31.433313 kubelet[2957]: I0416 23:32:31.433246 2957 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 16 23:32:31.435112 kubelet[2957]: I0416 23:32:31.435054 2957 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 23:32:31.435277 kubelet[2957]: I0416 23:32:31.435145 2957 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 16 23:32:31.435277 kubelet[2957]: W0416 23:32:31.435218 2957 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 16 23:32:31.445187 kubelet[2957]: I0416 23:32:31.443653 2957 server.go:1257] "Started kubelet"
Apr 16 23:32:31.451676 kubelet[2957]: I0416 23:32:31.451621 2957 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 16 23:32:31.456318 kubelet[2957]: E0416 23:32:31.454200 2957 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.112:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.112:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-112.18a6fa5643117e7b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-112,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-112,},FirstTimestamp:2026-04-16 23:32:31.443590779 +0000 UTC m=+1.010095638,LastTimestamp:2026-04-16 23:32:31.443590779 +0000 UTC m=+1.010095638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-112,}"
Apr 16 23:32:31.457426 kubelet[2957]: I0416 23:32:31.457354 2957 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 23:32:31.459463 kubelet[2957]: I0416 23:32:31.459425 2957 server.go:317] "Adding debug handlers to kubelet server"
Apr 16 23:32:31.460392 kubelet[2957]: I0416 23:32:31.460360 2957 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 16 23:32:31.464202 kubelet[2957]: I0416 23:32:31.460542 2957 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 16 23:32:31.464202 kubelet[2957]: E0416 23:32:31.460862 2957 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-18-112\" not found"
Apr 16 23:32:31.464389 kubelet[2957]: I0416 23:32:31.464307 2957 reconciler.go:29] "Reconciler: start to sync state"
Apr 16 23:32:31.464985 kubelet[2957]: E0416 23:32:31.464924 2957 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-112?timeout=10s\": dial tcp 172.31.18.112:6443: connect: connection refused" interval="200ms"
Apr 16 23:32:31.466545 kubelet[2957]: I0416 23:32:31.466452 2957 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 23:32:31.467529 kubelet[2957]: I0416 23:32:31.467502 2957 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 16 23:32:31.467984 kubelet[2957]: I0416 23:32:31.467954 2957 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 23:32:31.468237 kubelet[2957]: I0416 23:32:31.466974 2957 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 23:32:31.472012 kubelet[2957]: E0416 23:32:31.471975 2957 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 16 23:32:31.473969 kubelet[2957]: I0416 23:32:31.473932 2957 factory.go:223] Registration of the containerd container factory successfully
Apr 16 23:32:31.473969 kubelet[2957]: I0416 23:32:31.473963 2957 factory.go:223] Registration of the systemd container factory successfully
Apr 16 23:32:31.474544 kubelet[2957]: I0416 23:32:31.474398 2957 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 23:32:31.503366 kubelet[2957]: I0416 23:32:31.503316 2957 cpu_manager.go:225] "Starting" policy="none"
Apr 16 23:32:31.503366 kubelet[2957]: I0416 23:32:31.503350 2957 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 16 23:32:31.503934 kubelet[2957]: I0416 23:32:31.503384 2957 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 16 23:32:31.507253 kubelet[2957]: I0416 23:32:31.507207 2957 policy_none.go:50] "Start"
Apr 16 23:32:31.507253 kubelet[2957]: I0416 23:32:31.507250 2957 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 16 23:32:31.507440 kubelet[2957]: I0416 23:32:31.507276 2957 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 16 23:32:31.509723 kubelet[2957]: I0416 23:32:31.509256 2957 policy_none.go:44] "Start"
Apr 16 23:32:31.517699 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 16 23:32:31.527358 kubelet[2957]: I0416 23:32:31.527276 2957 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 16 23:32:31.533020 kubelet[2957]: I0416 23:32:31.532834 2957 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 16 23:32:31.533020 kubelet[2957]: I0416 23:32:31.532878 2957 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 16 23:32:31.533020 kubelet[2957]: I0416 23:32:31.532919 2957 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 16 23:32:31.533020 kubelet[2957]: E0416 23:32:31.532990 2957 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 16 23:32:31.541030 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 16 23:32:31.550631 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 16 23:32:31.563885 kubelet[2957]: E0416 23:32:31.563838 2957 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 16 23:32:31.565185 kubelet[2957]: I0416 23:32:31.564521 2957 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 16 23:32:31.565185 kubelet[2957]: I0416 23:32:31.564563 2957 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 16 23:32:31.565185 kubelet[2957]: I0416 23:32:31.565155 2957 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 16 23:32:31.568734 kubelet[2957]: E0416 23:32:31.568665 2957 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 16 23:32:31.568734 kubelet[2957]: E0416 23:32:31.568733 2957 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-112\" not found"
Apr 16 23:32:31.661064 systemd[1]: Created slice kubepods-burstable-pod6e5f8ad4adda9db793f30d6589e5c170.slice - libcontainer container kubepods-burstable-pod6e5f8ad4adda9db793f30d6589e5c170.slice.
Apr 16 23:32:31.664908 kubelet[2957]: I0416 23:32:31.664874 2957 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e5f8ad4adda9db793f30d6589e5c170-ca-certs\") pod \"kube-apiserver-ip-172-31-18-112\" (UID: \"6e5f8ad4adda9db793f30d6589e5c170\") " pod="kube-system/kube-apiserver-ip-172-31-18-112"
Apr 16 23:32:31.664980 kubelet[2957]: I0416 23:32:31.664943 2957 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e5f8ad4adda9db793f30d6589e5c170-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-112\" (UID: \"6e5f8ad4adda9db793f30d6589e5c170\") " pod="kube-system/kube-apiserver-ip-172-31-18-112"
Apr 16 23:32:31.665046 kubelet[2957]: I0416 23:32:31.664991 2957 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f74d72ce145bdb7239cc353f83e465d-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-112\" (UID: \"4f74d72ce145bdb7239cc353f83e465d\") " pod="kube-system/kube-controller-manager-ip-172-31-18-112"
Apr 16 23:32:31.665111 kubelet[2957]: I0416 23:32:31.665039 2957 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f74d72ce145bdb7239cc353f83e465d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-112\" (UID: \"4f74d72ce145bdb7239cc353f83e465d\") " pod="kube-system/kube-controller-manager-ip-172-31-18-112"
Apr 16 23:32:31.665111 kubelet[2957]: I0416 23:32:31.665089 2957 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f74d72ce145bdb7239cc353f83e465d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-112\" (UID: \"4f74d72ce145bdb7239cc353f83e465d\") " pod="kube-system/kube-controller-manager-ip-172-31-18-112"
Apr 16 23:32:31.665248 kubelet[2957]: I0416 23:32:31.665166 2957 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9b077e1404348a1cf4ee57de1b24d81-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-112\" (UID: \"e9b077e1404348a1cf4ee57de1b24d81\") " pod="kube-system/kube-scheduler-ip-172-31-18-112"
Apr 16 23:32:31.665248 kubelet[2957]: I0416 23:32:31.665220 2957 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e5f8ad4adda9db793f30d6589e5c170-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-112\" (UID: \"6e5f8ad4adda9db793f30d6589e5c170\") " pod="kube-system/kube-apiserver-ip-172-31-18-112"
Apr 16 23:32:31.665348 kubelet[2957]: I0416 23:32:31.665265 2957 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f74d72ce145bdb7239cc353f83e465d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-112\" (UID: \"4f74d72ce145bdb7239cc353f83e465d\") " pod="kube-system/kube-controller-manager-ip-172-31-18-112"
Apr 16 23:32:31.665348 kubelet[2957]: I0416 23:32:31.665325 2957 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f74d72ce145bdb7239cc353f83e465d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-112\" (UID: \"4f74d72ce145bdb7239cc353f83e465d\") " pod="kube-system/kube-controller-manager-ip-172-31-18-112"
Apr 16 23:32:31.673285 kubelet[2957]: I0416 23:32:31.673229 2957 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-112"
Apr 16 23:32:31.681606 kubelet[2957]: E0416 23:32:31.681548 2957 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.18.112:6443/api/v1/nodes\": dial tcp 172.31.18.112:6443: connect: connection refused" node="ip-172-31-18-112"
Apr 16 23:32:31.683270 kubelet[2957]: E0416 23:32:31.682929 2957 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-112?timeout=10s\": dial tcp 172.31.18.112:6443: connect: connection refused" interval="400ms"
Apr 16 23:32:31.684110 kubelet[2957]: E0416 23:32:31.683998 2957 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-112\" not found" node="ip-172-31-18-112"
Apr 16 23:32:31.696803 systemd[1]: Created slice kubepods-burstable-pod4f74d72ce145bdb7239cc353f83e465d.slice - libcontainer container kubepods-burstable-pod4f74d72ce145bdb7239cc353f83e465d.slice.
Apr 16 23:32:31.702537 kubelet[2957]: E0416 23:32:31.702490 2957 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-112\" not found" node="ip-172-31-18-112"
Apr 16 23:32:31.705846 systemd[1]: Created slice kubepods-burstable-pode9b077e1404348a1cf4ee57de1b24d81.slice - libcontainer container kubepods-burstable-pode9b077e1404348a1cf4ee57de1b24d81.slice.
Apr 16 23:32:31.710368 kubelet[2957]: E0416 23:32:31.710310 2957 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-112\" not found" node="ip-172-31-18-112" Apr 16 23:32:31.885160 kubelet[2957]: I0416 23:32:31.884909 2957 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-112" Apr 16 23:32:31.885543 kubelet[2957]: E0416 23:32:31.885420 2957 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.18.112:6443/api/v1/nodes\": dial tcp 172.31.18.112:6443: connect: connection refused" node="ip-172-31-18-112" Apr 16 23:32:31.991938 containerd[2004]: time="2026-04-16T23:32:31.991783446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-112,Uid:6e5f8ad4adda9db793f30d6589e5c170,Namespace:kube-system,Attempt:0,}" Apr 16 23:32:32.006987 containerd[2004]: time="2026-04-16T23:32:32.006917198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-112,Uid:4f74d72ce145bdb7239cc353f83e465d,Namespace:kube-system,Attempt:0,}" Apr 16 23:32:32.014524 containerd[2004]: time="2026-04-16T23:32:32.014449958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-112,Uid:e9b077e1404348a1cf4ee57de1b24d81,Namespace:kube-system,Attempt:0,}" Apr 16 23:32:32.085191 kubelet[2957]: E0416 23:32:32.084732 2957 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-112?timeout=10s\": dial tcp 172.31.18.112:6443: connect: connection refused" interval="800ms" Apr 16 23:32:32.288035 kubelet[2957]: I0416 23:32:32.287911 2957 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-112" Apr 16 23:32:32.288835 kubelet[2957]: E0416 23:32:32.288754 2957 kubelet_node_status.go:106] "Unable to register node with API server" 
err="Post \"https://172.31.18.112:6443/api/v1/nodes\": dial tcp 172.31.18.112:6443: connect: connection refused" node="ip-172-31-18-112" Apr 16 23:32:32.558001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3555744654.mount: Deactivated successfully. Apr 16 23:32:32.567864 containerd[2004]: time="2026-04-16T23:32:32.567591521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:32:32.571452 containerd[2004]: time="2026-04-16T23:32:32.571394717Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:32:32.574355 containerd[2004]: time="2026-04-16T23:32:32.574307093Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Apr 16 23:32:32.575531 containerd[2004]: time="2026-04-16T23:32:32.575483717Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 23:32:32.580179 containerd[2004]: time="2026-04-16T23:32:32.579423629Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:32:32.582305 containerd[2004]: time="2026-04-16T23:32:32.582246353Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:32:32.582782 containerd[2004]: time="2026-04-16T23:32:32.582728981Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 23:32:32.590252 containerd[2004]: time="2026-04-16T23:32:32.589106885Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 593.495931ms" Apr 16 23:32:32.592215 containerd[2004]: time="2026-04-16T23:32:32.592107485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 23:32:32.593377 containerd[2004]: time="2026-04-16T23:32:32.589433105Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 580.479195ms" Apr 16 23:32:32.603160 containerd[2004]: time="2026-04-16T23:32:32.603015401Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 586.250007ms" Apr 16 23:32:32.757433 containerd[2004]: time="2026-04-16T23:32:32.757361382Z" level=info msg="connecting to shim 7b64f35198489d5250541f925603d12569e8e1db240f68434d84f97b09ffbb1f" address="unix:///run/containerd/s/fed24d4c014c2da3a20d11175572ef384acbc90139748cbd7d7a228e46c8fd7f" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:32:32.806181 containerd[2004]: time="2026-04-16T23:32:32.805894590Z" level=info msg="connecting to shim 
ef55280aed513c2660c6d3c20831c40c6a5ea54b50ebd941c17c0b7b0af30d3f" address="unix:///run/containerd/s/198f5827ff514216da3dbfc48d46d252357d70aa9f3cdfde106234c81236396e" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:32:32.817517 systemd[1]: Started cri-containerd-7b64f35198489d5250541f925603d12569e8e1db240f68434d84f97b09ffbb1f.scope - libcontainer container 7b64f35198489d5250541f925603d12569e8e1db240f68434d84f97b09ffbb1f. Apr 16 23:32:32.826763 containerd[2004]: time="2026-04-16T23:32:32.826626522Z" level=info msg="connecting to shim 74018eed4ad2cc2ce3eaae2e40e4a8f8fe7fdddf4cf22a3d206e830788a582b2" address="unix:///run/containerd/s/07c5b3f5619ff3d4be4d58904fe079e2b3ef6b158f9e7d5004c5ff7b84d0c977" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:32:32.886650 kubelet[2957]: E0416 23:32:32.886574 2957 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-112?timeout=10s\": dial tcp 172.31.18.112:6443: connect: connection refused" interval="1.6s" Apr 16 23:32:32.890831 systemd[1]: Started cri-containerd-ef55280aed513c2660c6d3c20831c40c6a5ea54b50ebd941c17c0b7b0af30d3f.scope - libcontainer container ef55280aed513c2660c6d3c20831c40c6a5ea54b50ebd941c17c0b7b0af30d3f. Apr 16 23:32:32.914487 systemd[1]: Started cri-containerd-74018eed4ad2cc2ce3eaae2e40e4a8f8fe7fdddf4cf22a3d206e830788a582b2.scope - libcontainer container 74018eed4ad2cc2ce3eaae2e40e4a8f8fe7fdddf4cf22a3d206e830788a582b2. 
Apr 16 23:32:33.092490 kubelet[2957]: I0416 23:32:33.092296 2957 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-112" Apr 16 23:32:33.093765 kubelet[2957]: E0416 23:32:33.093689 2957 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.18.112:6443/api/v1/nodes\": dial tcp 172.31.18.112:6443: connect: connection refused" node="ip-172-31-18-112" Apr 16 23:32:33.188103 containerd[2004]: time="2026-04-16T23:32:33.188009944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-112,Uid:6e5f8ad4adda9db793f30d6589e5c170,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b64f35198489d5250541f925603d12569e8e1db240f68434d84f97b09ffbb1f\"" Apr 16 23:32:33.622387 kubelet[2957]: E0416 23:32:33.550537 2957 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.18.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.112:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 23:32:33.633552 containerd[2004]: time="2026-04-16T23:32:33.633376182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-112,Uid:4f74d72ce145bdb7239cc353f83e465d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef55280aed513c2660c6d3c20831c40c6a5ea54b50ebd941c17c0b7b0af30d3f\"" Apr 16 23:32:33.635231 containerd[2004]: time="2026-04-16T23:32:33.635053590Z" level=info msg="CreateContainer within sandbox \"7b64f35198489d5250541f925603d12569e8e1db240f68434d84f97b09ffbb1f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 16 23:32:33.835098 containerd[2004]: time="2026-04-16T23:32:33.835016071Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-112,Uid:e9b077e1404348a1cf4ee57de1b24d81,Namespace:kube-system,Attempt:0,} returns sandbox id \"74018eed4ad2cc2ce3eaae2e40e4a8f8fe7fdddf4cf22a3d206e830788a582b2\"" Apr 16 23:32:33.835390 containerd[2004]: time="2026-04-16T23:32:33.835330843Z" level=info msg="CreateContainer within sandbox \"ef55280aed513c2660c6d3c20831c40c6a5ea54b50ebd941c17c0b7b0af30d3f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 16 23:32:34.029259 containerd[2004]: time="2026-04-16T23:32:34.028814236Z" level=info msg="CreateContainer within sandbox \"74018eed4ad2cc2ce3eaae2e40e4a8f8fe7fdddf4cf22a3d206e830788a582b2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 16 23:32:34.488041 kubelet[2957]: E0416 23:32:34.487951 2957 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-112?timeout=10s\": dial tcp 172.31.18.112:6443: connect: connection refused" interval="3.2s" Apr 16 23:32:34.580926 containerd[2004]: time="2026-04-16T23:32:34.580299391Z" level=info msg="Container f945c3ddf5c38dcab45bc2a69b6165b7705660c68373f8c080e1c70d0cbbee1d: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:32:34.593968 containerd[2004]: time="2026-04-16T23:32:34.592813255Z" level=info msg="Container d46dfeb88c582b61f4d14000259caefdfb777eaf140b9728734e7db6db60a992: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:32:34.597681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4209922148.mount: Deactivated successfully. 
Apr 16 23:32:34.607530 containerd[2004]: time="2026-04-16T23:32:34.607277611Z" level=info msg="Container 78a2426c8b8f033e7a648c5f5a181d645caf9a0b96f048b36fc6e7ef290df15b: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:32:34.616080 containerd[2004]: time="2026-04-16T23:32:34.615970711Z" level=info msg="CreateContainer within sandbox \"7b64f35198489d5250541f925603d12569e8e1db240f68434d84f97b09ffbb1f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f945c3ddf5c38dcab45bc2a69b6165b7705660c68373f8c080e1c70d0cbbee1d\"" Apr 16 23:32:34.618066 containerd[2004]: time="2026-04-16T23:32:34.617995531Z" level=info msg="StartContainer for \"f945c3ddf5c38dcab45bc2a69b6165b7705660c68373f8c080e1c70d0cbbee1d\"" Apr 16 23:32:34.622682 containerd[2004]: time="2026-04-16T23:32:34.622553011Z" level=info msg="CreateContainer within sandbox \"ef55280aed513c2660c6d3c20831c40c6a5ea54b50ebd941c17c0b7b0af30d3f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d46dfeb88c582b61f4d14000259caefdfb777eaf140b9728734e7db6db60a992\"" Apr 16 23:32:34.623798 containerd[2004]: time="2026-04-16T23:32:34.623683435Z" level=info msg="connecting to shim f945c3ddf5c38dcab45bc2a69b6165b7705660c68373f8c080e1c70d0cbbee1d" address="unix:///run/containerd/s/fed24d4c014c2da3a20d11175572ef384acbc90139748cbd7d7a228e46c8fd7f" protocol=ttrpc version=3 Apr 16 23:32:34.624286 containerd[2004]: time="2026-04-16T23:32:34.623757403Z" level=info msg="CreateContainer within sandbox \"74018eed4ad2cc2ce3eaae2e40e4a8f8fe7fdddf4cf22a3d206e830788a582b2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"78a2426c8b8f033e7a648c5f5a181d645caf9a0b96f048b36fc6e7ef290df15b\"" Apr 16 23:32:34.624689 containerd[2004]: time="2026-04-16T23:32:34.624414103Z" level=info msg="StartContainer for \"d46dfeb88c582b61f4d14000259caefdfb777eaf140b9728734e7db6db60a992\"" Apr 16 23:32:34.628397 containerd[2004]: time="2026-04-16T23:32:34.628320307Z" 
level=info msg="StartContainer for \"78a2426c8b8f033e7a648c5f5a181d645caf9a0b96f048b36fc6e7ef290df15b\"" Apr 16 23:32:34.635601 containerd[2004]: time="2026-04-16T23:32:34.635511955Z" level=info msg="connecting to shim d46dfeb88c582b61f4d14000259caefdfb777eaf140b9728734e7db6db60a992" address="unix:///run/containerd/s/198f5827ff514216da3dbfc48d46d252357d70aa9f3cdfde106234c81236396e" protocol=ttrpc version=3 Apr 16 23:32:34.637260 containerd[2004]: time="2026-04-16T23:32:34.637051879Z" level=info msg="connecting to shim 78a2426c8b8f033e7a648c5f5a181d645caf9a0b96f048b36fc6e7ef290df15b" address="unix:///run/containerd/s/07c5b3f5619ff3d4be4d58904fe079e2b3ef6b158f9e7d5004c5ff7b84d0c977" protocol=ttrpc version=3 Apr 16 23:32:34.682600 systemd[1]: Started cri-containerd-f945c3ddf5c38dcab45bc2a69b6165b7705660c68373f8c080e1c70d0cbbee1d.scope - libcontainer container f945c3ddf5c38dcab45bc2a69b6165b7705660c68373f8c080e1c70d0cbbee1d. Apr 16 23:32:34.700096 kubelet[2957]: I0416 23:32:34.699779 2957 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-112" Apr 16 23:32:34.703700 kubelet[2957]: E0416 23:32:34.702915 2957 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.18.112:6443/api/v1/nodes\": dial tcp 172.31.18.112:6443: connect: connection refused" node="ip-172-31-18-112" Apr 16 23:32:34.718458 systemd[1]: Started cri-containerd-d46dfeb88c582b61f4d14000259caefdfb777eaf140b9728734e7db6db60a992.scope - libcontainer container d46dfeb88c582b61f4d14000259caefdfb777eaf140b9728734e7db6db60a992. Apr 16 23:32:34.730486 systemd[1]: Started cri-containerd-78a2426c8b8f033e7a648c5f5a181d645caf9a0b96f048b36fc6e7ef290df15b.scope - libcontainer container 78a2426c8b8f033e7a648c5f5a181d645caf9a0b96f048b36fc6e7ef290df15b. 
Apr 16 23:32:34.841465 containerd[2004]: time="2026-04-16T23:32:34.839889548Z" level=info msg="StartContainer for \"f945c3ddf5c38dcab45bc2a69b6165b7705660c68373f8c080e1c70d0cbbee1d\" returns successfully" Apr 16 23:32:34.899151 containerd[2004]: time="2026-04-16T23:32:34.897888020Z" level=info msg="StartContainer for \"d46dfeb88c582b61f4d14000259caefdfb777eaf140b9728734e7db6db60a992\" returns successfully" Apr 16 23:32:35.001791 containerd[2004]: time="2026-04-16T23:32:35.001431725Z" level=info msg="StartContainer for \"78a2426c8b8f033e7a648c5f5a181d645caf9a0b96f048b36fc6e7ef290df15b\" returns successfully" Apr 16 23:32:35.594714 kubelet[2957]: E0416 23:32:35.594660 2957 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-112\" not found" node="ip-172-31-18-112" Apr 16 23:32:35.609431 kubelet[2957]: E0416 23:32:35.606850 2957 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-112\" not found" node="ip-172-31-18-112" Apr 16 23:32:35.619960 kubelet[2957]: E0416 23:32:35.619906 2957 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-112\" not found" node="ip-172-31-18-112" Apr 16 23:32:36.621161 kubelet[2957]: E0416 23:32:36.620967 2957 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-112\" not found" node="ip-172-31-18-112" Apr 16 23:32:36.623721 kubelet[2957]: E0416 23:32:36.623664 2957 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-112\" not found" node="ip-172-31-18-112" Apr 16 23:32:36.624559 kubelet[2957]: E0416 23:32:36.624511 2957 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-112\" not found" node="ip-172-31-18-112" Apr 16 
23:32:37.624711 kubelet[2957]: E0416 23:32:37.624640 2957 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-112\" not found" node="ip-172-31-18-112" Apr 16 23:32:37.626672 kubelet[2957]: E0416 23:32:37.625455 2957 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-112\" not found" node="ip-172-31-18-112" Apr 16 23:32:37.707448 kubelet[2957]: E0416 23:32:37.707385 2957 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-112\" not found" node="ip-172-31-18-112" Apr 16 23:32:37.906478 kubelet[2957]: I0416 23:32:37.906415 2957 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-112" Apr 16 23:32:37.918146 kubelet[2957]: I0416 23:32:37.918020 2957 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-18-112" Apr 16 23:32:37.964066 kubelet[2957]: I0416 23:32:37.962375 2957 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-112" Apr 16 23:32:37.974091 kubelet[2957]: E0416 23:32:37.974043 2957 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-112\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-112" Apr 16 23:32:37.974291 kubelet[2957]: I0416 23:32:37.974269 2957 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-112" Apr 16 23:32:37.978396 kubelet[2957]: E0416 23:32:37.978352 2957 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-112\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-112" Apr 16 23:32:37.978606 kubelet[2957]: I0416 23:32:37.978586 2957 kubelet.go:3340] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ip-172-31-18-112" Apr 16 23:32:37.981967 kubelet[2957]: E0416 23:32:37.981905 2957 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-112\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-112" Apr 16 23:32:38.435493 kubelet[2957]: I0416 23:32:38.435094 2957 apiserver.go:52] "Watching apiserver" Apr 16 23:32:38.464606 kubelet[2957]: I0416 23:32:38.464548 2957 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 23:32:38.496907 update_engine[1989]: I20260416 23:32:38.496809 1989 update_attempter.cc:509] Updating boot flags... Apr 16 23:32:38.625611 kubelet[2957]: I0416 23:32:38.624103 2957 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-112" Apr 16 23:32:38.791964 kubelet[2957]: I0416 23:32:38.791591 2957 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-112" Apr 16 23:32:39.923946 kubelet[2957]: I0416 23:32:39.923523 2957 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-112" Apr 16 23:32:41.056154 systemd[1]: Reload requested from client PID 3520 ('systemctl') (unit session-7.scope)... Apr 16 23:32:41.056729 systemd[1]: Reloading... Apr 16 23:32:41.329342 zram_generator::config[3576]: No configuration found. 
Apr 16 23:32:41.642250 kubelet[2957]: I0416 23:32:41.642056 2957 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-112" podStartSLOduration=2.6420165620000002 podStartE2EDuration="2.642016562s" podCreationTimestamp="2026-04-16 23:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:32:41.612875558 +0000 UTC m=+11.179380441" watchObservedRunningTime="2026-04-16 23:32:41.642016562 +0000 UTC m=+11.208521433" Apr 16 23:32:41.679162 kubelet[2957]: I0416 23:32:41.675877 2957 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-112" podStartSLOduration=3.67585871 podStartE2EDuration="3.67585871s" podCreationTimestamp="2026-04-16 23:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:32:41.644848886 +0000 UTC m=+11.211353829" watchObservedRunningTime="2026-04-16 23:32:41.67585871 +0000 UTC m=+11.242363593" Apr 16 23:32:41.940290 systemd[1]: Reloading finished in 882 ms. Apr 16 23:32:42.006628 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:32:42.025215 systemd[1]: kubelet.service: Deactivated successfully. Apr 16 23:32:42.025834 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 23:32:42.025935 systemd[1]: kubelet.service: Consumed 1.967s CPU time, 121.4M memory peak. Apr 16 23:32:42.032027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 23:32:42.429529 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 23:32:42.447040 (kubelet)[3624]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 16 23:32:42.557322 kubelet[3624]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 16 23:32:42.572423 kubelet[3624]: I0416 23:32:42.571413 3624 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 16 23:32:42.572423 kubelet[3624]: I0416 23:32:42.571491 3624 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 16 23:32:42.572423 kubelet[3624]: I0416 23:32:42.571532 3624 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 16 23:32:42.572423 kubelet[3624]: I0416 23:32:42.571544 3624 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 16 23:32:42.572423 kubelet[3624]: I0416 23:32:42.572029 3624 server.go:951] "Client rotation is on, will bootstrap in background" Apr 16 23:32:42.574690 kubelet[3624]: I0416 23:32:42.574632 3624 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 23:32:42.579527 kubelet[3624]: I0416 23:32:42.579455 3624 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 23:32:42.598458 kubelet[3624]: I0416 23:32:42.598353 3624 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 23:32:42.605174 kubelet[3624]: I0416 23:32:42.604971 3624 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 16 23:32:42.607537 kubelet[3624]: I0416 23:32:42.606611 3624 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 23:32:42.607841 kubelet[3624]: I0416 23:32:42.607533 3624 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-112","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 23:32:42.608020 kubelet[3624]: I0416 23:32:42.607859 3624 topology_manager.go:143] "Creating topology manager with none policy" Apr 16 
23:32:42.608020 kubelet[3624]: I0416 23:32:42.607882 3624 container_manager_linux.go:308] "Creating device plugin manager" Apr 16 23:32:42.608020 kubelet[3624]: I0416 23:32:42.607932 3624 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 23:32:42.609745 kubelet[3624]: I0416 23:32:42.609684 3624 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 16 23:32:42.610030 kubelet[3624]: I0416 23:32:42.609999 3624 kubelet.go:482] "Attempting to sync node with API server" Apr 16 23:32:42.610153 kubelet[3624]: I0416 23:32:42.610049 3624 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 23:32:42.610153 kubelet[3624]: I0416 23:32:42.610090 3624 kubelet.go:394] "Adding apiserver pod source" Apr 16 23:32:42.610153 kubelet[3624]: I0416 23:32:42.610112 3624 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 23:32:42.617200 kubelet[3624]: I0416 23:32:42.617106 3624 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 23:32:42.624038 kubelet[3624]: I0416 23:32:42.623982 3624 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 23:32:42.624236 kubelet[3624]: I0416 23:32:42.624059 3624 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 23:32:42.641959 kubelet[3624]: I0416 23:32:42.641675 3624 server.go:1257] "Started kubelet" Apr 16 23:32:42.650687 kubelet[3624]: I0416 23:32:42.650430 3624 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 16 23:32:42.654499 kubelet[3624]: I0416 23:32:42.654409 3624 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 16 23:32:42.674057 kubelet[3624]: I0416 23:32:42.674016 3624 server.go:317] "Adding debug handlers 
to kubelet server" Apr 16 23:32:42.679184 kubelet[3624]: I0416 23:32:42.661836 3624 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 23:32:42.679184 kubelet[3624]: I0416 23:32:42.654691 3624 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 23:32:42.679184 kubelet[3624]: I0416 23:32:42.678958 3624 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 23:32:42.679184 kubelet[3624]: E0416 23:32:42.674580 3624 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-18-112\" not found" Apr 16 23:32:42.679501 kubelet[3624]: I0416 23:32:42.674341 3624 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 16 23:32:42.682608 kubelet[3624]: I0416 23:32:42.674360 3624 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 23:32:42.685934 kubelet[3624]: I0416 23:32:42.685874 3624 reconciler.go:29] "Reconciler: start to sync state" Apr 16 23:32:42.687767 kubelet[3624]: I0416 23:32:42.686764 3624 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 23:32:42.693061 kubelet[3624]: I0416 23:32:42.693014 3624 factory.go:223] Registration of the systemd container factory successfully Apr 16 23:32:42.693563 kubelet[3624]: I0416 23:32:42.693514 3624 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 23:32:42.729703 kubelet[3624]: I0416 23:32:42.729643 3624 factory.go:223] Registration of the containerd container factory successfully Apr 16 23:32:42.769362 kubelet[3624]: I0416 23:32:42.769298 3624 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 16 23:32:42.779322 kubelet[3624]: I0416 23:32:42.779255 3624 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 16 23:32:42.779610 kubelet[3624]: I0416 23:32:42.779587 3624 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 16 23:32:42.779810 kubelet[3624]: I0416 23:32:42.779698 3624 kubelet.go:2501] "Starting kubelet main sync loop" Apr 16 23:32:42.780139 kubelet[3624]: E0416 23:32:42.779910 3624 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 23:32:42.877995 kubelet[3624]: I0416 23:32:42.877946 3624 cpu_manager.go:225] "Starting" policy="none" Apr 16 23:32:42.878895 kubelet[3624]: I0416 23:32:42.878172 3624 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 16 23:32:42.878895 kubelet[3624]: I0416 23:32:42.878231 3624 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 16 23:32:42.878895 kubelet[3624]: I0416 23:32:42.878464 3624 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 16 23:32:42.878895 kubelet[3624]: I0416 23:32:42.878485 3624 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 16 23:32:42.878895 kubelet[3624]: I0416 23:32:42.878519 3624 policy_none.go:50] "Start" Apr 16 23:32:42.878895 kubelet[3624]: I0416 23:32:42.878537 3624 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 23:32:42.878895 kubelet[3624]: I0416 23:32:42.878556 3624 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 23:32:42.878895 kubelet[3624]: I0416 23:32:42.878752 3624 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 16 23:32:42.878895 kubelet[3624]: I0416 23:32:42.878777 3624 
policy_none.go:44] "Start" Apr 16 23:32:42.880301 kubelet[3624]: E0416 23:32:42.880260 3624 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 23:32:42.894609 kubelet[3624]: E0416 23:32:42.894272 3624 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 23:32:42.895416 kubelet[3624]: I0416 23:32:42.894876 3624 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 16 23:32:42.895416 kubelet[3624]: I0416 23:32:42.894933 3624 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 23:32:42.896260 kubelet[3624]: I0416 23:32:42.896231 3624 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 16 23:32:42.903161 kubelet[3624]: E0416 23:32:42.902053 3624 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 16 23:32:43.025524 kubelet[3624]: I0416 23:32:43.025384 3624 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-112" Apr 16 23:32:43.043935 kubelet[3624]: I0416 23:32:43.043862 3624 kubelet_node_status.go:123] "Node was previously registered" node="ip-172-31-18-112" Apr 16 23:32:43.044061 kubelet[3624]: I0416 23:32:43.043991 3624 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-18-112" Apr 16 23:32:43.083938 kubelet[3624]: I0416 23:32:43.083879 3624 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-112" Apr 16 23:32:43.086192 kubelet[3624]: I0416 23:32:43.083886 3624 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-112" Apr 16 23:32:43.086676 kubelet[3624]: I0416 23:32:43.086493 3624 kubelet.go:3340] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ip-172-31-18-112" Apr 16 23:32:43.089814 kubelet[3624]: I0416 23:32:43.089674 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e5f8ad4adda9db793f30d6589e5c170-ca-certs\") pod \"kube-apiserver-ip-172-31-18-112\" (UID: \"6e5f8ad4adda9db793f30d6589e5c170\") " pod="kube-system/kube-apiserver-ip-172-31-18-112" Apr 16 23:32:43.091108 kubelet[3624]: I0416 23:32:43.089829 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e5f8ad4adda9db793f30d6589e5c170-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-112\" (UID: \"6e5f8ad4adda9db793f30d6589e5c170\") " pod="kube-system/kube-apiserver-ip-172-31-18-112" Apr 16 23:32:43.091108 kubelet[3624]: I0416 23:32:43.089917 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e5f8ad4adda9db793f30d6589e5c170-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-112\" (UID: \"6e5f8ad4adda9db793f30d6589e5c170\") " pod="kube-system/kube-apiserver-ip-172-31-18-112" Apr 16 23:32:43.091108 kubelet[3624]: I0416 23:32:43.089972 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f74d72ce145bdb7239cc353f83e465d-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-112\" (UID: \"4f74d72ce145bdb7239cc353f83e465d\") " pod="kube-system/kube-controller-manager-ip-172-31-18-112" Apr 16 23:32:43.091108 kubelet[3624]: I0416 23:32:43.090021 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f74d72ce145bdb7239cc353f83e465d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-112\" (UID: 
\"4f74d72ce145bdb7239cc353f83e465d\") " pod="kube-system/kube-controller-manager-ip-172-31-18-112" Apr 16 23:32:43.091108 kubelet[3624]: I0416 23:32:43.090063 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f74d72ce145bdb7239cc353f83e465d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-112\" (UID: \"4f74d72ce145bdb7239cc353f83e465d\") " pod="kube-system/kube-controller-manager-ip-172-31-18-112" Apr 16 23:32:43.091468 kubelet[3624]: I0416 23:32:43.090110 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f74d72ce145bdb7239cc353f83e465d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-112\" (UID: \"4f74d72ce145bdb7239cc353f83e465d\") " pod="kube-system/kube-controller-manager-ip-172-31-18-112" Apr 16 23:32:43.091468 kubelet[3624]: I0416 23:32:43.090173 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f74d72ce145bdb7239cc353f83e465d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-112\" (UID: \"4f74d72ce145bdb7239cc353f83e465d\") " pod="kube-system/kube-controller-manager-ip-172-31-18-112" Apr 16 23:32:43.091468 kubelet[3624]: I0416 23:32:43.090230 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9b077e1404348a1cf4ee57de1b24d81-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-112\" (UID: \"e9b077e1404348a1cf4ee57de1b24d81\") " pod="kube-system/kube-scheduler-ip-172-31-18-112" Apr 16 23:32:43.098884 kubelet[3624]: E0416 23:32:43.098738 3624 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-112\" already exists" 
pod="kube-system/kube-controller-manager-ip-172-31-18-112" Apr 16 23:32:43.100593 kubelet[3624]: E0416 23:32:43.100522 3624 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-112\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-112" Apr 16 23:32:43.101182 kubelet[3624]: E0416 23:32:43.101083 3624 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-112\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-112" Apr 16 23:32:43.611551 kubelet[3624]: I0416 23:32:43.611498 3624 apiserver.go:52] "Watching apiserver" Apr 16 23:32:43.683434 kubelet[3624]: I0416 23:32:43.683367 3624 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 23:32:43.833154 kubelet[3624]: I0416 23:32:43.832258 3624 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-112" Apr 16 23:32:43.851531 kubelet[3624]: E0416 23:32:43.851477 3624 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-112\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-112" Apr 16 23:32:45.723887 kubelet[3624]: I0416 23:32:45.723818 3624 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 23:32:45.726780 containerd[2004]: time="2026-04-16T23:32:45.726688266Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 16 23:32:45.729663 kubelet[3624]: I0416 23:32:45.729226 3624 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 23:32:46.472822 kubelet[3624]: E0416 23:32:46.471264 3624 status_manager.go:1045] "Failed to get status for pod" err="pods \"kube-proxy-hs84q\" is forbidden: User \"system:node:ip-172-31-18-112\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-112' and this object" podUID="c12aaebd-3e12-4354-8dc9-a9ea48a58d6c" pod="kube-system/kube-proxy-hs84q" Apr 16 23:32:46.483435 systemd[1]: Created slice kubepods-besteffort-podc12aaebd_3e12_4354_8dc9_a9ea48a58d6c.slice - libcontainer container kubepods-besteffort-podc12aaebd_3e12_4354_8dc9_a9ea48a58d6c.slice. Apr 16 23:32:46.515501 kubelet[3624]: I0416 23:32:46.515449 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c12aaebd-3e12-4354-8dc9-a9ea48a58d6c-xtables-lock\") pod \"kube-proxy-hs84q\" (UID: \"c12aaebd-3e12-4354-8dc9-a9ea48a58d6c\") " pod="kube-system/kube-proxy-hs84q" Apr 16 23:32:46.515896 kubelet[3624]: I0416 23:32:46.515862 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c12aaebd-3e12-4354-8dc9-a9ea48a58d6c-lib-modules\") pod \"kube-proxy-hs84q\" (UID: \"c12aaebd-3e12-4354-8dc9-a9ea48a58d6c\") " pod="kube-system/kube-proxy-hs84q" Apr 16 23:32:46.516094 kubelet[3624]: I0416 23:32:46.516054 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phxsx\" (UniqueName: \"kubernetes.io/projected/c12aaebd-3e12-4354-8dc9-a9ea48a58d6c-kube-api-access-phxsx\") pod \"kube-proxy-hs84q\" (UID: \"c12aaebd-3e12-4354-8dc9-a9ea48a58d6c\") " pod="kube-system/kube-proxy-hs84q" Apr 16 23:32:46.516416 kubelet[3624]: I0416 
23:32:46.516347 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c12aaebd-3e12-4354-8dc9-a9ea48a58d6c-kube-proxy\") pod \"kube-proxy-hs84q\" (UID: \"c12aaebd-3e12-4354-8dc9-a9ea48a58d6c\") " pod="kube-system/kube-proxy-hs84q" Apr 16 23:32:46.802863 containerd[2004]: time="2026-04-16T23:32:46.802691912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hs84q,Uid:c12aaebd-3e12-4354-8dc9-a9ea48a58d6c,Namespace:kube-system,Attempt:0,}" Apr 16 23:32:46.855547 containerd[2004]: time="2026-04-16T23:32:46.855484784Z" level=info msg="connecting to shim e3d4a0b06ab91addc9c8af67cfce9c6d430ff1b4775a77823a4d0d3096a493aa" address="unix:///run/containerd/s/1c4582cfbd2fd9919a05384626cb9f1333dfe48ae43b158ebb4d6a73019af976" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:32:46.948822 systemd[1]: Started cri-containerd-e3d4a0b06ab91addc9c8af67cfce9c6d430ff1b4775a77823a4d0d3096a493aa.scope - libcontainer container e3d4a0b06ab91addc9c8af67cfce9c6d430ff1b4775a77823a4d0d3096a493aa. 
Apr 16 23:32:47.061375 containerd[2004]: time="2026-04-16T23:32:47.060087521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hs84q,Uid:c12aaebd-3e12-4354-8dc9-a9ea48a58d6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3d4a0b06ab91addc9c8af67cfce9c6d430ff1b4775a77823a4d0d3096a493aa\"" Apr 16 23:32:47.075585 containerd[2004]: time="2026-04-16T23:32:47.075491765Z" level=info msg="CreateContainer within sandbox \"e3d4a0b06ab91addc9c8af67cfce9c6d430ff1b4775a77823a4d0d3096a493aa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 16 23:32:47.099635 containerd[2004]: time="2026-04-16T23:32:47.098465837Z" level=info msg="Container 82dcfc502105bab38a6274ab7e4f35bac48fd9c5e5238eea75a47731394807a3: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:32:47.112238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021256791.mount: Deactivated successfully. Apr 16 23:32:47.132281 containerd[2004]: time="2026-04-16T23:32:47.132089897Z" level=info msg="CreateContainer within sandbox \"e3d4a0b06ab91addc9c8af67cfce9c6d430ff1b4775a77823a4d0d3096a493aa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"82dcfc502105bab38a6274ab7e4f35bac48fd9c5e5238eea75a47731394807a3\"" Apr 16 23:32:47.134462 containerd[2004]: time="2026-04-16T23:32:47.134390153Z" level=info msg="StartContainer for \"82dcfc502105bab38a6274ab7e4f35bac48fd9c5e5238eea75a47731394807a3\"" Apr 16 23:32:47.146600 containerd[2004]: time="2026-04-16T23:32:47.146458949Z" level=info msg="connecting to shim 82dcfc502105bab38a6274ab7e4f35bac48fd9c5e5238eea75a47731394807a3" address="unix:///run/containerd/s/1c4582cfbd2fd9919a05384626cb9f1333dfe48ae43b158ebb4d6a73019af976" protocol=ttrpc version=3 Apr 16 23:32:47.151975 systemd[1]: Created slice kubepods-besteffort-pod446787a3_e030_4ec6_9916_793540a76cf3.slice - libcontainer container kubepods-besteffort-pod446787a3_e030_4ec6_9916_793540a76cf3.slice. 
Apr 16 23:32:47.222348 kubelet[3624]: I0416 23:32:47.222304 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/446787a3-e030-4ec6-9916-793540a76cf3-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-pl4hx\" (UID: \"446787a3-e030-4ec6-9916-793540a76cf3\") " pod="tigera-operator/tigera-operator-6cf4cccc57-pl4hx" Apr 16 23:32:47.223931 kubelet[3624]: I0416 23:32:47.223806 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdfz7\" (UniqueName: \"kubernetes.io/projected/446787a3-e030-4ec6-9916-793540a76cf3-kube-api-access-tdfz7\") pod \"tigera-operator-6cf4cccc57-pl4hx\" (UID: \"446787a3-e030-4ec6-9916-793540a76cf3\") " pod="tigera-operator/tigera-operator-6cf4cccc57-pl4hx" Apr 16 23:32:47.225060 systemd[1]: Started cri-containerd-82dcfc502105bab38a6274ab7e4f35bac48fd9c5e5238eea75a47731394807a3.scope - libcontainer container 82dcfc502105bab38a6274ab7e4f35bac48fd9c5e5238eea75a47731394807a3. 
Apr 16 23:32:47.384569 containerd[2004]: time="2026-04-16T23:32:47.384237210Z" level=info msg="StartContainer for \"82dcfc502105bab38a6274ab7e4f35bac48fd9c5e5238eea75a47731394807a3\" returns successfully" Apr 16 23:32:47.467584 containerd[2004]: time="2026-04-16T23:32:47.467119051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-pl4hx,Uid:446787a3-e030-4ec6-9916-793540a76cf3,Namespace:tigera-operator,Attempt:0,}" Apr 16 23:32:47.497021 containerd[2004]: time="2026-04-16T23:32:47.496962319Z" level=info msg="connecting to shim 333769e7bc49f59733d45d1221c672280d748c67611e0ee6e83b8b8a6066a858" address="unix:///run/containerd/s/a6a7a5f95526e530183da584fa947f38583513f2fe42185e2f947ab6e756a7d6" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:32:47.544884 systemd[1]: Started cri-containerd-333769e7bc49f59733d45d1221c672280d748c67611e0ee6e83b8b8a6066a858.scope - libcontainer container 333769e7bc49f59733d45d1221c672280d748c67611e0ee6e83b8b8a6066a858. Apr 16 23:32:47.645585 containerd[2004]: time="2026-04-16T23:32:47.645495044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-pl4hx,Uid:446787a3-e030-4ec6-9916-793540a76cf3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"333769e7bc49f59733d45d1221c672280d748c67611e0ee6e83b8b8a6066a858\"" Apr 16 23:32:47.657950 containerd[2004]: time="2026-04-16T23:32:47.657591248Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 16 23:32:47.867254 kubelet[3624]: I0416 23:32:47.867079 3624 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-hs84q" podStartSLOduration=1.867059625 podStartE2EDuration="1.867059625s" podCreationTimestamp="2026-04-16 23:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:32:47.865074729 +0000 UTC m=+5.407150444" watchObservedRunningTime="2026-04-16 
23:32:47.867059625 +0000 UTC m=+5.409135352" Apr 16 23:32:48.864418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1703857310.mount: Deactivated successfully. Apr 16 23:32:49.795284 containerd[2004]: time="2026-04-16T23:32:49.795217318Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:32:49.796804 containerd[2004]: time="2026-04-16T23:32:49.796584598Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Apr 16 23:32:49.797791 containerd[2004]: time="2026-04-16T23:32:49.797709514Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:32:49.802459 containerd[2004]: time="2026-04-16T23:32:49.802331530Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:32:49.805059 containerd[2004]: time="2026-04-16T23:32:49.803961935Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.145692399s" Apr 16 23:32:49.805059 containerd[2004]: time="2026-04-16T23:32:49.804034871Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Apr 16 23:32:49.812814 containerd[2004]: time="2026-04-16T23:32:49.812752271Z" level=info msg="CreateContainer within sandbox \"333769e7bc49f59733d45d1221c672280d748c67611e0ee6e83b8b8a6066a858\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 16 23:32:49.831231 containerd[2004]: time="2026-04-16T23:32:49.830286215Z" level=info msg="Container a6c131925eeea8eaa3b42ba9a9b7dae6a3e178ef3bd5c374c6dd89320aba994c: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:32:49.836291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1439695504.mount: Deactivated successfully. Apr 16 23:32:49.841796 containerd[2004]: time="2026-04-16T23:32:49.841736387Z" level=info msg="CreateContainer within sandbox \"333769e7bc49f59733d45d1221c672280d748c67611e0ee6e83b8b8a6066a858\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a6c131925eeea8eaa3b42ba9a9b7dae6a3e178ef3bd5c374c6dd89320aba994c\"" Apr 16 23:32:49.843449 containerd[2004]: time="2026-04-16T23:32:49.843402563Z" level=info msg="StartContainer for \"a6c131925eeea8eaa3b42ba9a9b7dae6a3e178ef3bd5c374c6dd89320aba994c\"" Apr 16 23:32:49.845625 containerd[2004]: time="2026-04-16T23:32:49.845573687Z" level=info msg="connecting to shim a6c131925eeea8eaa3b42ba9a9b7dae6a3e178ef3bd5c374c6dd89320aba994c" address="unix:///run/containerd/s/a6a7a5f95526e530183da584fa947f38583513f2fe42185e2f947ab6e756a7d6" protocol=ttrpc version=3 Apr 16 23:32:49.891505 systemd[1]: Started cri-containerd-a6c131925eeea8eaa3b42ba9a9b7dae6a3e178ef3bd5c374c6dd89320aba994c.scope - libcontainer container a6c131925eeea8eaa3b42ba9a9b7dae6a3e178ef3bd5c374c6dd89320aba994c. 
Apr 16 23:32:49.966514 containerd[2004]: time="2026-04-16T23:32:49.966435287Z" level=info msg="StartContainer for \"a6c131925eeea8eaa3b42ba9a9b7dae6a3e178ef3bd5c374c6dd89320aba994c\" returns successfully" Apr 16 23:32:52.640891 kubelet[3624]: I0416 23:32:52.640759 3624 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-pl4hx" podStartSLOduration=4.485237866 podStartE2EDuration="6.640737361s" podCreationTimestamp="2026-04-16 23:32:46 +0000 UTC" firstStartedPulling="2026-04-16 23:32:47.649904444 +0000 UTC m=+5.191980147" lastFinishedPulling="2026-04-16 23:32:49.805403939 +0000 UTC m=+7.347479642" observedRunningTime="2026-04-16 23:32:50.888278436 +0000 UTC m=+8.430354223" watchObservedRunningTime="2026-04-16 23:32:52.640737361 +0000 UTC m=+10.182813088" Apr 16 23:32:58.740632 sudo[2370]: pam_unix(sudo:session): session closed for user root Apr 16 23:32:58.909167 sshd[2369]: Connection closed by 20.229.252.112 port 36662 Apr 16 23:32:58.910003 sshd-session[2366]: pam_unix(sshd:session): session closed for user core Apr 16 23:32:58.925273 systemd[1]: sshd@6-172.31.18.112:22-20.229.252.112:36662.service: Deactivated successfully. Apr 16 23:32:58.931888 systemd[1]: session-7.scope: Deactivated successfully. Apr 16 23:32:58.933521 systemd[1]: session-7.scope: Consumed 8.136s CPU time, 224.5M memory peak. Apr 16 23:32:58.937516 systemd-logind[1986]: Session 7 logged out. Waiting for processes to exit. Apr 16 23:32:58.942976 systemd-logind[1986]: Removed session 7. Apr 16 23:33:10.507118 systemd[1]: Created slice kubepods-besteffort-pod31620869_0d72_419c_862f_3b33f49083b5.slice - libcontainer container kubepods-besteffort-pod31620869_0d72_419c_862f_3b33f49083b5.slice. 
Apr 16 23:33:10.589590 kubelet[3624]: I0416 23:33:10.589517 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31620869-0d72-419c-862f-3b33f49083b5-tigera-ca-bundle\") pod \"calico-typha-64dc7f77c7-vbdbp\" (UID: \"31620869-0d72-419c-862f-3b33f49083b5\") " pod="calico-system/calico-typha-64dc7f77c7-vbdbp" Apr 16 23:33:10.589590 kubelet[3624]: I0416 23:33:10.589600 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/31620869-0d72-419c-862f-3b33f49083b5-typha-certs\") pod \"calico-typha-64dc7f77c7-vbdbp\" (UID: \"31620869-0d72-419c-862f-3b33f49083b5\") " pod="calico-system/calico-typha-64dc7f77c7-vbdbp" Apr 16 23:33:10.590614 kubelet[3624]: I0416 23:33:10.589650 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpjng\" (UniqueName: \"kubernetes.io/projected/31620869-0d72-419c-862f-3b33f49083b5-kube-api-access-kpjng\") pod \"calico-typha-64dc7f77c7-vbdbp\" (UID: \"31620869-0d72-419c-862f-3b33f49083b5\") " pod="calico-system/calico-typha-64dc7f77c7-vbdbp" Apr 16 23:33:10.711548 systemd[1]: Created slice kubepods-besteffort-podfca5d126_ab98_4b70_9275_3e97a11c798b.slice - libcontainer container kubepods-besteffort-podfca5d126_ab98_4b70_9275_3e97a11c798b.slice. 
Apr 16 23:33:10.791648 kubelet[3624]: I0416 23:33:10.791491 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fca5d126-ab98-4b70-9275-3e97a11c798b-tigera-ca-bundle\") pod \"calico-node-vchfz\" (UID: \"fca5d126-ab98-4b70-9275-3e97a11c798b\") " pod="calico-system/calico-node-vchfz" Apr 16 23:33:10.791648 kubelet[3624]: I0416 23:33:10.791566 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fca5d126-ab98-4b70-9275-3e97a11c798b-cni-bin-dir\") pod \"calico-node-vchfz\" (UID: \"fca5d126-ab98-4b70-9275-3e97a11c798b\") " pod="calico-system/calico-node-vchfz" Apr 16 23:33:10.791858 kubelet[3624]: I0416 23:33:10.791679 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/fca5d126-ab98-4b70-9275-3e97a11c798b-nodeproc\") pod \"calico-node-vchfz\" (UID: \"fca5d126-ab98-4b70-9275-3e97a11c798b\") " pod="calico-system/calico-node-vchfz" Apr 16 23:33:10.791858 kubelet[3624]: I0416 23:33:10.791721 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fca5d126-ab98-4b70-9275-3e97a11c798b-policysync\") pod \"calico-node-vchfz\" (UID: \"fca5d126-ab98-4b70-9275-3e97a11c798b\") " pod="calico-system/calico-node-vchfz" Apr 16 23:33:10.791858 kubelet[3624]: I0416 23:33:10.791848 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fca5d126-ab98-4b70-9275-3e97a11c798b-cni-net-dir\") pod \"calico-node-vchfz\" (UID: \"fca5d126-ab98-4b70-9275-3e97a11c798b\") " pod="calico-system/calico-node-vchfz" Apr 16 23:33:10.792014 kubelet[3624]: I0416 23:33:10.791892 3624 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fca5d126-ab98-4b70-9275-3e97a11c798b-node-certs\") pod \"calico-node-vchfz\" (UID: \"fca5d126-ab98-4b70-9275-3e97a11c798b\") " pod="calico-system/calico-node-vchfz" Apr 16 23:33:10.792014 kubelet[3624]: I0416 23:33:10.791929 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fca5d126-ab98-4b70-9275-3e97a11c798b-lib-modules\") pod \"calico-node-vchfz\" (UID: \"fca5d126-ab98-4b70-9275-3e97a11c798b\") " pod="calico-system/calico-node-vchfz" Apr 16 23:33:10.792014 kubelet[3624]: I0416 23:33:10.791971 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fca5d126-ab98-4b70-9275-3e97a11c798b-cni-log-dir\") pod \"calico-node-vchfz\" (UID: \"fca5d126-ab98-4b70-9275-3e97a11c798b\") " pod="calico-system/calico-node-vchfz" Apr 16 23:33:10.792014 kubelet[3624]: I0416 23:33:10.792007 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fca5d126-ab98-4b70-9275-3e97a11c798b-var-run-calico\") pod \"calico-node-vchfz\" (UID: \"fca5d126-ab98-4b70-9275-3e97a11c798b\") " pod="calico-system/calico-node-vchfz" Apr 16 23:33:10.793500 kubelet[3624]: I0416 23:33:10.792050 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/fca5d126-ab98-4b70-9275-3e97a11c798b-sys-fs\") pod \"calico-node-vchfz\" (UID: \"fca5d126-ab98-4b70-9275-3e97a11c798b\") " pod="calico-system/calico-node-vchfz" Apr 16 23:33:10.793500 kubelet[3624]: I0416 23:33:10.792090 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7cjk\" 
(UniqueName: \"kubernetes.io/projected/fca5d126-ab98-4b70-9275-3e97a11c798b-kube-api-access-m7cjk\") pod \"calico-node-vchfz\" (UID: \"fca5d126-ab98-4b70-9275-3e97a11c798b\") " pod="calico-system/calico-node-vchfz" Apr 16 23:33:10.796182 kubelet[3624]: I0416 23:33:10.794254 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fca5d126-ab98-4b70-9275-3e97a11c798b-var-lib-calico\") pod \"calico-node-vchfz\" (UID: \"fca5d126-ab98-4b70-9275-3e97a11c798b\") " pod="calico-system/calico-node-vchfz" Apr 16 23:33:10.796182 kubelet[3624]: I0416 23:33:10.794920 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fca5d126-ab98-4b70-9275-3e97a11c798b-xtables-lock\") pod \"calico-node-vchfz\" (UID: \"fca5d126-ab98-4b70-9275-3e97a11c798b\") " pod="calico-system/calico-node-vchfz" Apr 16 23:33:10.796182 kubelet[3624]: I0416 23:33:10.795000 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fca5d126-ab98-4b70-9275-3e97a11c798b-flexvol-driver-host\") pod \"calico-node-vchfz\" (UID: \"fca5d126-ab98-4b70-9275-3e97a11c798b\") " pod="calico-system/calico-node-vchfz" Apr 16 23:33:10.796182 kubelet[3624]: I0416 23:33:10.795049 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/fca5d126-ab98-4b70-9275-3e97a11c798b-bpffs\") pod \"calico-node-vchfz\" (UID: \"fca5d126-ab98-4b70-9275-3e97a11c798b\") " pod="calico-system/calico-node-vchfz" Apr 16 23:33:10.846005 containerd[2004]: time="2026-04-16T23:33:10.845756359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64dc7f77c7-vbdbp,Uid:31620869-0d72-419c-862f-3b33f49083b5,Namespace:calico-system,Attempt:0,}" 
Apr 16 23:33:10.897176 kubelet[3624]: E0416 23:33:10.892642 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9nj" podUID="a71fac77-981f-4b75-9304-8c3a33a51180" Apr 16 23:33:10.922406 kubelet[3624]: E0416 23:33:10.922359 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:10.922688 kubelet[3624]: W0416 23:33:10.922645 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:10.923370 kubelet[3624]: E0416 23:33:10.923322 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:10.940030 kubelet[3624]: E0416 23:33:10.939511 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:10.940030 kubelet[3624]: W0416 23:33:10.939852 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:10.940030 kubelet[3624]: E0416 23:33:10.939931 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 16 23:33:10.967420 containerd[2004]: time="2026-04-16T23:33:10.965624096Z" level=info msg="connecting to shim a631344c3516f66b96d932f26c562b33c156e5e4cac31caab7d71bac9969f502" address="unix:///run/containerd/s/8a234532fbe96673c4ab90727f9f8e319bcd735609446fb179e37f25d16d644c" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:33:10.976342 kubelet[3624]: E0416 23:33:10.976286 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:10.976342 kubelet[3624]: W0416 23:33:10.976329 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:10.976736 kubelet[3624]: E0416 23:33:10.976411 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:10.978223 kubelet[3624]: E0416 23:33:10.976855 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:10.978223 kubelet[3624]: W0416 23:33:10.976891 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:10.978223 kubelet[3624]: E0416 23:33:10.976928 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:10.979367 kubelet[3624]: E0416 23:33:10.979315 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:10.979367 kubelet[3624]: W0416 23:33:10.979357 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:10.979367 kubelet[3624]: E0416 23:33:10.979395 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:10.979969 kubelet[3624]: E0416 23:33:10.979841 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:10.979969 kubelet[3624]: W0416 23:33:10.979869 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:10.979969 kubelet[3624]: E0416 23:33:10.979901 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:10.983525 kubelet[3624]: E0416 23:33:10.983470 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:10.983525 kubelet[3624]: W0416 23:33:10.983514 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:10.983786 kubelet[3624]: E0416 23:33:10.983554 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:10.984819 kubelet[3624]: E0416 23:33:10.984765 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:10.984819 kubelet[3624]: W0416 23:33:10.984808 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:10.985059 kubelet[3624]: E0416 23:33:10.984847 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:10.988442 kubelet[3624]: E0416 23:33:10.988375 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:10.988442 kubelet[3624]: W0416 23:33:10.988422 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:10.988680 kubelet[3624]: E0416 23:33:10.988458 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:10.989432 kubelet[3624]: E0416 23:33:10.989240 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:10.989432 kubelet[3624]: W0416 23:33:10.989282 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:10.989432 kubelet[3624]: E0416 23:33:10.989318 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:10.990013 kubelet[3624]: E0416 23:33:10.989959 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:10.990013 kubelet[3624]: W0416 23:33:10.990001 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:10.990222 kubelet[3624]: E0416 23:33:10.990037 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:10.994520 kubelet[3624]: E0416 23:33:10.992601 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:10.994520 kubelet[3624]: W0416 23:33:10.992707 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:10.994520 kubelet[3624]: E0416 23:33:10.992747 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:10.994520 kubelet[3624]: E0416 23:33:10.994415 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:10.994520 kubelet[3624]: W0416 23:33:10.994447 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:10.994520 kubelet[3624]: E0416 23:33:10.994481 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:10.995569 kubelet[3624]: E0416 23:33:10.995511 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:10.995569 kubelet[3624]: W0416 23:33:10.995556 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:10.995738 kubelet[3624]: E0416 23:33:10.995594 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:10.999181 kubelet[3624]: E0416 23:33:10.997469 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:10.999181 kubelet[3624]: W0416 23:33:10.997514 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:10.999181 kubelet[3624]: E0416 23:33:10.997557 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.000385 kubelet[3624]: E0416 23:33:11.000303 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.000385 kubelet[3624]: W0416 23:33:11.000351 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.000385 kubelet[3624]: E0416 23:33:11.000390 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.004222 kubelet[3624]: E0416 23:33:11.000788 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.004222 kubelet[3624]: W0416 23:33:11.000829 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.004222 kubelet[3624]: E0416 23:33:11.000864 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.004222 kubelet[3624]: E0416 23:33:11.002308 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.004222 kubelet[3624]: W0416 23:33:11.002340 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.004222 kubelet[3624]: E0416 23:33:11.002375 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.004680 kubelet[3624]: E0416 23:33:11.004351 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.004680 kubelet[3624]: W0416 23:33:11.004382 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.004680 kubelet[3624]: E0416 23:33:11.004418 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.004962 kubelet[3624]: E0416 23:33:11.004807 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.004962 kubelet[3624]: W0416 23:33:11.004831 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.004962 kubelet[3624]: E0416 23:33:11.004862 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.005324 kubelet[3624]: E0416 23:33:11.005260 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.005324 kubelet[3624]: W0416 23:33:11.005302 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.005539 kubelet[3624]: E0416 23:33:11.005336 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.008233 kubelet[3624]: E0416 23:33:11.006578 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.009014 kubelet[3624]: W0416 23:33:11.008820 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.009014 kubelet[3624]: E0416 23:33:11.008888 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.015328 kubelet[3624]: E0416 23:33:11.015249 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.018798 kubelet[3624]: W0416 23:33:11.015301 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.018798 kubelet[3624]: E0416 23:33:11.015387 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.018798 kubelet[3624]: I0416 23:33:11.017898 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a71fac77-981f-4b75-9304-8c3a33a51180-registration-dir\") pod \"csi-node-driver-xx9nj\" (UID: \"a71fac77-981f-4b75-9304-8c3a33a51180\") " pod="calico-system/csi-node-driver-xx9nj"
Apr 16 23:33:11.024555 kubelet[3624]: E0416 23:33:11.024357 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.024555 kubelet[3624]: W0416 23:33:11.024402 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.024555 kubelet[3624]: E0416 23:33:11.024441 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.024555 kubelet[3624]: I0416 23:33:11.024493 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nzhn\" (UniqueName: \"kubernetes.io/projected/a71fac77-981f-4b75-9304-8c3a33a51180-kube-api-access-2nzhn\") pod \"csi-node-driver-xx9nj\" (UID: \"a71fac77-981f-4b75-9304-8c3a33a51180\") " pod="calico-system/csi-node-driver-xx9nj"
Apr 16 23:33:11.030928 kubelet[3624]: E0416 23:33:11.030859 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.030928 kubelet[3624]: W0416 23:33:11.030906 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.031114 kubelet[3624]: E0416 23:33:11.030947 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.031380 kubelet[3624]: I0416 23:33:11.031301 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a71fac77-981f-4b75-9304-8c3a33a51180-varrun\") pod \"csi-node-driver-xx9nj\" (UID: \"a71fac77-981f-4b75-9304-8c3a33a51180\") " pod="calico-system/csi-node-driver-xx9nj"
Apr 16 23:33:11.033585 kubelet[3624]: E0416 23:33:11.033520 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.033585 kubelet[3624]: W0416 23:33:11.033570 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.033853 kubelet[3624]: E0416 23:33:11.033611 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.034535 kubelet[3624]: E0416 23:33:11.034470 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.034535 kubelet[3624]: W0416 23:33:11.034518 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.035561 kubelet[3624]: E0416 23:33:11.034557 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.038443 kubelet[3624]: E0416 23:33:11.038385 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.038443 kubelet[3624]: W0416 23:33:11.038439 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.038686 kubelet[3624]: E0416 23:33:11.038480 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.038967 kubelet[3624]: E0416 23:33:11.038905 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.038967 kubelet[3624]: W0416 23:33:11.038945 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.039378 kubelet[3624]: E0416 23:33:11.038980 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.039378 kubelet[3624]: I0416 23:33:11.039180 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a71fac77-981f-4b75-9304-8c3a33a51180-socket-dir\") pod \"csi-node-driver-xx9nj\" (UID: \"a71fac77-981f-4b75-9304-8c3a33a51180\") " pod="calico-system/csi-node-driver-xx9nj"
Apr 16 23:33:11.040566 kubelet[3624]: E0416 23:33:11.040515 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.040566 kubelet[3624]: W0416 23:33:11.040558 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.040746 kubelet[3624]: E0416 23:33:11.040614 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.046061 kubelet[3624]: E0416 23:33:11.043329 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.046061 kubelet[3624]: W0416 23:33:11.043379 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.046061 kubelet[3624]: E0416 23:33:11.043420 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.047177 kubelet[3624]: E0416 23:33:11.046892 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.047177 kubelet[3624]: W0416 23:33:11.046933 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.047177 kubelet[3624]: E0416 23:33:11.046971 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.048693 kubelet[3624]: E0416 23:33:11.048637 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.048693 kubelet[3624]: W0416 23:33:11.048678 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.048858 kubelet[3624]: E0416 23:33:11.048736 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.052387 kubelet[3624]: E0416 23:33:11.052182 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.052387 kubelet[3624]: W0416 23:33:11.052230 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.052387 kubelet[3624]: E0416 23:33:11.052270 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.056580 kubelet[3624]: E0416 23:33:11.056346 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.056580 kubelet[3624]: W0416 23:33:11.056396 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.056580 kubelet[3624]: E0416 23:33:11.056433 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.056580 kubelet[3624]: I0416 23:33:11.056485 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a71fac77-981f-4b75-9304-8c3a33a51180-kubelet-dir\") pod \"csi-node-driver-xx9nj\" (UID: \"a71fac77-981f-4b75-9304-8c3a33a51180\") " pod="calico-system/csi-node-driver-xx9nj"
Apr 16 23:33:11.057575 kubelet[3624]: E0416 23:33:11.057399 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.057575 kubelet[3624]: W0416 23:33:11.057433 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.057575 kubelet[3624]: E0416 23:33:11.057467 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.061419 kubelet[3624]: E0416 23:33:11.061355 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.061419 kubelet[3624]: W0416 23:33:11.061400 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.061717 kubelet[3624]: E0416 23:33:11.061438 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.063414 kubelet[3624]: E0416 23:33:11.063337 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.063414 kubelet[3624]: W0416 23:33:11.063383 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.063620 kubelet[3624]: E0416 23:33:11.063420 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.064781 containerd[2004]: time="2026-04-16T23:33:11.064694212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vchfz,Uid:fca5d126-ab98-4b70-9275-3e97a11c798b,Namespace:calico-system,Attempt:0,}"
Apr 16 23:33:11.127733 systemd[1]: Started cri-containerd-a631344c3516f66b96d932f26c562b33c156e5e4cac31caab7d71bac9969f502.scope - libcontainer container a631344c3516f66b96d932f26c562b33c156e5e4cac31caab7d71bac9969f502.
Apr 16 23:33:11.149167 containerd[2004]: time="2026-04-16T23:33:11.147448301Z" level=info msg="connecting to shim f5e64d00e9de4d31cabce29b8bb0d3a6de235abdcf1ce9e3612772515fb215b5" address="unix:///run/containerd/s/f7175fa5b9a42720930a40917a40d8a35a5bfdbd39f490b15d58c1f97892ae4d" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:33:11.166210 kubelet[3624]: E0416 23:33:11.166152 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.166210 kubelet[3624]: W0416 23:33:11.166196 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.166525 kubelet[3624]: E0416 23:33:11.166235 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.167262 kubelet[3624]: E0416 23:33:11.167205 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.167262 kubelet[3624]: W0416 23:33:11.167248 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.168635 kubelet[3624]: E0416 23:33:11.167286 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.168635 kubelet[3624]: E0416 23:33:11.168163 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.168635 kubelet[3624]: W0416 23:33:11.168193 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.168635 kubelet[3624]: E0416 23:33:11.168227 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.169256 kubelet[3624]: E0416 23:33:11.169198 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.169256 kubelet[3624]: W0416 23:33:11.169240 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.169483 kubelet[3624]: E0416 23:33:11.169277 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.173428 kubelet[3624]: E0416 23:33:11.173368 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.173428 kubelet[3624]: W0416 23:33:11.173413 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.173664 kubelet[3624]: E0416 23:33:11.173450 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.174362 kubelet[3624]: E0416 23:33:11.173935 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.174362 kubelet[3624]: W0416 23:33:11.173979 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.174362 kubelet[3624]: E0416 23:33:11.174016 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.175028 kubelet[3624]: E0416 23:33:11.174788 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.175028 kubelet[3624]: W0416 23:33:11.174833 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.175028 kubelet[3624]: E0416 23:33:11.174871 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.176188 kubelet[3624]: E0416 23:33:11.175991 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.176188 kubelet[3624]: W0416 23:33:11.176037 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.176188 kubelet[3624]: E0416 23:33:11.176072 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.176800 kubelet[3624]: E0416 23:33:11.176751 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.176800 kubelet[3624]: W0416 23:33:11.176790 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.177376 kubelet[3624]: E0416 23:33:11.176825 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.179002 kubelet[3624]: E0416 23:33:11.177858 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.179002 kubelet[3624]: W0416 23:33:11.177888 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.179002 kubelet[3624]: E0416 23:33:11.177923 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.180016 kubelet[3624]: E0416 23:33:11.179961 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.180016 kubelet[3624]: W0416 23:33:11.180004 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.180568 kubelet[3624]: E0416 23:33:11.180047 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.181378 kubelet[3624]: E0416 23:33:11.181226 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.181378 kubelet[3624]: W0416 23:33:11.181268 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.181378 kubelet[3624]: E0416 23:33:11.181304 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.183211 kubelet[3624]: E0416 23:33:11.182497 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.183211 kubelet[3624]: W0416 23:33:11.182542 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.183211 kubelet[3624]: E0416 23:33:11.182580 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.186169 kubelet[3624]: E0416 23:33:11.184428 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.186169 kubelet[3624]: W0416 23:33:11.184472 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.186169 kubelet[3624]: E0416 23:33:11.184509 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.186169 kubelet[3624]: E0416 23:33:11.185513 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.186169 kubelet[3624]: W0416 23:33:11.185542 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.186169 kubelet[3624]: E0416 23:33:11.185574 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 23:33:11.187939 kubelet[3624]: E0416 23:33:11.187555 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 23:33:11.187939 kubelet[3624]: W0416 23:33:11.187609 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 23:33:11.187939 kubelet[3624]: E0416 23:33:11.187650 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 16 23:33:11.190301 kubelet[3624]: E0416 23:33:11.188764 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:11.190301 kubelet[3624]: W0416 23:33:11.188797 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:11.190301 kubelet[3624]: E0416 23:33:11.188834 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:11.191477 kubelet[3624]: E0416 23:33:11.191422 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:11.191477 kubelet[3624]: W0416 23:33:11.191465 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:11.191680 kubelet[3624]: E0416 23:33:11.191505 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:11.192274 kubelet[3624]: E0416 23:33:11.192222 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:11.192274 kubelet[3624]: W0416 23:33:11.192263 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:11.192857 kubelet[3624]: E0416 23:33:11.192299 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:11.192857 kubelet[3624]: E0416 23:33:11.192762 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:11.192857 kubelet[3624]: W0416 23:33:11.192786 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:11.192857 kubelet[3624]: E0416 23:33:11.192816 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:11.194371 kubelet[3624]: E0416 23:33:11.193252 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:11.194371 kubelet[3624]: W0416 23:33:11.193309 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:11.194371 kubelet[3624]: E0416 23:33:11.193337 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:11.194371 kubelet[3624]: E0416 23:33:11.193838 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:11.194371 kubelet[3624]: W0416 23:33:11.193865 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:11.194371 kubelet[3624]: E0416 23:33:11.193902 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:11.195777 kubelet[3624]: E0416 23:33:11.195439 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:11.195777 kubelet[3624]: W0416 23:33:11.195478 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:11.195777 kubelet[3624]: E0416 23:33:11.195513 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:11.201168 kubelet[3624]: E0416 23:33:11.198650 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:11.201168 kubelet[3624]: W0416 23:33:11.198700 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:11.201168 kubelet[3624]: E0416 23:33:11.198738 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:11.201168 kubelet[3624]: E0416 23:33:11.200968 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:11.201168 kubelet[3624]: W0416 23:33:11.201000 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:11.201168 kubelet[3624]: E0416 23:33:11.201036 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 23:33:11.251604 systemd[1]: Started cri-containerd-f5e64d00e9de4d31cabce29b8bb0d3a6de235abdcf1ce9e3612772515fb215b5.scope - libcontainer container f5e64d00e9de4d31cabce29b8bb0d3a6de235abdcf1ce9e3612772515fb215b5. Apr 16 23:33:11.267313 kubelet[3624]: E0416 23:33:11.267248 3624 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 23:33:11.267313 kubelet[3624]: W0416 23:33:11.267297 3624 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 23:33:11.267533 kubelet[3624]: E0416 23:33:11.267338 3624 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 23:33:11.368572 containerd[2004]: time="2026-04-16T23:33:11.368396274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vchfz,Uid:fca5d126-ab98-4b70-9275-3e97a11c798b,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5e64d00e9de4d31cabce29b8bb0d3a6de235abdcf1ce9e3612772515fb215b5\"" Apr 16 23:33:11.373382 containerd[2004]: time="2026-04-16T23:33:11.373310298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 16 23:33:11.435937 containerd[2004]: time="2026-04-16T23:33:11.435655110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64dc7f77c7-vbdbp,Uid:31620869-0d72-419c-862f-3b33f49083b5,Namespace:calico-system,Attempt:0,} returns sandbox id \"a631344c3516f66b96d932f26c562b33c156e5e4cac31caab7d71bac9969f502\"" Apr 16 23:33:12.559483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4179746579.mount: Deactivated successfully. Apr 16 23:33:12.706158 containerd[2004]: time="2026-04-16T23:33:12.705864848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:12.708505 containerd[2004]: time="2026-04-16T23:33:12.708442100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=5855345" Apr 16 23:33:12.709897 containerd[2004]: time="2026-04-16T23:33:12.709796048Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:12.716255 containerd[2004]: time="2026-04-16T23:33:12.715676504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:12.718842 
containerd[2004]: time="2026-04-16T23:33:12.718780724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.345400154s" Apr 16 23:33:12.719028 containerd[2004]: time="2026-04-16T23:33:12.718996532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Apr 16 23:33:12.725853 containerd[2004]: time="2026-04-16T23:33:12.725418788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 16 23:33:12.730773 containerd[2004]: time="2026-04-16T23:33:12.730640468Z" level=info msg="CreateContainer within sandbox \"f5e64d00e9de4d31cabce29b8bb0d3a6de235abdcf1ce9e3612772515fb215b5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 16 23:33:12.747167 containerd[2004]: time="2026-04-16T23:33:12.746402480Z" level=info msg="Container d2c5cbb3ea111a9d2c587a2c488cb1293fc2ed7ff1347f99e07a74e2fa6edadf: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:12.765188 containerd[2004]: time="2026-04-16T23:33:12.765074901Z" level=info msg="CreateContainer within sandbox \"f5e64d00e9de4d31cabce29b8bb0d3a6de235abdcf1ce9e3612772515fb215b5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d2c5cbb3ea111a9d2c587a2c488cb1293fc2ed7ff1347f99e07a74e2fa6edadf\"" Apr 16 23:33:12.766102 containerd[2004]: time="2026-04-16T23:33:12.766046325Z" level=info msg="StartContainer for \"d2c5cbb3ea111a9d2c587a2c488cb1293fc2ed7ff1347f99e07a74e2fa6edadf\"" Apr 16 23:33:12.771691 containerd[2004]: time="2026-04-16T23:33:12.771606765Z" level=info msg="connecting to shim 
d2c5cbb3ea111a9d2c587a2c488cb1293fc2ed7ff1347f99e07a74e2fa6edadf" address="unix:///run/containerd/s/f7175fa5b9a42720930a40917a40d8a35a5bfdbd39f490b15d58c1f97892ae4d" protocol=ttrpc version=3 Apr 16 23:33:12.788530 kubelet[3624]: E0416 23:33:12.787893 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9nj" podUID="a71fac77-981f-4b75-9304-8c3a33a51180" Apr 16 23:33:12.838466 systemd[1]: Started cri-containerd-d2c5cbb3ea111a9d2c587a2c488cb1293fc2ed7ff1347f99e07a74e2fa6edadf.scope - libcontainer container d2c5cbb3ea111a9d2c587a2c488cb1293fc2ed7ff1347f99e07a74e2fa6edadf. Apr 16 23:33:12.968462 containerd[2004]: time="2026-04-16T23:33:12.967984006Z" level=info msg="StartContainer for \"d2c5cbb3ea111a9d2c587a2c488cb1293fc2ed7ff1347f99e07a74e2fa6edadf\" returns successfully" Apr 16 23:33:13.005912 systemd[1]: cri-containerd-d2c5cbb3ea111a9d2c587a2c488cb1293fc2ed7ff1347f99e07a74e2fa6edadf.scope: Deactivated successfully. Apr 16 23:33:13.014824 containerd[2004]: time="2026-04-16T23:33:13.014593794Z" level=info msg="received container exit event container_id:\"d2c5cbb3ea111a9d2c587a2c488cb1293fc2ed7ff1347f99e07a74e2fa6edadf\" id:\"d2c5cbb3ea111a9d2c587a2c488cb1293fc2ed7ff1347f99e07a74e2fa6edadf\" pid:4226 exited_at:{seconds:1776382393 nanos:13733346}" Apr 16 23:33:13.074747 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2c5cbb3ea111a9d2c587a2c488cb1293fc2ed7ff1347f99e07a74e2fa6edadf-rootfs.mount: Deactivated successfully. 
Apr 16 23:33:14.729992 containerd[2004]: time="2026-04-16T23:33:14.729908506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:14.731452 containerd[2004]: time="2026-04-16T23:33:14.731401786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=32467511" Apr 16 23:33:14.732280 containerd[2004]: time="2026-04-16T23:33:14.732220066Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:14.738175 containerd[2004]: time="2026-04-16T23:33:14.738087274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.012600506s" Apr 16 23:33:14.738175 containerd[2004]: time="2026-04-16T23:33:14.738171634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Apr 16 23:33:14.739323 containerd[2004]: time="2026-04-16T23:33:14.738399706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:14.742596 containerd[2004]: time="2026-04-16T23:33:14.742515970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 16 23:33:14.772186 containerd[2004]: time="2026-04-16T23:33:14.771969011Z" level=info msg="CreateContainer within sandbox \"a631344c3516f66b96d932f26c562b33c156e5e4cac31caab7d71bac9969f502\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 16 23:33:14.780691 kubelet[3624]: E0416 23:33:14.780624 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9nj" podUID="a71fac77-981f-4b75-9304-8c3a33a51180" Apr 16 23:33:14.792602 containerd[2004]: time="2026-04-16T23:33:14.792490067Z" level=info msg="Container c2731595dfc46003b22f9e9ae23a342676bb973e2a549448a12ab0a22cb7ce88: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:14.806974 containerd[2004]: time="2026-04-16T23:33:14.806871671Z" level=info msg="CreateContainer within sandbox \"a631344c3516f66b96d932f26c562b33c156e5e4cac31caab7d71bac9969f502\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c2731595dfc46003b22f9e9ae23a342676bb973e2a549448a12ab0a22cb7ce88\"" Apr 16 23:33:14.809560 containerd[2004]: time="2026-04-16T23:33:14.809480411Z" level=info msg="StartContainer for \"c2731595dfc46003b22f9e9ae23a342676bb973e2a549448a12ab0a22cb7ce88\"" Apr 16 23:33:14.812005 containerd[2004]: time="2026-04-16T23:33:14.811934831Z" level=info msg="connecting to shim c2731595dfc46003b22f9e9ae23a342676bb973e2a549448a12ab0a22cb7ce88" address="unix:///run/containerd/s/8a234532fbe96673c4ab90727f9f8e319bcd735609446fb179e37f25d16d644c" protocol=ttrpc version=3 Apr 16 23:33:14.855442 systemd[1]: Started cri-containerd-c2731595dfc46003b22f9e9ae23a342676bb973e2a549448a12ab0a22cb7ce88.scope - libcontainer container c2731595dfc46003b22f9e9ae23a342676bb973e2a549448a12ab0a22cb7ce88. 
Apr 16 23:33:14.941248 containerd[2004]: time="2026-04-16T23:33:14.940871303Z" level=info msg="StartContainer for \"c2731595dfc46003b22f9e9ae23a342676bb973e2a549448a12ab0a22cb7ce88\" returns successfully" Apr 16 23:33:15.996162 kubelet[3624]: I0416 23:33:15.995487 3624 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:33:16.787039 kubelet[3624]: E0416 23:33:16.786952 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9nj" podUID="a71fac77-981f-4b75-9304-8c3a33a51180" Apr 16 23:33:18.780958 kubelet[3624]: E0416 23:33:18.780861 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9nj" podUID="a71fac77-981f-4b75-9304-8c3a33a51180" Apr 16 23:33:20.780599 kubelet[3624]: E0416 23:33:20.780519 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9nj" podUID="a71fac77-981f-4b75-9304-8c3a33a51180" Apr 16 23:33:21.363833 kubelet[3624]: I0416 23:33:21.363777 3624 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:33:21.401492 kubelet[3624]: I0416 23:33:21.400472 3624 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-64dc7f77c7-vbdbp" podStartSLOduration=8.100906031 podStartE2EDuration="11.400448703s" podCreationTimestamp="2026-04-16 23:33:10 +0000 UTC" firstStartedPulling="2026-04-16 23:33:11.440795202 +0000 UTC 
m=+28.982870893" lastFinishedPulling="2026-04-16 23:33:14.740337778 +0000 UTC m=+32.282413565" observedRunningTime="2026-04-16 23:33:15.017523236 +0000 UTC m=+32.559598975" watchObservedRunningTime="2026-04-16 23:33:21.400448703 +0000 UTC m=+38.942524406" Apr 16 23:33:21.782804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579491695.mount: Deactivated successfully. Apr 16 23:33:21.852335 containerd[2004]: time="2026-04-16T23:33:21.852219330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:21.854029 containerd[2004]: time="2026-04-16T23:33:21.853772466Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Apr 16 23:33:21.855170 containerd[2004]: time="2026-04-16T23:33:21.855041394Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:21.865233 containerd[2004]: time="2026-04-16T23:33:21.865118106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:21.868678 containerd[2004]: time="2026-04-16T23:33:21.868457886Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 7.1258667s" Apr 16 23:33:21.868678 containerd[2004]: time="2026-04-16T23:33:21.868542342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference 
\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Apr 16 23:33:21.882883 containerd[2004]: time="2026-04-16T23:33:21.881570490Z" level=info msg="CreateContainer within sandbox \"f5e64d00e9de4d31cabce29b8bb0d3a6de235abdcf1ce9e3612772515fb215b5\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 16 23:33:21.919433 containerd[2004]: time="2026-04-16T23:33:21.919370574Z" level=info msg="Container 77912620b8fe2830144e3ceda6e1b847c7a0c8e43db40904b193f410d55953c0: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:21.934011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2359064785.mount: Deactivated successfully. Apr 16 23:33:21.947701 containerd[2004]: time="2026-04-16T23:33:21.947632866Z" level=info msg="CreateContainer within sandbox \"f5e64d00e9de4d31cabce29b8bb0d3a6de235abdcf1ce9e3612772515fb215b5\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"77912620b8fe2830144e3ceda6e1b847c7a0c8e43db40904b193f410d55953c0\"" Apr 16 23:33:21.949264 containerd[2004]: time="2026-04-16T23:33:21.949205754Z" level=info msg="StartContainer for \"77912620b8fe2830144e3ceda6e1b847c7a0c8e43db40904b193f410d55953c0\"" Apr 16 23:33:21.953005 containerd[2004]: time="2026-04-16T23:33:21.952945122Z" level=info msg="connecting to shim 77912620b8fe2830144e3ceda6e1b847c7a0c8e43db40904b193f410d55953c0" address="unix:///run/containerd/s/f7175fa5b9a42720930a40917a40d8a35a5bfdbd39f490b15d58c1f97892ae4d" protocol=ttrpc version=3 Apr 16 23:33:21.998863 systemd[1]: Started cri-containerd-77912620b8fe2830144e3ceda6e1b847c7a0c8e43db40904b193f410d55953c0.scope - libcontainer container 77912620b8fe2830144e3ceda6e1b847c7a0c8e43db40904b193f410d55953c0. 
Apr 16 23:33:22.127638 containerd[2004]: time="2026-04-16T23:33:22.127358331Z" level=info msg="StartContainer for \"77912620b8fe2830144e3ceda6e1b847c7a0c8e43db40904b193f410d55953c0\" returns successfully" Apr 16 23:33:22.342792 systemd[1]: cri-containerd-77912620b8fe2830144e3ceda6e1b847c7a0c8e43db40904b193f410d55953c0.scope: Deactivated successfully. Apr 16 23:33:22.349597 containerd[2004]: time="2026-04-16T23:33:22.349389136Z" level=info msg="received container exit event container_id:\"77912620b8fe2830144e3ceda6e1b847c7a0c8e43db40904b193f410d55953c0\" id:\"77912620b8fe2830144e3ceda6e1b847c7a0c8e43db40904b193f410d55953c0\" pid:4330 exited_at:{seconds:1776382402 nanos:348954820}" Apr 16 23:33:22.779515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77912620b8fe2830144e3ceda6e1b847c7a0c8e43db40904b193f410d55953c0-rootfs.mount: Deactivated successfully. Apr 16 23:33:22.783454 kubelet[3624]: E0416 23:33:22.783384 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9nj" podUID="a71fac77-981f-4b75-9304-8c3a33a51180" Apr 16 23:33:23.047279 containerd[2004]: time="2026-04-16T23:33:23.046689796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 16 23:33:24.781235 kubelet[3624]: E0416 23:33:24.781033 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9nj" podUID="a71fac77-981f-4b75-9304-8c3a33a51180" Apr 16 23:33:26.072916 containerd[2004]: time="2026-04-16T23:33:26.072841147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 16 23:33:26.074688 containerd[2004]: time="2026-04-16T23:33:26.074499571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Apr 16 23:33:26.075963 containerd[2004]: time="2026-04-16T23:33:26.075828703Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:26.082275 containerd[2004]: time="2026-04-16T23:33:26.082183027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:26.085770 containerd[2004]: time="2026-04-16T23:33:26.085660183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.038546163s" Apr 16 23:33:26.085770 containerd[2004]: time="2026-04-16T23:33:26.085740799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Apr 16 23:33:26.096252 containerd[2004]: time="2026-04-16T23:33:26.096180535Z" level=info msg="CreateContainer within sandbox \"f5e64d00e9de4d31cabce29b8bb0d3a6de235abdcf1ce9e3612772515fb215b5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 16 23:33:26.111779 containerd[2004]: time="2026-04-16T23:33:26.110170063Z" level=info msg="Container 777fe9aa57b97df83b4083758cd55cc897c4133d348c11b8d951063cb64aa62f: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:26.137215 containerd[2004]: time="2026-04-16T23:33:26.137087647Z" level=info 
msg="CreateContainer within sandbox \"f5e64d00e9de4d31cabce29b8bb0d3a6de235abdcf1ce9e3612772515fb215b5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"777fe9aa57b97df83b4083758cd55cc897c4133d348c11b8d951063cb64aa62f\"" Apr 16 23:33:26.138663 containerd[2004]: time="2026-04-16T23:33:26.138570847Z" level=info msg="StartContainer for \"777fe9aa57b97df83b4083758cd55cc897c4133d348c11b8d951063cb64aa62f\"" Apr 16 23:33:26.142001 containerd[2004]: time="2026-04-16T23:33:26.141944419Z" level=info msg="connecting to shim 777fe9aa57b97df83b4083758cd55cc897c4133d348c11b8d951063cb64aa62f" address="unix:///run/containerd/s/f7175fa5b9a42720930a40917a40d8a35a5bfdbd39f490b15d58c1f97892ae4d" protocol=ttrpc version=3 Apr 16 23:33:26.184679 systemd[1]: Started cri-containerd-777fe9aa57b97df83b4083758cd55cc897c4133d348c11b8d951063cb64aa62f.scope - libcontainer container 777fe9aa57b97df83b4083758cd55cc897c4133d348c11b8d951063cb64aa62f. Apr 16 23:33:26.303295 containerd[2004]: time="2026-04-16T23:33:26.303088256Z" level=info msg="StartContainer for \"777fe9aa57b97df83b4083758cd55cc897c4133d348c11b8d951063cb64aa62f\" returns successfully" Apr 16 23:33:26.781073 kubelet[3624]: E0416 23:33:26.780975 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xx9nj" podUID="a71fac77-981f-4b75-9304-8c3a33a51180" Apr 16 23:33:27.899258 containerd[2004]: time="2026-04-16T23:33:27.899171316Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 23:33:27.905906 systemd[1]: 
cri-containerd-777fe9aa57b97df83b4083758cd55cc897c4133d348c11b8d951063cb64aa62f.scope: Deactivated successfully. Apr 16 23:33:27.907545 systemd[1]: cri-containerd-777fe9aa57b97df83b4083758cd55cc897c4133d348c11b8d951063cb64aa62f.scope: Consumed 1.057s CPU time, 185M memory peak, 252K read from disk, 171.3M written to disk. Apr 16 23:33:27.912895 containerd[2004]: time="2026-04-16T23:33:27.912809760Z" level=info msg="received container exit event container_id:\"777fe9aa57b97df83b4083758cd55cc897c4133d348c11b8d951063cb64aa62f\" id:\"777fe9aa57b97df83b4083758cd55cc897c4133d348c11b8d951063cb64aa62f\" pid:4393 exited_at:{seconds:1776382407 nanos:912277152}" Apr 16 23:33:27.950531 kubelet[3624]: I0416 23:33:27.949958 3624 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 16 23:33:28.085685 systemd[1]: Created slice kubepods-burstable-podcb940ee4_a92c_4b55_a310_219ffaee8b28.slice - libcontainer container kubepods-burstable-podcb940ee4_a92c_4b55_a310_219ffaee8b28.slice. Apr 16 23:33:28.115854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-777fe9aa57b97df83b4083758cd55cc897c4133d348c11b8d951063cb64aa62f-rootfs.mount: Deactivated successfully. Apr 16 23:33:28.161970 systemd[1]: Created slice kubepods-burstable-podcff0eaa9_a828_4135_903b_d58ada095053.slice - libcontainer container kubepods-burstable-podcff0eaa9_a828_4135_903b_d58ada095053.slice. Apr 16 23:33:28.196550 systemd[1]: Created slice kubepods-besteffort-pode941b08a_f79d_4a10_9fb6_55e6ada439aa.slice - libcontainer container kubepods-besteffort-pode941b08a_f79d_4a10_9fb6_55e6ada439aa.slice. Apr 16 23:33:28.225915 systemd[1]: Created slice kubepods-besteffort-podf58985e8_82ab_45c7_a1da_3382768dd37c.slice - libcontainer container kubepods-besteffort-podf58985e8_82ab_45c7_a1da_3382768dd37c.slice. 
Apr 16 23:33:28.226522 kubelet[3624]: I0416 23:33:28.226452 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb940ee4-a92c-4b55-a310-219ffaee8b28-config-volume\") pod \"coredns-7d764666f9-9trh4\" (UID: \"cb940ee4-a92c-4b55-a310-219ffaee8b28\") " pod="kube-system/coredns-7d764666f9-9trh4" Apr 16 23:33:28.226522 kubelet[3624]: I0416 23:33:28.226536 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss6rk\" (UniqueName: \"kubernetes.io/projected/cb940ee4-a92c-4b55-a310-219ffaee8b28-kube-api-access-ss6rk\") pod \"coredns-7d764666f9-9trh4\" (UID: \"cb940ee4-a92c-4b55-a310-219ffaee8b28\") " pod="kube-system/coredns-7d764666f9-9trh4" Apr 16 23:33:28.229312 kubelet[3624]: I0416 23:33:28.226588 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkrnl\" (UniqueName: \"kubernetes.io/projected/f58985e8-82ab-45c7-a1da-3382768dd37c-kube-api-access-pkrnl\") pod \"calico-apiserver-7fc4b8bb6b-mcbtl\" (UID: \"f58985e8-82ab-45c7-a1da-3382768dd37c\") " pod="calico-system/calico-apiserver-7fc4b8bb6b-mcbtl" Apr 16 23:33:28.229312 kubelet[3624]: I0416 23:33:28.226627 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2w7m\" (UniqueName: \"kubernetes.io/projected/cff0eaa9-a828-4135-903b-d58ada095053-kube-api-access-n2w7m\") pod \"coredns-7d764666f9-w6n45\" (UID: \"cff0eaa9-a828-4135-903b-d58ada095053\") " pod="kube-system/coredns-7d764666f9-w6n45" Apr 16 23:33:28.229312 kubelet[3624]: I0416 23:33:28.226666 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fb86f07-5b07-499c-862c-aa8f0e5e95c3-tigera-ca-bundle\") pod \"calico-kube-controllers-694f584b75-mmxmb\" (UID: 
\"9fb86f07-5b07-499c-862c-aa8f0e5e95c3\") " pod="calico-system/calico-kube-controllers-694f584b75-mmxmb" Apr 16 23:33:28.229312 kubelet[3624]: I0416 23:33:28.226731 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcx92\" (UniqueName: \"kubernetes.io/projected/9fb86f07-5b07-499c-862c-aa8f0e5e95c3-kube-api-access-wcx92\") pod \"calico-kube-controllers-694f584b75-mmxmb\" (UID: \"9fb86f07-5b07-499c-862c-aa8f0e5e95c3\") " pod="calico-system/calico-kube-controllers-694f584b75-mmxmb" Apr 16 23:33:28.229312 kubelet[3624]: I0416 23:33:28.226814 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e941b08a-f79d-4a10-9fb6-55e6ada439aa-calico-apiserver-certs\") pod \"calico-apiserver-7fc4b8bb6b-shv5x\" (UID: \"e941b08a-f79d-4a10-9fb6-55e6ada439aa\") " pod="calico-system/calico-apiserver-7fc4b8bb6b-shv5x" Apr 16 23:33:28.229612 kubelet[3624]: I0416 23:33:28.226859 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm66x\" (UniqueName: \"kubernetes.io/projected/e941b08a-f79d-4a10-9fb6-55e6ada439aa-kube-api-access-pm66x\") pod \"calico-apiserver-7fc4b8bb6b-shv5x\" (UID: \"e941b08a-f79d-4a10-9fb6-55e6ada439aa\") " pod="calico-system/calico-apiserver-7fc4b8bb6b-shv5x" Apr 16 23:33:28.229612 kubelet[3624]: I0416 23:33:28.226904 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f58985e8-82ab-45c7-a1da-3382768dd37c-calico-apiserver-certs\") pod \"calico-apiserver-7fc4b8bb6b-mcbtl\" (UID: \"f58985e8-82ab-45c7-a1da-3382768dd37c\") " pod="calico-system/calico-apiserver-7fc4b8bb6b-mcbtl" Apr 16 23:33:28.229612 kubelet[3624]: I0416 23:33:28.226978 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cff0eaa9-a828-4135-903b-d58ada095053-config-volume\") pod \"coredns-7d764666f9-w6n45\" (UID: \"cff0eaa9-a828-4135-903b-d58ada095053\") " pod="kube-system/coredns-7d764666f9-w6n45" Apr 16 23:33:28.257587 systemd[1]: Created slice kubepods-besteffort-pod9fb86f07_5b07_499c_862c_aa8f0e5e95c3.slice - libcontainer container kubepods-besteffort-pod9fb86f07_5b07_499c_862c_aa8f0e5e95c3.slice. Apr 16 23:33:28.281896 systemd[1]: Created slice kubepods-besteffort-pod2177fb1d_6bc7_4a5b_a326_d8cb2233c432.slice - libcontainer container kubepods-besteffort-pod2177fb1d_6bc7_4a5b_a326_d8cb2233c432.slice. Apr 16 23:33:28.295295 systemd[1]: Created slice kubepods-besteffort-podedc50944_d647_459a_a5b1_9d5cbd8a1ecf.slice - libcontainer container kubepods-besteffort-podedc50944_d647_459a_a5b1_9d5cbd8a1ecf.slice. Apr 16 23:33:28.329417 kubelet[3624]: I0416 23:33:28.328099 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2177fb1d-6bc7-4a5b-a326-d8cb2233c432-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-vqhkd\" (UID: \"2177fb1d-6bc7-4a5b-a326-d8cb2233c432\") " pod="calico-system/goldmane-9f7667bb8-vqhkd" Apr 16 23:33:28.329417 kubelet[3624]: I0416 23:33:28.328255 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-whisker-ca-bundle\") pod \"whisker-79bbdc5db9-vlqn9\" (UID: \"edc50944-d647-459a-a5b1-9d5cbd8a1ecf\") " pod="calico-system/whisker-79bbdc5db9-vlqn9" Apr 16 23:33:28.332791 kubelet[3624]: I0416 23:33:28.331794 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2177fb1d-6bc7-4a5b-a326-d8cb2233c432-config\") pod \"goldmane-9f7667bb8-vqhkd\" (UID: 
\"2177fb1d-6bc7-4a5b-a326-d8cb2233c432\") " pod="calico-system/goldmane-9f7667bb8-vqhkd" Apr 16 23:33:28.333564 kubelet[3624]: I0416 23:33:28.333184 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcb9d\" (UniqueName: \"kubernetes.io/projected/2177fb1d-6bc7-4a5b-a326-d8cb2233c432-kube-api-access-kcb9d\") pod \"goldmane-9f7667bb8-vqhkd\" (UID: \"2177fb1d-6bc7-4a5b-a326-d8cb2233c432\") " pod="calico-system/goldmane-9f7667bb8-vqhkd" Apr 16 23:33:28.334524 kubelet[3624]: I0416 23:33:28.334417 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-nginx-config\") pod \"whisker-79bbdc5db9-vlqn9\" (UID: \"edc50944-d647-459a-a5b1-9d5cbd8a1ecf\") " pod="calico-system/whisker-79bbdc5db9-vlqn9" Apr 16 23:33:28.334524 kubelet[3624]: I0416 23:33:28.334517 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-whisker-backend-key-pair\") pod \"whisker-79bbdc5db9-vlqn9\" (UID: \"edc50944-d647-459a-a5b1-9d5cbd8a1ecf\") " pod="calico-system/whisker-79bbdc5db9-vlqn9" Apr 16 23:33:28.334746 kubelet[3624]: I0416 23:33:28.334589 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2177fb1d-6bc7-4a5b-a326-d8cb2233c432-goldmane-key-pair\") pod \"goldmane-9f7667bb8-vqhkd\" (UID: \"2177fb1d-6bc7-4a5b-a326-d8cb2233c432\") " pod="calico-system/goldmane-9f7667bb8-vqhkd" Apr 16 23:33:28.334746 kubelet[3624]: I0416 23:33:28.334635 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csqzv\" (UniqueName: 
\"kubernetes.io/projected/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-kube-api-access-csqzv\") pod \"whisker-79bbdc5db9-vlqn9\" (UID: \"edc50944-d647-459a-a5b1-9d5cbd8a1ecf\") " pod="calico-system/whisker-79bbdc5db9-vlqn9" Apr 16 23:33:28.487028 containerd[2004]: time="2026-04-16T23:33:28.484572575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-w6n45,Uid:cff0eaa9-a828-4135-903b-d58ada095053,Namespace:kube-system,Attempt:0,}" Apr 16 23:33:28.522758 containerd[2004]: time="2026-04-16T23:33:28.522536567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc4b8bb6b-shv5x,Uid:e941b08a-f79d-4a10-9fb6-55e6ada439aa,Namespace:calico-system,Attempt:0,}" Apr 16 23:33:28.546884 containerd[2004]: time="2026-04-16T23:33:28.546824435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc4b8bb6b-mcbtl,Uid:f58985e8-82ab-45c7-a1da-3382768dd37c,Namespace:calico-system,Attempt:0,}" Apr 16 23:33:28.571779 containerd[2004]: time="2026-04-16T23:33:28.571556123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-694f584b75-mmxmb,Uid:9fb86f07-5b07-499c-862c-aa8f0e5e95c3,Namespace:calico-system,Attempt:0,}" Apr 16 23:33:28.595825 containerd[2004]: time="2026-04-16T23:33:28.595763291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-vqhkd,Uid:2177fb1d-6bc7-4a5b-a326-d8cb2233c432,Namespace:calico-system,Attempt:0,}" Apr 16 23:33:28.609266 containerd[2004]: time="2026-04-16T23:33:28.609169979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79bbdc5db9-vlqn9,Uid:edc50944-d647-459a-a5b1-9d5cbd8a1ecf,Namespace:calico-system,Attempt:0,}" Apr 16 23:33:28.742439 containerd[2004]: time="2026-04-16T23:33:28.741961668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-9trh4,Uid:cb940ee4-a92c-4b55-a310-219ffaee8b28,Namespace:kube-system,Attempt:0,}" Apr 16 23:33:28.804063 systemd[1]: Created slice 
kubepods-besteffort-poda71fac77_981f_4b75_9304_8c3a33a51180.slice - libcontainer container kubepods-besteffort-poda71fac77_981f_4b75_9304_8c3a33a51180.slice. Apr 16 23:33:28.816330 containerd[2004]: time="2026-04-16T23:33:28.816255000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xx9nj,Uid:a71fac77-981f-4b75-9304-8c3a33a51180,Namespace:calico-system,Attempt:0,}" Apr 16 23:33:29.042238 containerd[2004]: time="2026-04-16T23:33:29.042009765Z" level=error msg="Failed to destroy network for sandbox \"dd25b742ad1ee0a2cc990e415c176d0067b2e48d6f0654475950a68345d7b112\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.053426 containerd[2004]: time="2026-04-16T23:33:29.053326665Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-w6n45,Uid:cff0eaa9-a828-4135-903b-d58ada095053,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd25b742ad1ee0a2cc990e415c176d0067b2e48d6f0654475950a68345d7b112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.054000 kubelet[3624]: E0416 23:33:29.053927 3624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd25b742ad1ee0a2cc990e415c176d0067b2e48d6f0654475950a68345d7b112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.058657 kubelet[3624]: E0416 23:33:29.054437 3624 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"dd25b742ad1ee0a2cc990e415c176d0067b2e48d6f0654475950a68345d7b112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-w6n45" Apr 16 23:33:29.058657 kubelet[3624]: E0416 23:33:29.054490 3624 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd25b742ad1ee0a2cc990e415c176d0067b2e48d6f0654475950a68345d7b112\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-w6n45" Apr 16 23:33:29.058657 kubelet[3624]: E0416 23:33:29.056714 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-w6n45_kube-system(cff0eaa9-a828-4135-903b-d58ada095053)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-w6n45_kube-system(cff0eaa9-a828-4135-903b-d58ada095053)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd25b742ad1ee0a2cc990e415c176d0067b2e48d6f0654475950a68345d7b112\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-w6n45" podUID="cff0eaa9-a828-4135-903b-d58ada095053" Apr 16 23:33:29.119967 containerd[2004]: time="2026-04-16T23:33:29.119071306Z" level=error msg="Failed to destroy network for sandbox \"b738f0399401594a79a361abff81906942227150bdc6c582d13e516948cd7ea0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 
23:33:29.134484 containerd[2004]: time="2026-04-16T23:33:29.134346154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc4b8bb6b-mcbtl,Uid:f58985e8-82ab-45c7-a1da-3382768dd37c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b738f0399401594a79a361abff81906942227150bdc6c582d13e516948cd7ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.136425 kubelet[3624]: E0416 23:33:29.134744 3624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b738f0399401594a79a361abff81906942227150bdc6c582d13e516948cd7ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.136425 kubelet[3624]: E0416 23:33:29.134825 3624 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b738f0399401594a79a361abff81906942227150bdc6c582d13e516948cd7ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7fc4b8bb6b-mcbtl" Apr 16 23:33:29.136425 kubelet[3624]: E0416 23:33:29.134864 3624 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b738f0399401594a79a361abff81906942227150bdc6c582d13e516948cd7ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-7fc4b8bb6b-mcbtl" Apr 16 23:33:29.136733 kubelet[3624]: E0416 23:33:29.134951 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fc4b8bb6b-mcbtl_calico-system(f58985e8-82ab-45c7-a1da-3382768dd37c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fc4b8bb6b-mcbtl_calico-system(f58985e8-82ab-45c7-a1da-3382768dd37c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b738f0399401594a79a361abff81906942227150bdc6c582d13e516948cd7ea0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7fc4b8bb6b-mcbtl" podUID="f58985e8-82ab-45c7-a1da-3382768dd37c" Apr 16 23:33:29.164895 systemd[1]: run-netns-cni\x2d661e42b2\x2d2735\x2d594a\x2d2caa\x2db746d0b59937.mount: Deactivated successfully. Apr 16 23:33:29.181654 containerd[2004]: time="2026-04-16T23:33:29.181569358Z" level=error msg="Failed to destroy network for sandbox \"99d0fbbdb7e919d8ebf1895ab33627333b909cfc10b4ba134796a7d89115b2e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.187021 containerd[2004]: time="2026-04-16T23:33:29.184380382Z" level=error msg="Failed to destroy network for sandbox \"4a7587be64872753ae99e858e4ccc49d337ff2889004527b9fde5b8f53453ea3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.187329 systemd[1]: run-netns-cni\x2d9c31022f\x2d0174\x2d379b\x2d16f8\x2d3e5bc867d2c5.mount: Deactivated successfully. 
Apr 16 23:33:29.195361 systemd[1]: run-netns-cni\x2d591780dc\x2d5982\x2d0b0a\x2dafb6\x2dd68bcbcfb97e.mount: Deactivated successfully. Apr 16 23:33:29.199416 containerd[2004]: time="2026-04-16T23:33:29.199345378Z" level=error msg="Failed to destroy network for sandbox \"57f7bf3e74448a0788c6abfc2f7c0bb1792d64ac335326cd55007c0800cbb52a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.208659 systemd[1]: run-netns-cni\x2dc390b46e\x2da7ce\x2d7dd2\x2d8d44\x2d8fc043574ec7.mount: Deactivated successfully. Apr 16 23:33:29.215101 containerd[2004]: time="2026-04-16T23:33:29.212040826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-694f584b75-mmxmb,Uid:9fb86f07-5b07-499c-862c-aa8f0e5e95c3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d0fbbdb7e919d8ebf1895ab33627333b909cfc10b4ba134796a7d89115b2e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.220348 kubelet[3624]: E0416 23:33:29.219499 3624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d0fbbdb7e919d8ebf1895ab33627333b909cfc10b4ba134796a7d89115b2e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.220348 kubelet[3624]: E0416 23:33:29.219579 3624 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d0fbbdb7e919d8ebf1895ab33627333b909cfc10b4ba134796a7d89115b2e5\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-694f584b75-mmxmb" Apr 16 23:33:29.220348 kubelet[3624]: E0416 23:33:29.219617 3624 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99d0fbbdb7e919d8ebf1895ab33627333b909cfc10b4ba134796a7d89115b2e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-694f584b75-mmxmb" Apr 16 23:33:29.221012 kubelet[3624]: E0416 23:33:29.219718 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-694f584b75-mmxmb_calico-system(9fb86f07-5b07-499c-862c-aa8f0e5e95c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-694f584b75-mmxmb_calico-system(9fb86f07-5b07-499c-862c-aa8f0e5e95c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99d0fbbdb7e919d8ebf1895ab33627333b909cfc10b4ba134796a7d89115b2e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-694f584b75-mmxmb" podUID="9fb86f07-5b07-499c-862c-aa8f0e5e95c3" Apr 16 23:33:29.222970 containerd[2004]: time="2026-04-16T23:33:29.220447066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc4b8bb6b-shv5x,Uid:e941b08a-f79d-4a10-9fb6-55e6ada439aa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a7587be64872753ae99e858e4ccc49d337ff2889004527b9fde5b8f53453ea3\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.225895 containerd[2004]: time="2026-04-16T23:33:29.225715174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79bbdc5db9-vlqn9,Uid:edc50944-d647-459a-a5b1-9d5cbd8a1ecf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57f7bf3e74448a0788c6abfc2f7c0bb1792d64ac335326cd55007c0800cbb52a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.227609 kubelet[3624]: E0416 23:33:29.227551 3624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57f7bf3e74448a0788c6abfc2f7c0bb1792d64ac335326cd55007c0800cbb52a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.227775 kubelet[3624]: E0416 23:33:29.227633 3624 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57f7bf3e74448a0788c6abfc2f7c0bb1792d64ac335326cd55007c0800cbb52a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79bbdc5db9-vlqn9" Apr 16 23:33:29.227775 kubelet[3624]: E0416 23:33:29.227666 3624 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57f7bf3e74448a0788c6abfc2f7c0bb1792d64ac335326cd55007c0800cbb52a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79bbdc5db9-vlqn9" Apr 16 23:33:29.227775 kubelet[3624]: E0416 23:33:29.227744 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-79bbdc5db9-vlqn9_calico-system(edc50944-d647-459a-a5b1-9d5cbd8a1ecf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79bbdc5db9-vlqn9_calico-system(edc50944-d647-459a-a5b1-9d5cbd8a1ecf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57f7bf3e74448a0788c6abfc2f7c0bb1792d64ac335326cd55007c0800cbb52a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79bbdc5db9-vlqn9" podUID="edc50944-d647-459a-a5b1-9d5cbd8a1ecf" Apr 16 23:33:29.232915 kubelet[3624]: E0416 23:33:29.231383 3624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a7587be64872753ae99e858e4ccc49d337ff2889004527b9fde5b8f53453ea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.232915 kubelet[3624]: E0416 23:33:29.232446 3624 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a7587be64872753ae99e858e4ccc49d337ff2889004527b9fde5b8f53453ea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7fc4b8bb6b-shv5x" Apr 16 23:33:29.232915 kubelet[3624]: E0416 23:33:29.232484 3624 
kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a7587be64872753ae99e858e4ccc49d337ff2889004527b9fde5b8f53453ea3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7fc4b8bb6b-shv5x" Apr 16 23:33:29.233200 kubelet[3624]: E0416 23:33:29.232584 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fc4b8bb6b-shv5x_calico-system(e941b08a-f79d-4a10-9fb6-55e6ada439aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fc4b8bb6b-shv5x_calico-system(e941b08a-f79d-4a10-9fb6-55e6ada439aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a7587be64872753ae99e858e4ccc49d337ff2889004527b9fde5b8f53453ea3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7fc4b8bb6b-shv5x" podUID="e941b08a-f79d-4a10-9fb6-55e6ada439aa" Apr 16 23:33:29.262823 containerd[2004]: time="2026-04-16T23:33:29.259802074Z" level=error msg="Failed to destroy network for sandbox \"cf862e744d10be7df12015cd563c040a402dbc176941081649696023a935d345\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.266779 containerd[2004]: time="2026-04-16T23:33:29.263841299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-vqhkd,Uid:2177fb1d-6bc7-4a5b-a326-d8cb2233c432,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cf862e744d10be7df12015cd563c040a402dbc176941081649696023a935d345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.266956 kubelet[3624]: E0416 23:33:29.264458 3624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf862e744d10be7df12015cd563c040a402dbc176941081649696023a935d345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 23:33:29.266956 kubelet[3624]: E0416 23:33:29.264549 3624 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf862e744d10be7df12015cd563c040a402dbc176941081649696023a935d345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-vqhkd" Apr 16 23:33:29.266956 kubelet[3624]: E0416 23:33:29.264583 3624 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf862e744d10be7df12015cd563c040a402dbc176941081649696023a935d345\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-vqhkd" Apr 16 23:33:29.267231 kubelet[3624]: E0416 23:33:29.264665 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-vqhkd_calico-system(2177fb1d-6bc7-4a5b-a326-d8cb2233c432)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-9f7667bb8-vqhkd_calico-system(2177fb1d-6bc7-4a5b-a326-d8cb2233c432)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf862e744d10be7df12015cd563c040a402dbc176941081649696023a935d345\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-vqhkd" podUID="2177fb1d-6bc7-4a5b-a326-d8cb2233c432"
Apr 16 23:33:29.277177 containerd[2004]: time="2026-04-16T23:33:29.276534047Z" level=info msg="CreateContainer within sandbox \"f5e64d00e9de4d31cabce29b8bb0d3a6de235abdcf1ce9e3612772515fb215b5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 16 23:33:29.308400 containerd[2004]: time="2026-04-16T23:33:29.306442619Z" level=info msg="Container be6d81f0ed1efef83f1ac6e53012ac698f5c2fa0b6c46e0cdeb26274855294f0: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:33:29.321617 containerd[2004]: time="2026-04-16T23:33:29.321506411Z" level=error msg="Failed to destroy network for sandbox \"5b88d4bcd70d66c0d6f178568ac232cc251f8f17e38d321f89db2d9832c1059e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:29.323389 containerd[2004]: time="2026-04-16T23:33:29.323233523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-9trh4,Uid:cb940ee4-a92c-4b55-a310-219ffaee8b28,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b88d4bcd70d66c0d6f178568ac232cc251f8f17e38d321f89db2d9832c1059e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:29.323846 kubelet[3624]: E0416 23:33:29.323650 3624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b88d4bcd70d66c0d6f178568ac232cc251f8f17e38d321f89db2d9832c1059e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:29.323984 kubelet[3624]: E0416 23:33:29.323902 3624 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b88d4bcd70d66c0d6f178568ac232cc251f8f17e38d321f89db2d9832c1059e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-9trh4"
Apr 16 23:33:29.323984 kubelet[3624]: E0416 23:33:29.323944 3624 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b88d4bcd70d66c0d6f178568ac232cc251f8f17e38d321f89db2d9832c1059e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-9trh4"
Apr 16 23:33:29.324253 kubelet[3624]: E0416 23:33:29.324202 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-9trh4_kube-system(cb940ee4-a92c-4b55-a310-219ffaee8b28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-9trh4_kube-system(cb940ee4-a92c-4b55-a310-219ffaee8b28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b88d4bcd70d66c0d6f178568ac232cc251f8f17e38d321f89db2d9832c1059e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-9trh4" podUID="cb940ee4-a92c-4b55-a310-219ffaee8b28"
Apr 16 23:33:29.328624 containerd[2004]: time="2026-04-16T23:33:29.328542491Z" level=error msg="Failed to destroy network for sandbox \"62e1ddf7154699def1c73d313d305c6581364dd4c13d93edd208dde4839d83cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:29.331523 containerd[2004]: time="2026-04-16T23:33:29.331446707Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xx9nj,Uid:a71fac77-981f-4b75-9304-8c3a33a51180,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e1ddf7154699def1c73d313d305c6581364dd4c13d93edd208dde4839d83cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:29.331849 kubelet[3624]: E0416 23:33:29.331787 3624 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e1ddf7154699def1c73d313d305c6581364dd4c13d93edd208dde4839d83cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 16 23:33:29.331939 kubelet[3624]: E0416 23:33:29.331872 3624 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e1ddf7154699def1c73d313d305c6581364dd4c13d93edd208dde4839d83cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xx9nj"
Apr 16 23:33:29.331939 kubelet[3624]: E0416 23:33:29.331907 3624 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62e1ddf7154699def1c73d313d305c6581364dd4c13d93edd208dde4839d83cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xx9nj"
Apr 16 23:33:29.332838 kubelet[3624]: E0416 23:33:29.332751 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xx9nj_calico-system(a71fac77-981f-4b75-9304-8c3a33a51180)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xx9nj_calico-system(a71fac77-981f-4b75-9304-8c3a33a51180)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62e1ddf7154699def1c73d313d305c6581364dd4c13d93edd208dde4839d83cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xx9nj" podUID="a71fac77-981f-4b75-9304-8c3a33a51180"
Apr 16 23:33:29.335799 containerd[2004]: time="2026-04-16T23:33:29.335718227Z" level=info msg="CreateContainer within sandbox \"f5e64d00e9de4d31cabce29b8bb0d3a6de235abdcf1ce9e3612772515fb215b5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"be6d81f0ed1efef83f1ac6e53012ac698f5c2fa0b6c46e0cdeb26274855294f0\""
Apr 16 23:33:29.336694 containerd[2004]: time="2026-04-16T23:33:29.336559367Z" level=info msg="StartContainer for \"be6d81f0ed1efef83f1ac6e53012ac698f5c2fa0b6c46e0cdeb26274855294f0\""
Apr 16 23:33:29.342149 containerd[2004]: time="2026-04-16T23:33:29.342055535Z" level=info msg="connecting to shim be6d81f0ed1efef83f1ac6e53012ac698f5c2fa0b6c46e0cdeb26274855294f0" address="unix:///run/containerd/s/f7175fa5b9a42720930a40917a40d8a35a5bfdbd39f490b15d58c1f97892ae4d" protocol=ttrpc version=3
Apr 16 23:33:29.376448 systemd[1]: Started cri-containerd-be6d81f0ed1efef83f1ac6e53012ac698f5c2fa0b6c46e0cdeb26274855294f0.scope - libcontainer container be6d81f0ed1efef83f1ac6e53012ac698f5c2fa0b6c46e0cdeb26274855294f0.
Apr 16 23:33:29.507937 containerd[2004]: time="2026-04-16T23:33:29.507884412Z" level=info msg="StartContainer for \"be6d81f0ed1efef83f1ac6e53012ac698f5c2fa0b6c46e0cdeb26274855294f0\" returns successfully"
Apr 16 23:33:30.114308 systemd[1]: run-netns-cni\x2d74081831\x2d2672\x2d562b\x2d0b5f\x2d5d0ecb05f824.mount: Deactivated successfully.
Apr 16 23:33:30.114492 systemd[1]: run-netns-cni\x2d40381238\x2d2bf4\x2de0f1\x2ded33\x2d36e1ab6315c6.mount: Deactivated successfully.
Apr 16 23:33:30.114611 systemd[1]: run-netns-cni\x2d189cf7b7\x2d542b\x2d7ac6\x2d2288\x2d25c17d92efa7.mount: Deactivated successfully.
Apr 16 23:33:30.357551 kubelet[3624]: I0416 23:33:30.357492 3624 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-kube-api-access-csqzv\" (UniqueName: \"kubernetes.io/projected/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-kube-api-access-csqzv\") pod \"edc50944-d647-459a-a5b1-9d5cbd8a1ecf\" (UID: \"edc50944-d647-459a-a5b1-9d5cbd8a1ecf\") "
Apr 16 23:33:30.358219 kubelet[3624]: I0416 23:33:30.357571 3624 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-nginx-config\" (UniqueName: \"kubernetes.io/configmap/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-nginx-config\") pod \"edc50944-d647-459a-a5b1-9d5cbd8a1ecf\" (UID: \"edc50944-d647-459a-a5b1-9d5cbd8a1ecf\") "
Apr 16 23:33:30.358219 kubelet[3624]: I0416 23:33:30.357640 3624 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-whisker-backend-key-pair\") pod \"edc50944-d647-459a-a5b1-9d5cbd8a1ecf\" (UID: \"edc50944-d647-459a-a5b1-9d5cbd8a1ecf\") "
Apr 16 23:33:30.358219 kubelet[3624]: I0416 23:33:30.357697 3624 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-whisker-ca-bundle\") pod \"edc50944-d647-459a-a5b1-9d5cbd8a1ecf\" (UID: \"edc50944-d647-459a-a5b1-9d5cbd8a1ecf\") "
Apr 16 23:33:30.361228 kubelet[3624]: I0416 23:33:30.360947 3624 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-whisker-ca-bundle" pod "edc50944-d647-459a-a5b1-9d5cbd8a1ecf" (UID: "edc50944-d647-459a-a5b1-9d5cbd8a1ecf"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 16 23:33:30.361932 kubelet[3624]: I0416 23:33:30.361867 3624 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-nginx-config" pod "edc50944-d647-459a-a5b1-9d5cbd8a1ecf" (UID: "edc50944-d647-459a-a5b1-9d5cbd8a1ecf"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 16 23:33:30.370942 kubelet[3624]: I0416 23:33:30.370797 3624 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-kube-api-access-csqzv" pod "edc50944-d647-459a-a5b1-9d5cbd8a1ecf" (UID: "edc50944-d647-459a-a5b1-9d5cbd8a1ecf"). InnerVolumeSpecName "kube-api-access-csqzv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 16 23:33:30.371478 systemd[1]: var-lib-kubelet-pods-edc50944\x2dd647\x2d459a\x2da5b1\x2d9d5cbd8a1ecf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcsqzv.mount: Deactivated successfully.
Apr 16 23:33:30.373155 kubelet[3624]: I0416 23:33:30.373063 3624 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-whisker-backend-key-pair" pod "edc50944-d647-459a-a5b1-9d5cbd8a1ecf" (UID: "edc50944-d647-459a-a5b1-9d5cbd8a1ecf"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 16 23:33:30.380330 systemd[1]: var-lib-kubelet-pods-edc50944\x2dd647\x2d459a\x2da5b1\x2d9d5cbd8a1ecf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Apr 16 23:33:30.459081 kubelet[3624]: I0416 23:33:30.458917 3624 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-whisker-backend-key-pair\") on node \"ip-172-31-18-112\" DevicePath \"\""
Apr 16 23:33:30.459081 kubelet[3624]: I0416 23:33:30.458969 3624 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-whisker-ca-bundle\") on node \"ip-172-31-18-112\" DevicePath \"\""
Apr 16 23:33:30.459081 kubelet[3624]: I0416 23:33:30.459010 3624 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-csqzv\" (UniqueName: \"kubernetes.io/projected/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-kube-api-access-csqzv\") on node \"ip-172-31-18-112\" DevicePath \"\""
Apr 16 23:33:30.459081 kubelet[3624]: I0416 23:33:30.459036 3624 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/edc50944-d647-459a-a5b1-9d5cbd8a1ecf-nginx-config\") on node \"ip-172-31-18-112\" DevicePath \"\""
Apr 16 23:33:30.794974 systemd[1]: Removed slice kubepods-besteffort-podedc50944_d647_459a_a5b1_9d5cbd8a1ecf.slice - libcontainer container kubepods-besteffort-podedc50944_d647_459a_a5b1_9d5cbd8a1ecf.slice.
Apr 16 23:33:31.283483 kubelet[3624]: I0416 23:33:31.283164 3624 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-vchfz" podStartSLOduration=3.415211373 podStartE2EDuration="21.283076065s" podCreationTimestamp="2026-04-16 23:33:10 +0000 UTC" firstStartedPulling="2026-04-16 23:33:11.370978614 +0000 UTC m=+28.913054317" lastFinishedPulling="2026-04-16 23:33:29.238843306 +0000 UTC m=+46.780919009" observedRunningTime="2026-04-16 23:33:30.315652704 +0000 UTC m=+47.857728407" watchObservedRunningTime="2026-04-16 23:33:31.283076065 +0000 UTC m=+48.825151888"
Apr 16 23:33:31.381691 systemd[1]: Created slice kubepods-besteffort-pod0a1da6d4_f3ff_41f9_91a3_a68cb2c0e87e.slice - libcontainer container kubepods-besteffort-pod0a1da6d4_f3ff_41f9_91a3_a68cb2c0e87e.slice.
Apr 16 23:33:31.466168 kubelet[3624]: I0416 23:33:31.465807 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0a1da6d4-f3ff-41f9-91a3-a68cb2c0e87e-whisker-backend-key-pair\") pod \"whisker-56bd87794-rh2gf\" (UID: \"0a1da6d4-f3ff-41f9-91a3-a68cb2c0e87e\") " pod="calico-system/whisker-56bd87794-rh2gf"
Apr 16 23:33:31.466168 kubelet[3624]: I0416 23:33:31.465898 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a1da6d4-f3ff-41f9-91a3-a68cb2c0e87e-whisker-ca-bundle\") pod \"whisker-56bd87794-rh2gf\" (UID: \"0a1da6d4-f3ff-41f9-91a3-a68cb2c0e87e\") " pod="calico-system/whisker-56bd87794-rh2gf"
Apr 16 23:33:31.466168 kubelet[3624]: I0416 23:33:31.465967 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0a1da6d4-f3ff-41f9-91a3-a68cb2c0e87e-nginx-config\") pod \"whisker-56bd87794-rh2gf\" (UID: \"0a1da6d4-f3ff-41f9-91a3-a68cb2c0e87e\") " pod="calico-system/whisker-56bd87794-rh2gf"
Apr 16 23:33:31.466168 kubelet[3624]: I0416 23:33:31.466012 3624 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfxfd\" (UniqueName: \"kubernetes.io/projected/0a1da6d4-f3ff-41f9-91a3-a68cb2c0e87e-kube-api-access-zfxfd\") pod \"whisker-56bd87794-rh2gf\" (UID: \"0a1da6d4-f3ff-41f9-91a3-a68cb2c0e87e\") " pod="calico-system/whisker-56bd87794-rh2gf"
Apr 16 23:33:31.694345 containerd[2004]: time="2026-04-16T23:33:31.694206579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56bd87794-rh2gf,Uid:0a1da6d4-f3ff-41f9-91a3-a68cb2c0e87e,Namespace:calico-system,Attempt:0,}"
Apr 16 23:33:32.217613 systemd-networkd[1882]: calib7b47e11b78: Link UP
Apr 16 23:33:32.223068 systemd-networkd[1882]: calib7b47e11b78: Gained carrier
Apr 16 23:33:32.242219 (udev-worker)[4816]: Network interface NamePolicy= disabled on kernel command line.
Apr 16 23:33:32.289157 containerd[2004]: 2026-04-16 23:33:31.821 [ERROR][4789] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 16 23:33:32.289157 containerd[2004]: 2026-04-16 23:33:31.905 [INFO][4789] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--112-k8s-whisker--56bd87794--rh2gf-eth0 whisker-56bd87794- calico-system 0a1da6d4-f3ff-41f9-91a3-a68cb2c0e87e 962 0 2026-04-16 23:33:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:56bd87794 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-112 whisker-56bd87794-rh2gf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib7b47e11b78 [] [] }} ContainerID="9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" Namespace="calico-system" Pod="whisker-56bd87794-rh2gf" WorkloadEndpoint="ip--172--31--18--112-k8s-whisker--56bd87794--rh2gf-"
Apr 16 23:33:32.289157 containerd[2004]: 2026-04-16 23:33:31.905 [INFO][4789] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" Namespace="calico-system" Pod="whisker-56bd87794-rh2gf" WorkloadEndpoint="ip--172--31--18--112-k8s-whisker--56bd87794--rh2gf-eth0"
Apr 16 23:33:32.289157 containerd[2004]: 2026-04-16 23:33:32.053 [INFO][4803] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" HandleID="k8s-pod-network.9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" Workload="ip--172--31--18--112-k8s-whisker--56bd87794--rh2gf-eth0"
Apr 16 23:33:32.290671 containerd[2004]: 2026-04-16 23:33:32.076 [INFO][4803] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" HandleID="k8s-pod-network.9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" Workload="ip--172--31--18--112-k8s-whisker--56bd87794--rh2gf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042b460), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-112", "pod":"whisker-56bd87794-rh2gf", "timestamp":"2026-04-16 23:33:32.0532368 +0000 UTC"}, Hostname:"ip-172-31-18-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001506e0)}
Apr 16 23:33:32.290671 containerd[2004]: 2026-04-16 23:33:32.076 [INFO][4803] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 16 23:33:32.290671 containerd[2004]: 2026-04-16 23:33:32.076 [INFO][4803] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 16 23:33:32.290671 containerd[2004]: 2026-04-16 23:33:32.076 [INFO][4803] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-112'
Apr 16 23:33:32.290671 containerd[2004]: 2026-04-16 23:33:32.084 [INFO][4803] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" host="ip-172-31-18-112"
Apr 16 23:33:32.290671 containerd[2004]: 2026-04-16 23:33:32.107 [INFO][4803] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-112"
Apr 16 23:33:32.290671 containerd[2004]: 2026-04-16 23:33:32.134 [INFO][4803] ipam/ipam.go 526: Trying affinity for 192.168.118.0/26 host="ip-172-31-18-112"
Apr 16 23:33:32.290671 containerd[2004]: 2026-04-16 23:33:32.138 [INFO][4803] ipam/ipam.go 160: Attempting to load block cidr=192.168.118.0/26 host="ip-172-31-18-112"
Apr 16 23:33:32.290671 containerd[2004]: 2026-04-16 23:33:32.143 [INFO][4803] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.118.0/26 host="ip-172-31-18-112"
Apr 16 23:33:32.292702 containerd[2004]: 2026-04-16 23:33:32.143 [INFO][4803] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.118.0/26 handle="k8s-pod-network.9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" host="ip-172-31-18-112"
Apr 16 23:33:32.292702 containerd[2004]: 2026-04-16 23:33:32.145 [INFO][4803] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de
Apr 16 23:33:32.292702 containerd[2004]: 2026-04-16 23:33:32.153 [INFO][4803] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.118.0/26 handle="k8s-pod-network.9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" host="ip-172-31-18-112"
Apr 16 23:33:32.292702 containerd[2004]: 2026-04-16 23:33:32.164 [INFO][4803] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.118.1/26] block=192.168.118.0/26 handle="k8s-pod-network.9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" host="ip-172-31-18-112"
Apr 16 23:33:32.292702 containerd[2004]: 2026-04-16 23:33:32.165 [INFO][4803] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.118.1/26] handle="k8s-pod-network.9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" host="ip-172-31-18-112"
Apr 16 23:33:32.292702 containerd[2004]: 2026-04-16 23:33:32.165 [INFO][4803] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 16 23:33:32.292702 containerd[2004]: 2026-04-16 23:33:32.165 [INFO][4803] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.118.1/26] IPv6=[] ContainerID="9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" HandleID="k8s-pod-network.9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" Workload="ip--172--31--18--112-k8s-whisker--56bd87794--rh2gf-eth0"
Apr 16 23:33:32.293024 containerd[2004]: 2026-04-16 23:33:32.175 [INFO][4789] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" Namespace="calico-system" Pod="whisker-56bd87794-rh2gf" WorkloadEndpoint="ip--172--31--18--112-k8s-whisker--56bd87794--rh2gf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--112-k8s-whisker--56bd87794--rh2gf-eth0", GenerateName:"whisker-56bd87794-", Namespace:"calico-system", SelfLink:"", UID:"0a1da6d4-f3ff-41f9-91a3-a68cb2c0e87e", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56bd87794", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-112", ContainerID:"", Pod:"whisker-56bd87794-rh2gf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.118.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib7b47e11b78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 16 23:33:32.293024 containerd[2004]: 2026-04-16 23:33:32.175 [INFO][4789] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.1/32] ContainerID="9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" Namespace="calico-system" Pod="whisker-56bd87794-rh2gf" WorkloadEndpoint="ip--172--31--18--112-k8s-whisker--56bd87794--rh2gf-eth0"
Apr 16 23:33:32.296255 containerd[2004]: 2026-04-16 23:33:32.175 [INFO][4789] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7b47e11b78 ContainerID="9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" Namespace="calico-system" Pod="whisker-56bd87794-rh2gf" WorkloadEndpoint="ip--172--31--18--112-k8s-whisker--56bd87794--rh2gf-eth0"
Apr 16 23:33:32.296255 containerd[2004]: 2026-04-16 23:33:32.227 [INFO][4789] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" Namespace="calico-system" Pod="whisker-56bd87794-rh2gf" WorkloadEndpoint="ip--172--31--18--112-k8s-whisker--56bd87794--rh2gf-eth0"
Apr 16 23:33:32.296372 containerd[2004]: 2026-04-16 23:33:32.229 [INFO][4789] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" Namespace="calico-system" Pod="whisker-56bd87794-rh2gf" WorkloadEndpoint="ip--172--31--18--112-k8s-whisker--56bd87794--rh2gf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--112-k8s-whisker--56bd87794--rh2gf-eth0", GenerateName:"whisker-56bd87794-", Namespace:"calico-system", SelfLink:"", UID:"0a1da6d4-f3ff-41f9-91a3-a68cb2c0e87e", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56bd87794", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-112", ContainerID:"9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de", Pod:"whisker-56bd87794-rh2gf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.118.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib7b47e11b78", MAC:"7e:ff:18:e1:44:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 16 23:33:32.296553 containerd[2004]: 2026-04-16 23:33:32.269 [INFO][4789] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" Namespace="calico-system" Pod="whisker-56bd87794-rh2gf" WorkloadEndpoint="ip--172--31--18--112-k8s-whisker--56bd87794--rh2gf-eth0"
Apr 16 23:33:32.399815 containerd[2004]: time="2026-04-16T23:33:32.399631178Z" level=info msg="connecting to shim 9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de" address="unix:///run/containerd/s/452f63efa204e12292eab9c07ae5aee066cf567c89430343360c9d723d28d92f" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:33:32.496868 systemd[1]: Started cri-containerd-9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de.scope - libcontainer container 9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de.
Apr 16 23:33:32.653576 containerd[2004]: time="2026-04-16T23:33:32.653487087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56bd87794-rh2gf,Uid:0a1da6d4-f3ff-41f9-91a3-a68cb2c0e87e,Namespace:calico-system,Attempt:0,} returns sandbox id \"9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de\""
Apr 16 23:33:32.682099 containerd[2004]: time="2026-04-16T23:33:32.681692943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Apr 16 23:33:32.786918 kubelet[3624]: I0416 23:33:32.786779 3624 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="edc50944-d647-459a-a5b1-9d5cbd8a1ecf" path="/var/lib/kubelet/pods/edc50944-d647-459a-a5b1-9d5cbd8a1ecf/volumes"
Apr 16 23:33:33.410346 (udev-worker)[4815]: Network interface NamePolicy= disabled on kernel command line.
Apr 16 23:33:33.412044 systemd-networkd[1882]: vxlan.calico: Link UP
Apr 16 23:33:33.412053 systemd-networkd[1882]: vxlan.calico: Gained carrier
Apr 16 23:33:33.780367 systemd-networkd[1882]: calib7b47e11b78: Gained IPv6LL
Apr 16 23:33:34.089280 containerd[2004]: time="2026-04-16T23:33:34.088764170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:33:34.090383 containerd[2004]: time="2026-04-16T23:33:34.090191990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804"
Apr 16 23:33:34.091621 containerd[2004]: time="2026-04-16T23:33:34.091559594Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:33:34.096945 containerd[2004]: time="2026-04-16T23:33:34.096834663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:33:34.098609 containerd[2004]: time="2026-04-16T23:33:34.098389395Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.416622988s"
Apr 16 23:33:34.098609 containerd[2004]: time="2026-04-16T23:33:34.098474055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\""
Apr 16 23:33:34.109173 containerd[2004]: time="2026-04-16T23:33:34.108875055Z" level=info msg="CreateContainer within sandbox \"9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 16 23:33:34.122624 containerd[2004]: time="2026-04-16T23:33:34.121439835Z" level=info msg="Container ebd76898fb479a3121647cc9d1d2d0e651093e2983abe565dfedb16965326010: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:33:34.136052 containerd[2004]: time="2026-04-16T23:33:34.135994779Z" level=info msg="CreateContainer within sandbox \"9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ebd76898fb479a3121647cc9d1d2d0e651093e2983abe565dfedb16965326010\""
Apr 16 23:33:34.137651 containerd[2004]: time="2026-04-16T23:33:34.137570211Z" level=info msg="StartContainer for \"ebd76898fb479a3121647cc9d1d2d0e651093e2983abe565dfedb16965326010\""
Apr 16 23:33:34.140443 containerd[2004]: time="2026-04-16T23:33:34.140386647Z" level=info msg="connecting to shim ebd76898fb479a3121647cc9d1d2d0e651093e2983abe565dfedb16965326010" address="unix:///run/containerd/s/452f63efa204e12292eab9c07ae5aee066cf567c89430343360c9d723d28d92f" protocol=ttrpc version=3
Apr 16 23:33:34.181437 systemd[1]: Started cri-containerd-ebd76898fb479a3121647cc9d1d2d0e651093e2983abe565dfedb16965326010.scope - libcontainer container ebd76898fb479a3121647cc9d1d2d0e651093e2983abe565dfedb16965326010.
Apr 16 23:33:34.297063 containerd[2004]: time="2026-04-16T23:33:34.296969488Z" level=info msg="StartContainer for \"ebd76898fb479a3121647cc9d1d2d0e651093e2983abe565dfedb16965326010\" returns successfully"
Apr 16 23:33:34.301255 containerd[2004]: time="2026-04-16T23:33:34.300867088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 16 23:33:35.188390 systemd-networkd[1882]: vxlan.calico: Gained IPv6LL
Apr 16 23:33:35.949559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount801025632.mount: Deactivated successfully.
Apr 16 23:33:35.971250 containerd[2004]: time="2026-04-16T23:33:35.970484564Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:33:35.972548 containerd[2004]: time="2026-04-16T23:33:35.972498044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594"
Apr 16 23:33:35.972894 containerd[2004]: time="2026-04-16T23:33:35.972858896Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:33:35.978548 containerd[2004]: time="2026-04-16T23:33:35.978472892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 23:33:35.979812 containerd[2004]: time="2026-04-16T23:33:35.979724912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 1.678788752s"
Apr 16 23:33:35.979975 containerd[2004]: time="2026-04-16T23:33:35.979947332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\""
Apr 16 23:33:35.988723 containerd[2004]: time="2026-04-16T23:33:35.988657244Z" level=info msg="CreateContainer within sandbox \"9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 16 23:33:36.001006 containerd[2004]: time="2026-04-16T23:33:35.999057080Z" level=info msg="Container 8ce10487692a10ebb8601411c51c216962b5a06cd6bc7ea12cacdeed1eb15309: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:33:36.015498 containerd[2004]: time="2026-04-16T23:33:36.015368860Z" level=info msg="CreateContainer within sandbox \"9001e92f4fa7d03b67bd2b3f0fd1bee55ce7f1866737c8c1eca26f04699408de\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8ce10487692a10ebb8601411c51c216962b5a06cd6bc7ea12cacdeed1eb15309\""
Apr 16 23:33:36.017159 containerd[2004]: time="2026-04-16T23:33:36.016480936Z" level=info msg="StartContainer for \"8ce10487692a10ebb8601411c51c216962b5a06cd6bc7ea12cacdeed1eb15309\""
Apr 16 23:33:36.018886 containerd[2004]: time="2026-04-16T23:33:36.018818164Z" level=info msg="connecting to shim 8ce10487692a10ebb8601411c51c216962b5a06cd6bc7ea12cacdeed1eb15309" address="unix:///run/containerd/s/452f63efa204e12292eab9c07ae5aee066cf567c89430343360c9d723d28d92f" protocol=ttrpc version=3
Apr 16 23:33:36.059471 systemd[1]: Started cri-containerd-8ce10487692a10ebb8601411c51c216962b5a06cd6bc7ea12cacdeed1eb15309.scope - libcontainer container 8ce10487692a10ebb8601411c51c216962b5a06cd6bc7ea12cacdeed1eb15309.
Apr 16 23:33:36.152033 containerd[2004]: time="2026-04-16T23:33:36.151956557Z" level=info msg="StartContainer for \"8ce10487692a10ebb8601411c51c216962b5a06cd6bc7ea12cacdeed1eb15309\" returns successfully"
Apr 16 23:33:36.359500 kubelet[3624]: I0416 23:33:36.358246 3624 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-56bd87794-rh2gf" podStartSLOduration=2.055892113 podStartE2EDuration="5.358217166s" podCreationTimestamp="2026-04-16 23:33:31 +0000 UTC" firstStartedPulling="2026-04-16 23:33:32.679511451 +0000 UTC m=+50.221587154" lastFinishedPulling="2026-04-16 23:33:35.981836504 +0000 UTC m=+53.523912207" observedRunningTime="2026-04-16 23:33:36.35544831 +0000 UTC m=+53.897524013" watchObservedRunningTime="2026-04-16 23:33:36.358217166 +0000 UTC m=+53.900292881"
Apr 16 23:33:37.490666 ntpd[2206]: Listen normally on 6 vxlan.calico 192.168.118.0:123
Apr 16 23:33:37.490760 ntpd[2206]: Listen normally on 7 calib7b47e11b78 [fe80::ecee:eeff:feee:eeee%4]:123
Apr 16 23:33:37.491312 ntpd[2206]: 16 Apr 23:33:37 ntpd[2206]: Listen normally on 6 vxlan.calico 192.168.118.0:123
Apr 16 23:33:37.491312 ntpd[2206]: 16 Apr 23:33:37 ntpd[2206]: Listen normally on 7 calib7b47e11b78 [fe80::ecee:eeff:feee:eeee%4]:123
Apr 16 23:33:37.491312 ntpd[2206]: 16 Apr 23:33:37 ntpd[2206]: Listen normally on 8 vxlan.calico [fe80::648b:ffff:fe06:6d24%5]:123
Apr 16 23:33:37.490809 ntpd[2206]: Listen normally on 8 vxlan.calico [fe80::648b:ffff:fe06:6d24%5]:123
Apr 16 23:33:40.783389 containerd[2004]: time="2026-04-16T23:33:40.783177120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-w6n45,Uid:cff0eaa9-a828-4135-903b-d58ada095053,Namespace:kube-system,Attempt:0,}"
Apr 16 23:33:41.014578 systemd-networkd[1882]: cali6fe00bc0642: Link UP
Apr 16 23:33:41.017230 systemd-networkd[1882]: cali6fe00bc0642: Gained carrier
Apr 16 23:33:41.021686 (udev-worker)[5101]: Network interface NamePolicy= disabled on kernel command line.
Apr 16 23:33:41.048243 containerd[2004]: 2026-04-16 23:33:40.868 [INFO][5081] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--112-k8s-coredns--7d764666f9--w6n45-eth0 coredns-7d764666f9- kube-system cff0eaa9-a828-4135-903b-d58ada095053 903 0 2026-04-16 23:32:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-112 coredns-7d764666f9-w6n45 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6fe00bc0642 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" Namespace="kube-system" Pod="coredns-7d764666f9-w6n45" WorkloadEndpoint="ip--172--31--18--112-k8s-coredns--7d764666f9--w6n45-"
Apr 16 23:33:41.048243 containerd[2004]: 2026-04-16 23:33:40.868 [INFO][5081] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" Namespace="kube-system" Pod="coredns-7d764666f9-w6n45" WorkloadEndpoint="ip--172--31--18--112-k8s-coredns--7d764666f9--w6n45-eth0"
Apr 16 23:33:41.048243 containerd[2004]: 2026-04-16 23:33:40.927 [INFO][5094] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" HandleID="k8s-pod-network.325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" Workload="ip--172--31--18--112-k8s-coredns--7d764666f9--w6n45-eth0"
Apr 16 23:33:41.048563 containerd[2004]: 2026-04-16 23:33:40.944 [INFO][5094] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" HandleID="k8s-pod-network.325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" Workload="ip--172--31--18--112-k8s-coredns--7d764666f9--w6n45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000273350), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-112", "pod":"coredns-7d764666f9-w6n45", "timestamp":"2026-04-16 23:33:40.927797952 +0000 UTC"}, Hostname:"ip-172-31-18-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000450f20)}
Apr 16 23:33:41.048563 containerd[2004]: 2026-04-16 23:33:40.944 [INFO][5094] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 16 23:33:41.048563 containerd[2004]: 2026-04-16 23:33:40.945 [INFO][5094] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 16 23:33:41.048563 containerd[2004]: 2026-04-16 23:33:40.945 [INFO][5094] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-112'
Apr 16 23:33:41.048563 containerd[2004]: 2026-04-16 23:33:40.949 [INFO][5094] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" host="ip-172-31-18-112"
Apr 16 23:33:41.048563 containerd[2004]: 2026-04-16 23:33:40.959 [INFO][5094] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-112"
Apr 16 23:33:41.048563 containerd[2004]: 2026-04-16 23:33:40.969 [INFO][5094] ipam/ipam.go 526: Trying affinity for 192.168.118.0/26 host="ip-172-31-18-112"
Apr 16 23:33:41.048563 containerd[2004]: 2026-04-16 23:33:40.973 [INFO][5094] ipam/ipam.go 160: Attempting to load block cidr=192.168.118.0/26 host="ip-172-31-18-112"
Apr 16 23:33:41.048563 containerd[2004]: 2026-04-16 23:33:40.977 [INFO][5094] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.118.0/26 host="ip-172-31-18-112"
Apr 16 23:33:41.051886 containerd[2004]: 2026-04-16 23:33:40.978 [INFO][5094] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.118.0/26 handle="k8s-pod-network.325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" host="ip-172-31-18-112"
Apr 16 23:33:41.051886 containerd[2004]: 2026-04-16 23:33:40.981 [INFO][5094] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947
Apr 16 23:33:41.051886 containerd[2004]: 2026-04-16 23:33:40.988 [INFO][5094] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.118.0/26 handle="k8s-pod-network.325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" host="ip-172-31-18-112"
Apr 16 23:33:41.051886 containerd[2004]: 2026-04-16 23:33:41.002 [INFO][5094] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.118.2/26] block=192.168.118.0/26 handle="k8s-pod-network.325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" host="ip-172-31-18-112"
Apr 16 23:33:41.051886 containerd[2004]: 2026-04-16 23:33:41.003 [INFO][5094] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.118.2/26] handle="k8s-pod-network.325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" host="ip-172-31-18-112"
Apr 16 23:33:41.051886 containerd[2004]: 2026-04-16 23:33:41.003 [INFO][5094] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 16 23:33:41.051886 containerd[2004]: 2026-04-16 23:33:41.003 [INFO][5094] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.118.2/26] IPv6=[] ContainerID="325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" HandleID="k8s-pod-network.325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" Workload="ip--172--31--18--112-k8s-coredns--7d764666f9--w6n45-eth0"
Apr 16 23:33:41.052281 containerd[2004]: 2026-04-16 23:33:41.008 [INFO][5081] cni-plugin/k8s.go 418: Populated endpoint ContainerID="325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" Namespace="kube-system" Pod="coredns-7d764666f9-w6n45" WorkloadEndpoint="ip--172--31--18--112-k8s-coredns--7d764666f9--w6n45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--112-k8s-coredns--7d764666f9--w6n45-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"cff0eaa9-a828-4135-903b-d58ada095053", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 32, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-112", ContainerID:"", Pod:"coredns-7d764666f9-w6n45", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fe00bc0642", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 16 23:33:41.052281 containerd[2004]: 2026-04-16 23:33:41.008 [INFO][5081] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.2/32] ContainerID="325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" Namespace="kube-system" Pod="coredns-7d764666f9-w6n45" WorkloadEndpoint="ip--172--31--18--112-k8s-coredns--7d764666f9--w6n45-eth0"
Apr 16 23:33:41.052281 containerd[2004]: 2026-04-16 23:33:41.009 [INFO][5081] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6fe00bc0642 ContainerID="325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" Namespace="kube-system" Pod="coredns-7d764666f9-w6n45" WorkloadEndpoint="ip--172--31--18--112-k8s-coredns--7d764666f9--w6n45-eth0"
Apr 16 23:33:41.052281 containerd[2004]: 2026-04-16 23:33:41.017 [INFO][5081] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" Namespace="kube-system" Pod="coredns-7d764666f9-w6n45" WorkloadEndpoint="ip--172--31--18--112-k8s-coredns--7d764666f9--w6n45-eth0"
Apr 16 23:33:41.052281 containerd[2004]: 2026-04-16 23:33:41.020 [INFO][5081] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" Namespace="kube-system" Pod="coredns-7d764666f9-w6n45" WorkloadEndpoint="ip--172--31--18--112-k8s-coredns--7d764666f9--w6n45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--112-k8s-coredns--7d764666f9--w6n45-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"cff0eaa9-a828-4135-903b-d58ada095053", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 32, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-112", ContainerID:"325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947", Pod:"coredns-7d764666f9-w6n45", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fe00bc0642", MAC:"3a:5f:3b:0d:85:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 16 23:33:41.052281 containerd[2004]: 2026-04-16 23:33:41.042 [INFO][5081] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" Namespace="kube-system" Pod="coredns-7d764666f9-w6n45" WorkloadEndpoint="ip--172--31--18--112-k8s-coredns--7d764666f9--w6n45-eth0"
Apr 16 23:33:41.104552 containerd[2004]: time="2026-04-16T23:33:41.104462277Z" level=info msg="connecting to shim 325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947" address="unix:///run/containerd/s/b9fc8df5f66e53bdcfe710d3c46e7da61f3eb57ac81bdffa6a6043efb31a0f45" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:33:41.179831 systemd[1]: Started cri-containerd-325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947.scope - libcontainer container 325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947.
Apr 16 23:33:41.310375 containerd[2004]: time="2026-04-16T23:33:41.310075522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-w6n45,Uid:cff0eaa9-a828-4135-903b-d58ada095053,Namespace:kube-system,Attempt:0,} returns sandbox id \"325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947\""
Apr 16 23:33:41.321743 containerd[2004]: time="2026-04-16T23:33:41.321692314Z" level=info msg="CreateContainer within sandbox \"325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 16 23:33:41.339998 containerd[2004]: time="2026-04-16T23:33:41.338861446Z" level=info msg="Container d18cb7be07e1222f5dac84d3e0c1b590435ae3157f7cef0c811bd5d7ef958282: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:33:41.356453 containerd[2004]: time="2026-04-16T23:33:41.356372567Z" level=info msg="CreateContainer within sandbox \"325080ea72b8c1468c98a75f2a3f1e06183b9c7f56adc0b547b898dbde418947\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d18cb7be07e1222f5dac84d3e0c1b590435ae3157f7cef0c811bd5d7ef958282\""
Apr 16 23:33:41.357852 containerd[2004]: time="2026-04-16T23:33:41.357787043Z" level=info msg="StartContainer for \"d18cb7be07e1222f5dac84d3e0c1b590435ae3157f7cef0c811bd5d7ef958282\""
Apr 16 23:33:41.362220 containerd[2004]: time="2026-04-16T23:33:41.361923527Z" level=info msg="connecting to shim d18cb7be07e1222f5dac84d3e0c1b590435ae3157f7cef0c811bd5d7ef958282" address="unix:///run/containerd/s/b9fc8df5f66e53bdcfe710d3c46e7da61f3eb57ac81bdffa6a6043efb31a0f45" protocol=ttrpc version=3
Apr 16 23:33:41.395454 systemd[1]: Started cri-containerd-d18cb7be07e1222f5dac84d3e0c1b590435ae3157f7cef0c811bd5d7ef958282.scope - libcontainer container d18cb7be07e1222f5dac84d3e0c1b590435ae3157f7cef0c811bd5d7ef958282.
Apr 16 23:33:41.474673 containerd[2004]: time="2026-04-16T23:33:41.474325283Z" level=info msg="StartContainer for \"d18cb7be07e1222f5dac84d3e0c1b590435ae3157f7cef0c811bd5d7ef958282\" returns successfully"
Apr 16 23:33:41.785071 containerd[2004]: time="2026-04-16T23:33:41.784965433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc4b8bb6b-shv5x,Uid:e941b08a-f79d-4a10-9fb6-55e6ada439aa,Namespace:calico-system,Attempt:0,}"
Apr 16 23:33:41.797282 containerd[2004]: time="2026-04-16T23:33:41.794974477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-694f584b75-mmxmb,Uid:9fb86f07-5b07-499c-862c-aa8f0e5e95c3,Namespace:calico-system,Attempt:0,}"
Apr 16 23:33:41.810191 containerd[2004]: time="2026-04-16T23:33:41.803577637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-vqhkd,Uid:2177fb1d-6bc7-4a5b-a326-d8cb2233c432,Namespace:calico-system,Attempt:0,}"
Apr 16 23:33:42.201042 systemd-networkd[1882]: calic70e5e1a2ec: Link UP
Apr 16 23:33:42.206419 systemd-networkd[1882]: calic70e5e1a2ec: Gained carrier
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:41.978 [INFO][5210] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--112-k8s-goldmane--9f7667bb8--vqhkd-eth0 goldmane-9f7667bb8- calico-system 2177fb1d-6bc7-4a5b-a326-d8cb2233c432 905 0 2026-04-16 23:33:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-18-112 goldmane-9f7667bb8-vqhkd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic70e5e1a2ec [] [] }} ContainerID="ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" Namespace="calico-system" Pod="goldmane-9f7667bb8-vqhkd" WorkloadEndpoint="ip--172--31--18--112-k8s-goldmane--9f7667bb8--vqhkd-"
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:41.979 [INFO][5210] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" Namespace="calico-system" Pod="goldmane-9f7667bb8-vqhkd" WorkloadEndpoint="ip--172--31--18--112-k8s-goldmane--9f7667bb8--vqhkd-eth0"
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.076 [INFO][5239] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" HandleID="k8s-pod-network.ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" Workload="ip--172--31--18--112-k8s-goldmane--9f7667bb8--vqhkd-eth0"
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.114 [INFO][5239] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" HandleID="k8s-pod-network.ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" Workload="ip--172--31--18--112-k8s-goldmane--9f7667bb8--vqhkd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003809c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-112", "pod":"goldmane-9f7667bb8-vqhkd", "timestamp":"2026-04-16 23:33:42.076979494 +0000 UTC"}, Hostname:"ip-172-31-18-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000432c60)}
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.114 [INFO][5239] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.115 [INFO][5239] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.115 [INFO][5239] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-112'
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.121 [INFO][5239] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" host="ip-172-31-18-112"
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.134 [INFO][5239] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-112"
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.151 [INFO][5239] ipam/ipam.go 526: Trying affinity for 192.168.118.0/26 host="ip-172-31-18-112"
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.156 [INFO][5239] ipam/ipam.go 160: Attempting to load block cidr=192.168.118.0/26 host="ip-172-31-18-112"
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.161 [INFO][5239] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.118.0/26 host="ip-172-31-18-112"
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.161 [INFO][5239] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.118.0/26 handle="k8s-pod-network.ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" host="ip-172-31-18-112"
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.164 [INFO][5239] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.179 [INFO][5239] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.118.0/26 handle="k8s-pod-network.ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" host="ip-172-31-18-112"
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.187 [INFO][5239] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.118.3/26] block=192.168.118.0/26 handle="k8s-pod-network.ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" host="ip-172-31-18-112"
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.187 [INFO][5239] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.118.3/26] handle="k8s-pod-network.ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" host="ip-172-31-18-112"
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.188 [INFO][5239] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 16 23:33:42.250920 containerd[2004]: 2026-04-16 23:33:42.188 [INFO][5239] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.118.3/26] IPv6=[] ContainerID="ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" HandleID="k8s-pod-network.ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" Workload="ip--172--31--18--112-k8s-goldmane--9f7667bb8--vqhkd-eth0"
Apr 16 23:33:42.256327 containerd[2004]: 2026-04-16 23:33:42.193 [INFO][5210] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" Namespace="calico-system" Pod="goldmane-9f7667bb8-vqhkd" WorkloadEndpoint="ip--172--31--18--112-k8s-goldmane--9f7667bb8--vqhkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--112-k8s-goldmane--9f7667bb8--vqhkd-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"2177fb1d-6bc7-4a5b-a326-d8cb2233c432", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-112", ContainerID:"", Pod:"goldmane-9f7667bb8-vqhkd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.118.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic70e5e1a2ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 16 23:33:42.256327 containerd[2004]: 2026-04-16 23:33:42.194 [INFO][5210] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.3/32] ContainerID="ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" Namespace="calico-system" Pod="goldmane-9f7667bb8-vqhkd" WorkloadEndpoint="ip--172--31--18--112-k8s-goldmane--9f7667bb8--vqhkd-eth0"
Apr 16 23:33:42.256327 containerd[2004]: 2026-04-16 23:33:42.194 [INFO][5210] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic70e5e1a2ec ContainerID="ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" Namespace="calico-system" Pod="goldmane-9f7667bb8-vqhkd" WorkloadEndpoint="ip--172--31--18--112-k8s-goldmane--9f7667bb8--vqhkd-eth0"
Apr 16 23:33:42.256327 containerd[2004]: 2026-04-16 23:33:42.209 [INFO][5210] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" Namespace="calico-system" Pod="goldmane-9f7667bb8-vqhkd" WorkloadEndpoint="ip--172--31--18--112-k8s-goldmane--9f7667bb8--vqhkd-eth0"
Apr 16 23:33:42.256327 containerd[2004]: 2026-04-16 23:33:42.211 [INFO][5210] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" Namespace="calico-system" Pod="goldmane-9f7667bb8-vqhkd" WorkloadEndpoint="ip--172--31--18--112-k8s-goldmane--9f7667bb8--vqhkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--112-k8s-goldmane--9f7667bb8--vqhkd-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"2177fb1d-6bc7-4a5b-a326-d8cb2233c432", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-112", ContainerID:"ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435", Pod:"goldmane-9f7667bb8-vqhkd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.118.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic70e5e1a2ec", MAC:"7e:99:53:b4:d7:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 16 23:33:42.256327 containerd[2004]: 2026-04-16 23:33:42.241 [INFO][5210] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" Namespace="calico-system" Pod="goldmane-9f7667bb8-vqhkd" WorkloadEndpoint="ip--172--31--18--112-k8s-goldmane--9f7667bb8--vqhkd-eth0"
Apr 16 23:33:42.349343 containerd[2004]: time="2026-04-16T23:33:42.347849927Z" level=info msg="connecting to shim ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435" address="unix:///run/containerd/s/cecc6804c983d854b97d1f4ecc9e6b78a8cded219e24e7a7593087062c38a99d" namespace=k8s.io protocol=ttrpc version=3
Apr 16 23:33:42.348964 systemd-networkd[1882]: cali4cc73b3ddb0: Link UP
Apr 16 23:33:42.353274 systemd-networkd[1882]: cali4cc73b3ddb0: Gained carrier
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:41.941 [INFO][5215] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--112-k8s-calico--kube--controllers--694f584b75--mmxmb-eth0 calico-kube-controllers-694f584b75- calico-system 9fb86f07-5b07-499c-862c-aa8f0e5e95c3 900 0 2026-04-16 23:33:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:694f584b75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-112 calico-kube-controllers-694f584b75-mmxmb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4cc73b3ddb0 [] [] }} ContainerID="df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" Namespace="calico-system" Pod="calico-kube-controllers-694f584b75-mmxmb" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--kube--controllers--694f584b75--mmxmb-"
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:41.942 [INFO][5215] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" Namespace="calico-system" Pod="calico-kube-controllers-694f584b75-mmxmb" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--kube--controllers--694f584b75--mmxmb-eth0"
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.068 [INFO][5234] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" HandleID="k8s-pod-network.df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" Workload="ip--172--31--18--112-k8s-calico--kube--controllers--694f584b75--mmxmb-eth0"
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.119 [INFO][5234] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" HandleID="k8s-pod-network.df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" Workload="ip--172--31--18--112-k8s-calico--kube--controllers--694f584b75--mmxmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aa190), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-112", "pod":"calico-kube-controllers-694f584b75-mmxmb", "timestamp":"2026-04-16 23:33:42.068284714 +0000 UTC"}, Hostname:"ip-172-31-18-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002aadc0)}
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.120 [INFO][5234] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.188 [INFO][5234] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.188 [INFO][5234] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-112'
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.225 [INFO][5234] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" host="ip-172-31-18-112"
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.234 [INFO][5234] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-112"
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.254 [INFO][5234] ipam/ipam.go 526: Trying affinity for 192.168.118.0/26 host="ip-172-31-18-112"
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.263 [INFO][5234] ipam/ipam.go 160: Attempting to load block cidr=192.168.118.0/26 host="ip-172-31-18-112"
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.271 [INFO][5234] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.118.0/26 host="ip-172-31-18-112"
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.271 [INFO][5234] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.118.0/26 handle="k8s-pod-network.df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" host="ip-172-31-18-112"
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.275 [INFO][5234] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.297 [INFO][5234] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.118.0/26 handle="k8s-pod-network.df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" host="ip-172-31-18-112"
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.313 [INFO][5234] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.118.4/26] block=192.168.118.0/26 handle="k8s-pod-network.df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" host="ip-172-31-18-112"
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.313 [INFO][5234] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.118.4/26] handle="k8s-pod-network.df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" host="ip-172-31-18-112"
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.313 [INFO][5234] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 16 23:33:42.432630 containerd[2004]: 2026-04-16 23:33:42.316 [INFO][5234] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.118.4/26] IPv6=[] ContainerID="df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" HandleID="k8s-pod-network.df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" Workload="ip--172--31--18--112-k8s-calico--kube--controllers--694f584b75--mmxmb-eth0"
Apr 16 23:33:42.437546 containerd[2004]: 2026-04-16 23:33:42.332 [INFO][5215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" Namespace="calico-system" Pod="calico-kube-controllers-694f584b75-mmxmb" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--kube--controllers--694f584b75--mmxmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--112-k8s-calico--kube--controllers--694f584b75--mmxmb-eth0", GenerateName:"calico-kube-controllers-694f584b75-", Namespace:"calico-system", SelfLink:"", UID:"9fb86f07-5b07-499c-862c-aa8f0e5e95c3", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"694f584b75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-112", ContainerID:"", Pod:"calico-kube-controllers-694f584b75-mmxmb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4cc73b3ddb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 16 23:33:42.437546 containerd[2004]: 2026-04-16 23:33:42.333 [INFO][5215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.4/32] ContainerID="df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" Namespace="calico-system" Pod="calico-kube-controllers-694f584b75-mmxmb" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--kube--controllers--694f584b75--mmxmb-eth0"
Apr 16 23:33:42.437546 containerd[2004]: 2026-04-16 23:33:42.333 [INFO][5215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4cc73b3ddb0 ContainerID="df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" Namespace="calico-system" Pod="calico-kube-controllers-694f584b75-mmxmb" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--kube--controllers--694f584b75--mmxmb-eth0"
Apr 16 23:33:42.437546 containerd[2004]: 2026-04-16 23:33:42.361 [INFO][5215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" Namespace="calico-system" Pod="calico-kube-controllers-694f584b75-mmxmb"
WorkloadEndpoint="ip--172--31--18--112-k8s-calico--kube--controllers--694f584b75--mmxmb-eth0" Apr 16 23:33:42.437546 containerd[2004]: 2026-04-16 23:33:42.376 [INFO][5215] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" Namespace="calico-system" Pod="calico-kube-controllers-694f584b75-mmxmb" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--kube--controllers--694f584b75--mmxmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--112-k8s-calico--kube--controllers--694f584b75--mmxmb-eth0", GenerateName:"calico-kube-controllers-694f584b75-", Namespace:"calico-system", SelfLink:"", UID:"9fb86f07-5b07-499c-862c-aa8f0e5e95c3", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"694f584b75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-112", ContainerID:"df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f", Pod:"calico-kube-controllers-694f584b75-mmxmb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4cc73b3ddb0", 
MAC:"1a:61:6e:9a:0a:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:42.437546 containerd[2004]: 2026-04-16 23:33:42.411 [INFO][5215] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" Namespace="calico-system" Pod="calico-kube-controllers-694f584b75-mmxmb" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--kube--controllers--694f584b75--mmxmb-eth0" Apr 16 23:33:42.493554 systemd[1]: Started cri-containerd-ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435.scope - libcontainer container ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435. Apr 16 23:33:42.530547 kubelet[3624]: I0416 23:33:42.530427 3624 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-w6n45" podStartSLOduration=56.530404332 podStartE2EDuration="56.530404332s" podCreationTimestamp="2026-04-16 23:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:33:42.460790676 +0000 UTC m=+60.002866403" watchObservedRunningTime="2026-04-16 23:33:42.530404332 +0000 UTC m=+60.072480023" Apr 16 23:33:42.548491 systemd-networkd[1882]: cali6fe00bc0642: Gained IPv6LL Apr 16 23:33:42.562517 containerd[2004]: time="2026-04-16T23:33:42.562445221Z" level=info msg="connecting to shim df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f" address="unix:///run/containerd/s/8e5fde24335d31d23b6341e73c9def4fe7538b97cf2d062d6e3f991fd5c63da3" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:33:42.581706 systemd-networkd[1882]: calif007c19ac1d: Link UP Apr 16 23:33:42.589565 systemd-networkd[1882]: calif007c19ac1d: Gained carrier Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:41.996 [INFO][5200] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--shv5x-eth0 calico-apiserver-7fc4b8bb6b- calico-system e941b08a-f79d-4a10-9fb6-55e6ada439aa 904 0 2026-04-16 23:33:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fc4b8bb6b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-112 calico-apiserver-7fc4b8bb6b-shv5x eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif007c19ac1d [] [] }} ContainerID="8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" Namespace="calico-system" Pod="calico-apiserver-7fc4b8bb6b-shv5x" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--shv5x-" Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:41.996 [INFO][5200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" Namespace="calico-system" Pod="calico-apiserver-7fc4b8bb6b-shv5x" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--shv5x-eth0" Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:42.121 [INFO][5244] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" HandleID="k8s-pod-network.8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" Workload="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--shv5x-eth0" Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:42.153 [INFO][5244] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" HandleID="k8s-pod-network.8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" 
Workload="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--shv5x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003dc1d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-112", "pod":"calico-apiserver-7fc4b8bb6b-shv5x", "timestamp":"2026-04-16 23:33:42.12145801 +0000 UTC"}, Hostname:"ip-172-31-18-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002eda20)} Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:42.153 [INFO][5244] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:42.314 [INFO][5244] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:42.314 [INFO][5244] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-112' Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:42.328 [INFO][5244] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" host="ip-172-31-18-112" Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:42.387 [INFO][5244] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-112" Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:42.434 [INFO][5244] ipam/ipam.go 526: Trying affinity for 192.168.118.0/26 host="ip-172-31-18-112" Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:42.445 [INFO][5244] ipam/ipam.go 160: Attempting to load block cidr=192.168.118.0/26 host="ip-172-31-18-112" Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:42.463 [INFO][5244] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.118.0/26 host="ip-172-31-18-112" Apr 16 23:33:42.651904 
containerd[2004]: 2026-04-16 23:33:42.463 [INFO][5244] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.118.0/26 handle="k8s-pod-network.8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" host="ip-172-31-18-112" Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:42.482 [INFO][5244] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:42.516 [INFO][5244] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.118.0/26 handle="k8s-pod-network.8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" host="ip-172-31-18-112" Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:42.543 [INFO][5244] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.118.5/26] block=192.168.118.0/26 handle="k8s-pod-network.8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" host="ip-172-31-18-112" Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:42.544 [INFO][5244] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.118.5/26] handle="k8s-pod-network.8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" host="ip-172-31-18-112" Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:42.546 [INFO][5244] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 23:33:42.651904 containerd[2004]: 2026-04-16 23:33:42.546 [INFO][5244] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.118.5/26] IPv6=[] ContainerID="8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" HandleID="k8s-pod-network.8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" Workload="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--shv5x-eth0" Apr 16 23:33:42.653293 containerd[2004]: 2026-04-16 23:33:42.571 [INFO][5200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" Namespace="calico-system" Pod="calico-apiserver-7fc4b8bb6b-shv5x" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--shv5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--shv5x-eth0", GenerateName:"calico-apiserver-7fc4b8bb6b-", Namespace:"calico-system", SelfLink:"", UID:"e941b08a-f79d-4a10-9fb6-55e6ada439aa", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fc4b8bb6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-112", ContainerID:"", Pod:"calico-apiserver-7fc4b8bb6b-shv5x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.5/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif007c19ac1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:42.653293 containerd[2004]: 2026-04-16 23:33:42.571 [INFO][5200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.5/32] ContainerID="8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" Namespace="calico-system" Pod="calico-apiserver-7fc4b8bb6b-shv5x" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--shv5x-eth0" Apr 16 23:33:42.653293 containerd[2004]: 2026-04-16 23:33:42.571 [INFO][5200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif007c19ac1d ContainerID="8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" Namespace="calico-system" Pod="calico-apiserver-7fc4b8bb6b-shv5x" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--shv5x-eth0" Apr 16 23:33:42.653293 containerd[2004]: 2026-04-16 23:33:42.586 [INFO][5200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" Namespace="calico-system" Pod="calico-apiserver-7fc4b8bb6b-shv5x" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--shv5x-eth0" Apr 16 23:33:42.653293 containerd[2004]: 2026-04-16 23:33:42.595 [INFO][5200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" Namespace="calico-system" Pod="calico-apiserver-7fc4b8bb6b-shv5x" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--shv5x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--shv5x-eth0", GenerateName:"calico-apiserver-7fc4b8bb6b-", Namespace:"calico-system", SelfLink:"", UID:"e941b08a-f79d-4a10-9fb6-55e6ada439aa", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fc4b8bb6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-112", ContainerID:"8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a", Pod:"calico-apiserver-7fc4b8bb6b-shv5x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif007c19ac1d", MAC:"5e:cf:7d:b5:43:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:42.653293 containerd[2004]: 2026-04-16 23:33:42.629 [INFO][5200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" Namespace="calico-system" Pod="calico-apiserver-7fc4b8bb6b-shv5x" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--shv5x-eth0" Apr 16 23:33:42.714160 systemd[1]: Started cri-containerd-df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f.scope - libcontainer container 
df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f. Apr 16 23:33:42.730642 containerd[2004]: time="2026-04-16T23:33:42.728396809Z" level=info msg="connecting to shim 8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a" address="unix:///run/containerd/s/00e6b60e14b9b87f8e171806ae929aab10f8a5d5d2f76825909cb3f3e69748dd" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:33:42.801617 containerd[2004]: time="2026-04-16T23:33:42.801464954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc4b8bb6b-mcbtl,Uid:f58985e8-82ab-45c7-a1da-3382768dd37c,Namespace:calico-system,Attempt:0,}" Apr 16 23:33:42.821899 containerd[2004]: time="2026-04-16T23:33:42.818852546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xx9nj,Uid:a71fac77-981f-4b75-9304-8c3a33a51180,Namespace:calico-system,Attempt:0,}" Apr 16 23:33:42.886810 systemd[1]: Started cri-containerd-8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a.scope - libcontainer container 8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a. 
Apr 16 23:33:42.982591 containerd[2004]: time="2026-04-16T23:33:42.982516239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-vqhkd,Uid:2177fb1d-6bc7-4a5b-a326-d8cb2233c432,Namespace:calico-system,Attempt:0,} returns sandbox id \"ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435\"" Apr 16 23:33:42.994063 containerd[2004]: time="2026-04-16T23:33:42.993460527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 16 23:33:43.277068 containerd[2004]: time="2026-04-16T23:33:43.276909420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-694f584b75-mmxmb,Uid:9fb86f07-5b07-499c-862c-aa8f0e5e95c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f\"" Apr 16 23:33:43.509952 containerd[2004]: time="2026-04-16T23:33:43.509880781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc4b8bb6b-shv5x,Uid:e941b08a-f79d-4a10-9fb6-55e6ada439aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a\"" Apr 16 23:33:43.541763 systemd[1]: Started sshd@7-172.31.18.112:22-20.229.252.112:52260.service - OpenSSH per-connection server daemon (20.229.252.112:52260). 
Apr 16 23:33:43.677738 systemd-networkd[1882]: cali64611947fb0: Link UP Apr 16 23:33:43.678769 systemd-networkd[1882]: cali64611947fb0: Gained carrier Apr 16 23:33:43.701470 systemd-networkd[1882]: calif007c19ac1d: Gained IPv6LL Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.147 [INFO][5403] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--112-k8s-csi--node--driver--xx9nj-eth0 csi-node-driver- calico-system a71fac77-981f-4b75-9304-8c3a33a51180 769 0 2026-04-16 23:33:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-112 csi-node-driver-xx9nj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali64611947fb0 [] [] }} ContainerID="8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" Namespace="calico-system" Pod="csi-node-driver-xx9nj" WorkloadEndpoint="ip--172--31--18--112-k8s-csi--node--driver--xx9nj-" Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.147 [INFO][5403] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" Namespace="calico-system" Pod="csi-node-driver-xx9nj" WorkloadEndpoint="ip--172--31--18--112-k8s-csi--node--driver--xx9nj-eth0" Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.306 [INFO][5449] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" HandleID="k8s-pod-network.8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" Workload="ip--172--31--18--112-k8s-csi--node--driver--xx9nj-eth0" Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 
23:33:43.371 [INFO][5449] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" HandleID="k8s-pod-network.8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" Workload="ip--172--31--18--112-k8s-csi--node--driver--xx9nj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ce0e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-112", "pod":"csi-node-driver-xx9nj", "timestamp":"2026-04-16 23:33:43.306468 +0000 UTC"}, Hostname:"ip-172-31-18-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002c2160)} Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.372 [INFO][5449] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.372 [INFO][5449] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.374 [INFO][5449] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-112' Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.421 [INFO][5449] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" host="ip-172-31-18-112" Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.484 [INFO][5449] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-112" Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.531 [INFO][5449] ipam/ipam.go 526: Trying affinity for 192.168.118.0/26 host="ip-172-31-18-112" Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.550 [INFO][5449] ipam/ipam.go 160: Attempting to load block cidr=192.168.118.0/26 host="ip-172-31-18-112" Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.597 [INFO][5449] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.118.0/26 host="ip-172-31-18-112" Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.599 [INFO][5449] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.118.0/26 handle="k8s-pod-network.8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" host="ip-172-31-18-112" Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.612 [INFO][5449] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2 Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.629 [INFO][5449] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.118.0/26 handle="k8s-pod-network.8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" host="ip-172-31-18-112" Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.647 [INFO][5449] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.118.6/26] block=192.168.118.0/26 
handle="k8s-pod-network.8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" host="ip-172-31-18-112" Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.648 [INFO][5449] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.118.6/26] handle="k8s-pod-network.8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" host="ip-172-31-18-112" Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.648 [INFO][5449] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:33:43.734573 containerd[2004]: 2026-04-16 23:33:43.649 [INFO][5449] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.118.6/26] IPv6=[] ContainerID="8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" HandleID="k8s-pod-network.8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" Workload="ip--172--31--18--112-k8s-csi--node--driver--xx9nj-eth0" Apr 16 23:33:43.737534 containerd[2004]: 2026-04-16 23:33:43.661 [INFO][5403] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" Namespace="calico-system" Pod="csi-node-driver-xx9nj" WorkloadEndpoint="ip--172--31--18--112-k8s-csi--node--driver--xx9nj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--112-k8s-csi--node--driver--xx9nj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a71fac77-981f-4b75-9304-8c3a33a51180", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-112", ContainerID:"", Pod:"csi-node-driver-xx9nj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.118.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali64611947fb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:43.737534 containerd[2004]: 2026-04-16 23:33:43.662 [INFO][5403] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.6/32] ContainerID="8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" Namespace="calico-system" Pod="csi-node-driver-xx9nj" WorkloadEndpoint="ip--172--31--18--112-k8s-csi--node--driver--xx9nj-eth0" Apr 16 23:33:43.737534 containerd[2004]: 2026-04-16 23:33:43.662 [INFO][5403] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64611947fb0 ContainerID="8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" Namespace="calico-system" Pod="csi-node-driver-xx9nj" WorkloadEndpoint="ip--172--31--18--112-k8s-csi--node--driver--xx9nj-eth0" Apr 16 23:33:43.737534 containerd[2004]: 2026-04-16 23:33:43.683 [INFO][5403] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" Namespace="calico-system" Pod="csi-node-driver-xx9nj" WorkloadEndpoint="ip--172--31--18--112-k8s-csi--node--driver--xx9nj-eth0" Apr 16 23:33:43.737534 containerd[2004]: 2026-04-16 23:33:43.686 [INFO][5403] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" Namespace="calico-system" Pod="csi-node-driver-xx9nj" WorkloadEndpoint="ip--172--31--18--112-k8s-csi--node--driver--xx9nj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--112-k8s-csi--node--driver--xx9nj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a71fac77-981f-4b75-9304-8c3a33a51180", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-112", ContainerID:"8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2", Pod:"csi-node-driver-xx9nj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.118.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali64611947fb0", MAC:"a2:bf:41:9b:7f:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:43.737534 containerd[2004]: 2026-04-16 23:33:43.725 [INFO][5403] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" Namespace="calico-system" Pod="csi-node-driver-xx9nj" WorkloadEndpoint="ip--172--31--18--112-k8s-csi--node--driver--xx9nj-eth0" Apr 16 23:33:43.788507 containerd[2004]: time="2026-04-16T23:33:43.787837887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-9trh4,Uid:cb940ee4-a92c-4b55-a310-219ffaee8b28,Namespace:kube-system,Attempt:0,}" Apr 16 23:33:43.846505 containerd[2004]: time="2026-04-16T23:33:43.846337647Z" level=info msg="connecting to shim 8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2" address="unix:///run/containerd/s/b22ee46f506bd008f05c6cddc0869e7bb6e407106f37357a4790cdd429d1f4a7" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:33:43.855527 systemd-networkd[1882]: cali2ee5b23fb13: Link UP Apr 16 23:33:43.857735 systemd-networkd[1882]: cali2ee5b23fb13: Gained carrier Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.154 [INFO][5401] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--mcbtl-eth0 calico-apiserver-7fc4b8bb6b- calico-system f58985e8-82ab-45c7-a1da-3382768dd37c 902 0 2026-04-16 23:33:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fc4b8bb6b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-112 calico-apiserver-7fc4b8bb6b-mcbtl eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali2ee5b23fb13 [] [] }} ContainerID="f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" Namespace="calico-system" Pod="calico-apiserver-7fc4b8bb6b-mcbtl" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--mcbtl-" Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.155 
[INFO][5401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" Namespace="calico-system" Pod="calico-apiserver-7fc4b8bb6b-mcbtl" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--mcbtl-eth0" Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.313 [INFO][5451] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" HandleID="k8s-pod-network.f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" Workload="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--mcbtl-eth0" Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.385 [INFO][5451] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" HandleID="k8s-pod-network.f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" Workload="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--mcbtl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c7e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-112", "pod":"calico-apiserver-7fc4b8bb6b-mcbtl", "timestamp":"2026-04-16 23:33:43.3138867 +0000 UTC"}, Hostname:"ip-172-31-18-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000ac840)} Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.385 [INFO][5451] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.648 [INFO][5451] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.649 [INFO][5451] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-112' Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.667 [INFO][5451] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" host="ip-172-31-18-112" Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.711 [INFO][5451] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-112" Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.725 [INFO][5451] ipam/ipam.go 526: Trying affinity for 192.168.118.0/26 host="ip-172-31-18-112" Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.736 [INFO][5451] ipam/ipam.go 160: Attempting to load block cidr=192.168.118.0/26 host="ip-172-31-18-112" Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.741 [INFO][5451] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.118.0/26 host="ip-172-31-18-112" Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.741 [INFO][5451] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.118.0/26 handle="k8s-pod-network.f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" host="ip-172-31-18-112" Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.748 [INFO][5451] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4 Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.762 [INFO][5451] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.118.0/26 handle="k8s-pod-network.f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" host="ip-172-31-18-112" Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.788 [INFO][5451] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.118.7/26] block=192.168.118.0/26 
handle="k8s-pod-network.f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" host="ip-172-31-18-112" Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.789 [INFO][5451] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.118.7/26] handle="k8s-pod-network.f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" host="ip-172-31-18-112" Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.790 [INFO][5451] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:33:43.939647 containerd[2004]: 2026-04-16 23:33:43.790 [INFO][5451] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.118.7/26] IPv6=[] ContainerID="f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" HandleID="k8s-pod-network.f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" Workload="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--mcbtl-eth0" Apr 16 23:33:43.943758 containerd[2004]: 2026-04-16 23:33:43.819 [INFO][5401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" Namespace="calico-system" Pod="calico-apiserver-7fc4b8bb6b-mcbtl" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--mcbtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--mcbtl-eth0", GenerateName:"calico-apiserver-7fc4b8bb6b-", Namespace:"calico-system", SelfLink:"", UID:"f58985e8-82ab-45c7-a1da-3382768dd37c", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fc4b8bb6b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-112", ContainerID:"", Pod:"calico-apiserver-7fc4b8bb6b-mcbtl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2ee5b23fb13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:43.943758 containerd[2004]: 2026-04-16 23:33:43.821 [INFO][5401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.7/32] ContainerID="f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" Namespace="calico-system" Pod="calico-apiserver-7fc4b8bb6b-mcbtl" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--mcbtl-eth0" Apr 16 23:33:43.943758 containerd[2004]: 2026-04-16 23:33:43.822 [INFO][5401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ee5b23fb13 ContainerID="f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" Namespace="calico-system" Pod="calico-apiserver-7fc4b8bb6b-mcbtl" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--mcbtl-eth0" Apr 16 23:33:43.943758 containerd[2004]: 2026-04-16 23:33:43.870 [INFO][5401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" Namespace="calico-system" Pod="calico-apiserver-7fc4b8bb6b-mcbtl" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--mcbtl-eth0" Apr 16 23:33:43.943758 containerd[2004]: 2026-04-16 23:33:43.888 [INFO][5401] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" Namespace="calico-system" Pod="calico-apiserver-7fc4b8bb6b-mcbtl" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--mcbtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--mcbtl-eth0", GenerateName:"calico-apiserver-7fc4b8bb6b-", Namespace:"calico-system", SelfLink:"", UID:"f58985e8-82ab-45c7-a1da-3382768dd37c", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 33, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fc4b8bb6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-112", ContainerID:"f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4", Pod:"calico-apiserver-7fc4b8bb6b-mcbtl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2ee5b23fb13", MAC:"4a:87:69:15:3b:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:43.943758 containerd[2004]: 2026-04-16 23:33:43.916 [INFO][5401] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" Namespace="calico-system" Pod="calico-apiserver-7fc4b8bb6b-mcbtl" WorkloadEndpoint="ip--172--31--18--112-k8s-calico--apiserver--7fc4b8bb6b--mcbtl-eth0" Apr 16 23:33:44.018644 systemd[1]: Started cri-containerd-8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2.scope - libcontainer container 8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2. Apr 16 23:33:44.048152 containerd[2004]: time="2026-04-16T23:33:44.047447844Z" level=info msg="connecting to shim f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4" address="unix:///run/containerd/s/8c8fc41abce062b7461473c3f3ed987464994239399132099d2023627261c3dd" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:33:44.084637 systemd-networkd[1882]: calic70e5e1a2ec: Gained IPv6LL Apr 16 23:33:44.086928 systemd-networkd[1882]: cali4cc73b3ddb0: Gained IPv6LL Apr 16 23:33:44.243319 systemd[1]: Started cri-containerd-f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4.scope - libcontainer container f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4. 
Apr 16 23:33:44.261465 containerd[2004]: time="2026-04-16T23:33:44.261401221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xx9nj,Uid:a71fac77-981f-4b75-9304-8c3a33a51180,Namespace:calico-system,Attempt:0,} returns sandbox id \"8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2\"" Apr 16 23:33:44.468363 systemd-networkd[1882]: cali9e21be20ff3: Link UP Apr 16 23:33:44.471516 systemd-networkd[1882]: cali9e21be20ff3: Gained carrier Apr 16 23:33:44.482962 containerd[2004]: time="2026-04-16T23:33:44.482840750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fc4b8bb6b-mcbtl,Uid:f58985e8-82ab-45c7-a1da-3382768dd37c,Namespace:calico-system,Attempt:0,} returns sandbox id \"f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4\"" Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.090 [INFO][5511] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--112-k8s-coredns--7d764666f9--9trh4-eth0 coredns-7d764666f9- kube-system cb940ee4-a92c-4b55-a310-219ffaee8b28 894 0 2026-04-16 23:32:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-112 coredns-7d764666f9-9trh4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9e21be20ff3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" Namespace="kube-system" Pod="coredns-7d764666f9-9trh4" WorkloadEndpoint="ip--172--31--18--112-k8s-coredns--7d764666f9--9trh4-" Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.091 [INFO][5511] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" Namespace="kube-system" Pod="coredns-7d764666f9-9trh4" WorkloadEndpoint="ip--172--31--18--112-k8s-coredns--7d764666f9--9trh4-eth0" Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.258 [INFO][5594] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" HandleID="k8s-pod-network.f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" Workload="ip--172--31--18--112-k8s-coredns--7d764666f9--9trh4-eth0" Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.291 [INFO][5594] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" HandleID="k8s-pod-network.f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" Workload="ip--172--31--18--112-k8s-coredns--7d764666f9--9trh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000399870), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-112", "pod":"coredns-7d764666f9-9trh4", "timestamp":"2026-04-16 23:33:44.258534133 +0000 UTC"}, Hostname:"ip-172-31-18-112", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400057e160)} Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.292 [INFO][5594] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.292 [INFO][5594] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.293 [INFO][5594] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-112' Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.299 [INFO][5594] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" host="ip-172-31-18-112" Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.316 [INFO][5594] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-112" Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.333 [INFO][5594] ipam/ipam.go 526: Trying affinity for 192.168.118.0/26 host="ip-172-31-18-112" Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.338 [INFO][5594] ipam/ipam.go 160: Attempting to load block cidr=192.168.118.0/26 host="ip-172-31-18-112" Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.346 [INFO][5594] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.118.0/26 host="ip-172-31-18-112" Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.347 [INFO][5594] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.118.0/26 handle="k8s-pod-network.f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" host="ip-172-31-18-112" Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.351 [INFO][5594] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.369 [INFO][5594] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.118.0/26 handle="k8s-pod-network.f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" host="ip-172-31-18-112" Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.418 [INFO][5594] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.118.8/26] block=192.168.118.0/26 
handle="k8s-pod-network.f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" host="ip-172-31-18-112" Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.419 [INFO][5594] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.118.8/26] handle="k8s-pod-network.f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" host="ip-172-31-18-112" Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.421 [INFO][5594] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 16 23:33:44.522870 containerd[2004]: 2026-04-16 23:33:44.421 [INFO][5594] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.118.8/26] IPv6=[] ContainerID="f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" HandleID="k8s-pod-network.f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" Workload="ip--172--31--18--112-k8s-coredns--7d764666f9--9trh4-eth0" Apr 16 23:33:44.525853 containerd[2004]: 2026-04-16 23:33:44.449 [INFO][5511] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" Namespace="kube-system" Pod="coredns-7d764666f9-9trh4" WorkloadEndpoint="ip--172--31--18--112-k8s-coredns--7d764666f9--9trh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--112-k8s-coredns--7d764666f9--9trh4-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"cb940ee4-a92c-4b55-a310-219ffaee8b28", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 32, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-112", ContainerID:"", Pod:"coredns-7d764666f9-9trh4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e21be20ff3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:44.525853 containerd[2004]: 2026-04-16 23:33:44.452 [INFO][5511] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.118.8/32] ContainerID="f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" Namespace="kube-system" Pod="coredns-7d764666f9-9trh4" WorkloadEndpoint="ip--172--31--18--112-k8s-coredns--7d764666f9--9trh4-eth0" Apr 16 23:33:44.525853 containerd[2004]: 2026-04-16 23:33:44.453 [INFO][5511] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e21be20ff3 ContainerID="f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" Namespace="kube-system" Pod="coredns-7d764666f9-9trh4" 
WorkloadEndpoint="ip--172--31--18--112-k8s-coredns--7d764666f9--9trh4-eth0" Apr 16 23:33:44.525853 containerd[2004]: 2026-04-16 23:33:44.474 [INFO][5511] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" Namespace="kube-system" Pod="coredns-7d764666f9-9trh4" WorkloadEndpoint="ip--172--31--18--112-k8s-coredns--7d764666f9--9trh4-eth0" Apr 16 23:33:44.525853 containerd[2004]: 2026-04-16 23:33:44.478 [INFO][5511] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" Namespace="kube-system" Pod="coredns-7d764666f9-9trh4" WorkloadEndpoint="ip--172--31--18--112-k8s-coredns--7d764666f9--9trh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--112-k8s-coredns--7d764666f9--9trh4-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"cb940ee4-a92c-4b55-a310-219ffaee8b28", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 23, 32, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-112", ContainerID:"f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f", Pod:"coredns-7d764666f9-9trh4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e21be20ff3", MAC:"72:70:5d:1c:18:7e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 23:33:44.525853 containerd[2004]: 2026-04-16 23:33:44.506 [INFO][5511] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" Namespace="kube-system" Pod="coredns-7d764666f9-9trh4" WorkloadEndpoint="ip--172--31--18--112-k8s-coredns--7d764666f9--9trh4-eth0" Apr 16 23:33:44.536824 sshd[5482]: Accepted publickey for core from 20.229.252.112 port 52260 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:33:44.542746 sshd-session[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:33:44.563264 systemd-logind[1986]: New session 8 of user core. 
Apr 16 23:33:44.588050 containerd[2004]: time="2026-04-16T23:33:44.587979891Z" level=info msg="connecting to shim f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f" address="unix:///run/containerd/s/f773db9d95639b1f06d9553948aff549dc37a46300325e08299707d4527c8a30" namespace=k8s.io protocol=ttrpc version=3 Apr 16 23:33:44.588726 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 16 23:33:44.688541 systemd[1]: Started cri-containerd-f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f.scope - libcontainer container f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f. Apr 16 23:33:44.824313 containerd[2004]: time="2026-04-16T23:33:44.824160220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-9trh4,Uid:cb940ee4-a92c-4b55-a310-219ffaee8b28,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f\"" Apr 16 23:33:44.836591 containerd[2004]: time="2026-04-16T23:33:44.836529604Z" level=info msg="CreateContainer within sandbox \"f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 16 23:33:44.852983 containerd[2004]: time="2026-04-16T23:33:44.852585808Z" level=info msg="Container 8b20f25c78eddeae34e6f8e9280a7b956aa3c84fc4c4246f51ba24f2519245c1: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:44.872350 containerd[2004]: time="2026-04-16T23:33:44.872078908Z" level=info msg="CreateContainer within sandbox \"f1d5ecd6a80f4b864badec08292665f8af47c241a9601542c007e989ee83164f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b20f25c78eddeae34e6f8e9280a7b956aa3c84fc4c4246f51ba24f2519245c1\"" Apr 16 23:33:44.877759 containerd[2004]: time="2026-04-16T23:33:44.877390276Z" level=info msg="StartContainer for \"8b20f25c78eddeae34e6f8e9280a7b956aa3c84fc4c4246f51ba24f2519245c1\"" Apr 16 23:33:44.884642 containerd[2004]: 
time="2026-04-16T23:33:44.884349316Z" level=info msg="connecting to shim 8b20f25c78eddeae34e6f8e9280a7b956aa3c84fc4c4246f51ba24f2519245c1" address="unix:///run/containerd/s/f773db9d95639b1f06d9553948aff549dc37a46300325e08299707d4527c8a30" protocol=ttrpc version=3 Apr 16 23:33:44.940497 systemd[1]: Started cri-containerd-8b20f25c78eddeae34e6f8e9280a7b956aa3c84fc4c4246f51ba24f2519245c1.scope - libcontainer container 8b20f25c78eddeae34e6f8e9280a7b956aa3c84fc4c4246f51ba24f2519245c1. Apr 16 23:33:44.982968 systemd-networkd[1882]: cali2ee5b23fb13: Gained IPv6LL Apr 16 23:33:45.087109 containerd[2004]: time="2026-04-16T23:33:45.086289553Z" level=info msg="StartContainer for \"8b20f25c78eddeae34e6f8e9280a7b956aa3c84fc4c4246f51ba24f2519245c1\" returns successfully" Apr 16 23:33:45.329172 sshd[5661]: Connection closed by 20.229.252.112 port 52260 Apr 16 23:33:45.329792 sshd-session[5482]: pam_unix(sshd:session): session closed for user core Apr 16 23:33:45.338086 systemd[1]: sshd@7-172.31.18.112:22-20.229.252.112:52260.service: Deactivated successfully. Apr 16 23:33:45.344733 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 23:33:45.346480 systemd-logind[1986]: Session 8 logged out. Waiting for processes to exit. Apr 16 23:33:45.351406 systemd-logind[1986]: Removed session 8. 
Apr 16 23:33:45.562796 kubelet[3624]: I0416 23:33:45.561539 3624 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-9trh4" podStartSLOduration=59.561516867 podStartE2EDuration="59.561516867s" podCreationTimestamp="2026-04-16 23:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 23:33:45.556795311 +0000 UTC m=+63.098871050" watchObservedRunningTime="2026-04-16 23:33:45.561516867 +0000 UTC m=+63.103592558" Apr 16 23:33:45.621240 systemd-networkd[1882]: cali64611947fb0: Gained IPv6LL Apr 16 23:33:46.261458 systemd-networkd[1882]: cali9e21be20ff3: Gained IPv6LL Apr 16 23:33:46.483225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4014338737.mount: Deactivated successfully. Apr 16 23:33:47.162784 containerd[2004]: time="2026-04-16T23:33:47.162730215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:47.165702 containerd[2004]: time="2026-04-16T23:33:47.165633615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Apr 16 23:33:47.167189 containerd[2004]: time="2026-04-16T23:33:47.167019303Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:47.170448 containerd[2004]: time="2026-04-16T23:33:47.170364099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:47.172092 containerd[2004]: time="2026-04-16T23:33:47.171903891Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id 
\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 4.178372348s" Apr 16 23:33:47.172092 containerd[2004]: time="2026-04-16T23:33:47.171960915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Apr 16 23:33:47.175376 containerd[2004]: time="2026-04-16T23:33:47.175327923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 16 23:33:47.181829 containerd[2004]: time="2026-04-16T23:33:47.181615768Z" level=info msg="CreateContainer within sandbox \"ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 16 23:33:47.194365 containerd[2004]: time="2026-04-16T23:33:47.193402720Z" level=info msg="Container 2f6d039b874f67ba165c5346751706e10bb45cd617cb79197a422ccc7723c887: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:47.210574 containerd[2004]: time="2026-04-16T23:33:47.210486268Z" level=info msg="CreateContainer within sandbox \"ceb3c77a2045234c10e186cbdcf984b346ad605ed6ae4d30c4cfd3752faac435\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2f6d039b874f67ba165c5346751706e10bb45cd617cb79197a422ccc7723c887\"" Apr 16 23:33:47.213207 containerd[2004]: time="2026-04-16T23:33:47.212319412Z" level=info msg="StartContainer for \"2f6d039b874f67ba165c5346751706e10bb45cd617cb79197a422ccc7723c887\"" Apr 16 23:33:47.216680 containerd[2004]: time="2026-04-16T23:33:47.216614416Z" level=info msg="connecting to shim 2f6d039b874f67ba165c5346751706e10bb45cd617cb79197a422ccc7723c887" address="unix:///run/containerd/s/cecc6804c983d854b97d1f4ecc9e6b78a8cded219e24e7a7593087062c38a99d" protocol=ttrpc version=3 Apr 16 
23:33:47.266451 systemd[1]: Started cri-containerd-2f6d039b874f67ba165c5346751706e10bb45cd617cb79197a422ccc7723c887.scope - libcontainer container 2f6d039b874f67ba165c5346751706e10bb45cd617cb79197a422ccc7723c887. Apr 16 23:33:47.356247 containerd[2004]: time="2026-04-16T23:33:47.356115328Z" level=info msg="StartContainer for \"2f6d039b874f67ba165c5346751706e10bb45cd617cb79197a422ccc7723c887\" returns successfully" Apr 16 23:33:48.490869 ntpd[2206]: Listen normally on 9 cali6fe00bc0642 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 16 23:33:48.490966 ntpd[2206]: Listen normally on 10 calic70e5e1a2ec [fe80::ecee:eeff:feee:eeee%9]:123 Apr 16 23:33:48.491013 ntpd[2206]: Listen normally on 11 cali4cc73b3ddb0 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 16 23:33:48.491064 ntpd[2206]: Listen normally on 12 calif007c19ac1d [fe80::ecee:eeff:feee:eeee%11]:123 Apr 16 23:33:48.491113 ntpd[2206]: Listen normally on 13 cali64611947fb0 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 16 23:33:48.493298 ntpd[2206]: Listen normally on 14 cali2ee5b23fb13 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 16 23:33:48.493351 ntpd[2206]: Listen normally on 15 cali9e21be20ff3 [fe80::ecee:eeff:feee:eeee%14]:123 Apr 16 23:33:50.109564 containerd[2004]: time="2026-04-16T23:33:50.109499658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:50.111218 containerd[2004]: time="2026-04-16T23:33:50.111157842Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Apr 16 23:33:50.112809 containerd[2004]: time="2026-04-16T23:33:50.112244670Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:50.117707 containerd[2004]: time="2026-04-16T23:33:50.117612582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:50.119574 containerd[2004]: time="2026-04-16T23:33:50.119421090Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 2.943862119s" Apr 16 23:33:50.119574 containerd[2004]: time="2026-04-16T23:33:50.119480418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Apr 16 23:33:50.122316 containerd[2004]: time="2026-04-16T23:33:50.121350282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr
16 23:33:50.161760 containerd[2004]: time="2026-04-16T23:33:50.161684634Z" level=info msg="CreateContainer within sandbox \"df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 16 23:33:50.174097 containerd[2004]: time="2026-04-16T23:33:50.174032346Z" level=info msg="Container 79fbf78955a09ffe3d2b882cd17f180e28af6305447175cd7a9028728ee5769a: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:50.184859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount593231283.mount: Deactivated successfully. Apr 16 23:33:50.195363 containerd[2004]: time="2026-04-16T23:33:50.195284034Z" level=info msg="CreateContainer within sandbox \"df9f036fd218ed6983db7e53d22faf3ecbf91bdec28f300ee932c6e9ff9dfa1f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"79fbf78955a09ffe3d2b882cd17f180e28af6305447175cd7a9028728ee5769a\"" Apr 16 23:33:50.196514 containerd[2004]: time="2026-04-16T23:33:50.196458942Z" level=info msg="StartContainer for \"79fbf78955a09ffe3d2b882cd17f180e28af6305447175cd7a9028728ee5769a\"" Apr 16 23:33:50.202287 containerd[2004]: time="2026-04-16T23:33:50.202217347Z" level=info msg="connecting to shim 79fbf78955a09ffe3d2b882cd17f180e28af6305447175cd7a9028728ee5769a" address="unix:///run/containerd/s/8e5fde24335d31d23b6341e73c9def4fe7538b97cf2d062d6e3f991fd5c63da3" protocol=ttrpc version=3 Apr 16 23:33:50.252473 systemd[1]: Started cri-containerd-79fbf78955a09ffe3d2b882cd17f180e28af6305447175cd7a9028728ee5769a.scope - libcontainer container 79fbf78955a09ffe3d2b882cd17f180e28af6305447175cd7a9028728ee5769a. 
Apr 16 23:33:50.345618 containerd[2004]: time="2026-04-16T23:33:50.345554263Z" level=info msg="StartContainer for \"79fbf78955a09ffe3d2b882cd17f180e28af6305447175cd7a9028728ee5769a\" returns successfully" Apr 16 23:33:50.503549 systemd[1]: Started sshd@8-172.31.18.112:22-20.229.252.112:58774.service - OpenSSH per-connection server daemon (20.229.252.112:58774). Apr 16 23:33:50.623932 kubelet[3624]: I0416 23:33:50.623563 3624 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-vqhkd" podStartSLOduration=38.440115821 podStartE2EDuration="42.623541741s" podCreationTimestamp="2026-04-16 23:33:08 +0000 UTC" firstStartedPulling="2026-04-16 23:33:42.991104171 +0000 UTC m=+60.533179874" lastFinishedPulling="2026-04-16 23:33:47.174530091 +0000 UTC m=+64.716605794" observedRunningTime="2026-04-16 23:33:47.558658061 +0000 UTC m=+65.100733788" watchObservedRunningTime="2026-04-16 23:33:50.623541741 +0000 UTC m=+68.165617456" Apr 16 23:33:50.630195 kubelet[3624]: I0416 23:33:50.629118 3624 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-694f584b75-mmxmb" podStartSLOduration=32.791780847 podStartE2EDuration="39.629096817s" podCreationTimestamp="2026-04-16 23:33:11 +0000 UTC" firstStartedPulling="2026-04-16 23:33:43.283646856 +0000 UTC m=+60.825722559" lastFinishedPulling="2026-04-16 23:33:50.120962766 +0000 UTC m=+67.663038529" observedRunningTime="2026-04-16 23:33:50.628625289 +0000 UTC m=+68.170701100" watchObservedRunningTime="2026-04-16 23:33:50.629096817 +0000 UTC m=+68.171172508" Apr 16 23:33:51.450269 sshd[5921]: Accepted publickey for core from 20.229.252.112 port 58774 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:33:51.454869 sshd-session[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:33:51.471283 systemd-logind[1986]: New session 9 of user core. 
Apr 16 23:33:51.478453 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 23:33:52.144146 sshd[5947]: Connection closed by 20.229.252.112 port 58774 Apr 16 23:33:52.145038 sshd-session[5921]: pam_unix(sshd:session): session closed for user core Apr 16 23:33:52.159000 systemd[1]: sshd@8-172.31.18.112:22-20.229.252.112:58774.service: Deactivated successfully. Apr 16 23:33:52.165410 systemd[1]: session-9.scope: Deactivated successfully. Apr 16 23:33:52.168873 systemd-logind[1986]: Session 9 logged out. Waiting for processes to exit. Apr 16 23:33:52.174663 systemd-logind[1986]: Removed session 9. Apr 16 23:33:52.840146 containerd[2004]: time="2026-04-16T23:33:52.840067740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:52.842192 containerd[2004]: time="2026-04-16T23:33:52.842092584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Apr 16 23:33:52.843167 containerd[2004]: time="2026-04-16T23:33:52.842950464Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:52.848580 containerd[2004]: time="2026-04-16T23:33:52.848491332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:52.850897 containerd[2004]: time="2026-04-16T23:33:52.849857244Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", 
size \"46949856\" in 2.728445954s" Apr 16 23:33:52.850897 containerd[2004]: time="2026-04-16T23:33:52.849923904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 16 23:33:52.852359 containerd[2004]: time="2026-04-16T23:33:52.852308832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 16 23:33:52.860814 containerd[2004]: time="2026-04-16T23:33:52.860747088Z" level=info msg="CreateContainer within sandbox \"8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 23:33:52.874742 containerd[2004]: time="2026-04-16T23:33:52.872617764Z" level=info msg="Container 1e158f58ac4dcf8f1be476e8960e756a35b1de4f1e4fb0fdbcc5888bc0e5af0b: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:52.893159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707575247.mount: Deactivated successfully. 
Apr 16 23:33:52.896993 containerd[2004]: time="2026-04-16T23:33:52.896937036Z" level=info msg="CreateContainer within sandbox \"8bcd708105fb46b715be2752c165170a078354169f83a545ab4e6ebcec310b8a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1e158f58ac4dcf8f1be476e8960e756a35b1de4f1e4fb0fdbcc5888bc0e5af0b\"" Apr 16 23:33:52.898785 containerd[2004]: time="2026-04-16T23:33:52.898739316Z" level=info msg="StartContainer for \"1e158f58ac4dcf8f1be476e8960e756a35b1de4f1e4fb0fdbcc5888bc0e5af0b\"" Apr 16 23:33:52.903522 containerd[2004]: time="2026-04-16T23:33:52.903468384Z" level=info msg="connecting to shim 1e158f58ac4dcf8f1be476e8960e756a35b1de4f1e4fb0fdbcc5888bc0e5af0b" address="unix:///run/containerd/s/00e6b60e14b9b87f8e171806ae929aab10f8a5d5d2f76825909cb3f3e69748dd" protocol=ttrpc version=3 Apr 16 23:33:52.947452 systemd[1]: Started cri-containerd-1e158f58ac4dcf8f1be476e8960e756a35b1de4f1e4fb0fdbcc5888bc0e5af0b.scope - libcontainer container 1e158f58ac4dcf8f1be476e8960e756a35b1de4f1e4fb0fdbcc5888bc0e5af0b. 
Apr 16 23:33:53.057054 containerd[2004]: time="2026-04-16T23:33:53.057008457Z" level=info msg="StartContainer for \"1e158f58ac4dcf8f1be476e8960e756a35b1de4f1e4fb0fdbcc5888bc0e5af0b\" returns successfully" Apr 16 23:33:53.626935 kubelet[3624]: I0416 23:33:53.626577 3624 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7fc4b8bb6b-shv5x" podStartSLOduration=37.292680625 podStartE2EDuration="46.626556996s" podCreationTimestamp="2026-04-16 23:33:07 +0000 UTC" firstStartedPulling="2026-04-16 23:33:43.518186797 +0000 UTC m=+61.060262500" lastFinishedPulling="2026-04-16 23:33:52.852063108 +0000 UTC m=+70.394138871" observedRunningTime="2026-04-16 23:33:53.62338452 +0000 UTC m=+71.165460319" watchObservedRunningTime="2026-04-16 23:33:53.626556996 +0000 UTC m=+71.168632711" Apr 16 23:33:54.392190 containerd[2004]: time="2026-04-16T23:33:54.391948859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:54.394324 containerd[2004]: time="2026-04-16T23:33:54.394156391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Apr 16 23:33:54.397219 containerd[2004]: time="2026-04-16T23:33:54.395667035Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:54.399931 containerd[2004]: time="2026-04-16T23:33:54.399875711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:54.401476 containerd[2004]: time="2026-04-16T23:33:54.401408195Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id 
\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.548346867s" Apr 16 23:33:54.401476 containerd[2004]: time="2026-04-16T23:33:54.401472815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Apr 16 23:33:54.404672 containerd[2004]: time="2026-04-16T23:33:54.404591219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 16 23:33:54.440090 containerd[2004]: time="2026-04-16T23:33:54.440021352Z" level=info msg="CreateContainer within sandbox \"8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 16 23:33:54.458256 containerd[2004]: time="2026-04-16T23:33:54.458178120Z" level=info msg="Container 1f1a76e7134ceb845cb4e3098271b9429c46cf170c500f0740f25262ab1f5466: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:54.475422 containerd[2004]: time="2026-04-16T23:33:54.475294032Z" level=info msg="CreateContainer within sandbox \"8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1f1a76e7134ceb845cb4e3098271b9429c46cf170c500f0740f25262ab1f5466\"" Apr 16 23:33:54.476701 containerd[2004]: time="2026-04-16T23:33:54.476571228Z" level=info msg="StartContainer for \"1f1a76e7134ceb845cb4e3098271b9429c46cf170c500f0740f25262ab1f5466\"" Apr 16 23:33:54.480175 containerd[2004]: time="2026-04-16T23:33:54.480027300Z" level=info msg="connecting to shim 1f1a76e7134ceb845cb4e3098271b9429c46cf170c500f0740f25262ab1f5466" address="unix:///run/containerd/s/b22ee46f506bd008f05c6cddc0869e7bb6e407106f37357a4790cdd429d1f4a7" protocol=ttrpc version=3 Apr 16 23:33:54.530876 
systemd[1]: Started cri-containerd-1f1a76e7134ceb845cb4e3098271b9429c46cf170c500f0740f25262ab1f5466.scope - libcontainer container 1f1a76e7134ceb845cb4e3098271b9429c46cf170c500f0740f25262ab1f5466. Apr 16 23:33:54.680067 containerd[2004]: time="2026-04-16T23:33:54.679786369Z" level=info msg="StartContainer for \"1f1a76e7134ceb845cb4e3098271b9429c46cf170c500f0740f25262ab1f5466\" returns successfully" Apr 16 23:33:54.768553 containerd[2004]: time="2026-04-16T23:33:54.768497245Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:54.771841 containerd[2004]: time="2026-04-16T23:33:54.771790393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 16 23:33:54.775457 containerd[2004]: time="2026-04-16T23:33:54.775407373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 370.735394ms" Apr 16 23:33:54.775621 containerd[2004]: time="2026-04-16T23:33:54.775594081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Apr 16 23:33:54.778677 containerd[2004]: time="2026-04-16T23:33:54.777727609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 16 23:33:54.789479 containerd[2004]: time="2026-04-16T23:33:54.788821669Z" level=info msg="CreateContainer within sandbox \"f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 16 23:33:54.802070 containerd[2004]: 
time="2026-04-16T23:33:54.802013125Z" level=info msg="Container 7aa293110d905d24136ab004a26bb63203f1f8e16f59f37a713a99afcd646b6d: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:54.820833 containerd[2004]: time="2026-04-16T23:33:54.820755517Z" level=info msg="CreateContainer within sandbox \"f707144d34f29f94c8cc8f5b0debc83ad7fe5e9b1fe78f456e671198935c20d4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7aa293110d905d24136ab004a26bb63203f1f8e16f59f37a713a99afcd646b6d\"" Apr 16 23:33:54.822951 containerd[2004]: time="2026-04-16T23:33:54.822880453Z" level=info msg="StartContainer for \"7aa293110d905d24136ab004a26bb63203f1f8e16f59f37a713a99afcd646b6d\"" Apr 16 23:33:54.826357 containerd[2004]: time="2026-04-16T23:33:54.826291141Z" level=info msg="connecting to shim 7aa293110d905d24136ab004a26bb63203f1f8e16f59f37a713a99afcd646b6d" address="unix:///run/containerd/s/8c8fc41abce062b7461473c3f3ed987464994239399132099d2023627261c3dd" protocol=ttrpc version=3 Apr 16 23:33:54.870434 systemd[1]: Started cri-containerd-7aa293110d905d24136ab004a26bb63203f1f8e16f59f37a713a99afcd646b6d.scope - libcontainer container 7aa293110d905d24136ab004a26bb63203f1f8e16f59f37a713a99afcd646b6d. 
Apr 16 23:33:55.025207 containerd[2004]: time="2026-04-16T23:33:55.025014394Z" level=info msg="StartContainer for \"7aa293110d905d24136ab004a26bb63203f1f8e16f59f37a713a99afcd646b6d\" returns successfully" Apr 16 23:33:56.652159 containerd[2004]: time="2026-04-16T23:33:56.652079559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:56.656156 containerd[2004]: time="2026-04-16T23:33:56.655862631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Apr 16 23:33:56.659084 containerd[2004]: time="2026-04-16T23:33:56.658998975Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:56.671288 containerd[2004]: time="2026-04-16T23:33:56.671217003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 23:33:56.675562 containerd[2004]: time="2026-04-16T23:33:56.675458019Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.89767119s" Apr 16 23:33:56.675562 containerd[2004]: time="2026-04-16T23:33:56.675542091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Apr 16 23:33:56.688744 containerd[2004]: 
time="2026-04-16T23:33:56.688680051Z" level=info msg="CreateContainer within sandbox \"8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 16 23:33:56.705192 containerd[2004]: time="2026-04-16T23:33:56.705095955Z" level=info msg="Container 76e262b26d3c01eab23283006023a5e48cdcb43284764793d852ba4dbdda66ec: CDI devices from CRI Config.CDIDevices: []" Apr 16 23:33:56.742601 containerd[2004]: time="2026-04-16T23:33:56.742531587Z" level=info msg="CreateContainer within sandbox \"8b9ef988f954f6fa001251404aaa50a49029bac0c39a66649ae399a8343a83e2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"76e262b26d3c01eab23283006023a5e48cdcb43284764793d852ba4dbdda66ec\"" Apr 16 23:33:56.745355 containerd[2004]: time="2026-04-16T23:33:56.744535995Z" level=info msg="StartContainer for \"76e262b26d3c01eab23283006023a5e48cdcb43284764793d852ba4dbdda66ec\"" Apr 16 23:33:56.751392 containerd[2004]: time="2026-04-16T23:33:56.751332303Z" level=info msg="connecting to shim 76e262b26d3c01eab23283006023a5e48cdcb43284764793d852ba4dbdda66ec" address="unix:///run/containerd/s/b22ee46f506bd008f05c6cddc0869e7bb6e407106f37357a4790cdd429d1f4a7" protocol=ttrpc version=3 Apr 16 23:33:56.830594 systemd[1]: Started cri-containerd-76e262b26d3c01eab23283006023a5e48cdcb43284764793d852ba4dbdda66ec.scope - libcontainer container 76e262b26d3c01eab23283006023a5e48cdcb43284764793d852ba4dbdda66ec. 
Apr 16 23:33:56.905943 kubelet[3624]: I0416 23:33:56.904879 3624 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7fc4b8bb6b-mcbtl" podStartSLOduration=39.626248241 podStartE2EDuration="49.90485836s" podCreationTimestamp="2026-04-16 23:33:07 +0000 UTC" firstStartedPulling="2026-04-16 23:33:44.498588242 +0000 UTC m=+62.040663945" lastFinishedPulling="2026-04-16 23:33:54.777198313 +0000 UTC m=+72.319274064" observedRunningTime="2026-04-16 23:33:55.670790846 +0000 UTC m=+73.212866561" watchObservedRunningTime="2026-04-16 23:33:56.90485836 +0000 UTC m=+74.446934063" Apr 16 23:33:57.141429 containerd[2004]: time="2026-04-16T23:33:57.141290809Z" level=info msg="StartContainer for \"76e262b26d3c01eab23283006023a5e48cdcb43284764793d852ba4dbdda66ec\" returns successfully" Apr 16 23:33:57.343476 systemd[1]: Started sshd@9-172.31.18.112:22-20.229.252.112:33408.service - OpenSSH per-connection server daemon (20.229.252.112:33408). Apr 16 23:33:57.651679 kubelet[3624]: I0416 23:33:57.651557 3624 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 16 23:33:57.956955 kubelet[3624]: I0416 23:33:57.956479 3624 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 16 23:33:57.956955 kubelet[3624]: I0416 23:33:57.956560 3624 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 16 23:33:58.263712 sshd[6132]: Accepted publickey for core from 20.229.252.112 port 33408 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:33:58.267751 sshd-session[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:33:58.282072 systemd-logind[1986]: New session 10 of user core. 
Apr 16 23:33:58.287633 kubelet[3624]: I0416 23:33:58.287449 3624 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-xx9nj" podStartSLOduration=35.874756073 podStartE2EDuration="48.287424087s" podCreationTimestamp="2026-04-16 23:33:10 +0000 UTC" firstStartedPulling="2026-04-16 23:33:44.266049085 +0000 UTC m=+61.808124776" lastFinishedPulling="2026-04-16 23:33:56.678717087 +0000 UTC m=+74.220792790" observedRunningTime="2026-04-16 23:33:57.683081416 +0000 UTC m=+75.225157119" watchObservedRunningTime="2026-04-16 23:33:58.287424087 +0000 UTC m=+75.829499898" Apr 16 23:33:58.291582 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 16 23:33:58.922202 sshd[6135]: Connection closed by 20.229.252.112 port 33408 Apr 16 23:33:58.923201 sshd-session[6132]: pam_unix(sshd:session): session closed for user core Apr 16 23:33:58.932824 systemd-logind[1986]: Session 10 logged out. Waiting for processes to exit. Apr 16 23:33:58.934919 systemd[1]: sshd@9-172.31.18.112:22-20.229.252.112:33408.service: Deactivated successfully. Apr 16 23:33:58.940477 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 23:33:58.945970 systemd-logind[1986]: Removed session 10. Apr 16 23:34:04.106812 systemd[1]: Started sshd@10-172.31.18.112:22-20.229.252.112:33422.service - OpenSSH per-connection server daemon (20.229.252.112:33422). Apr 16 23:34:05.014158 sshd[6215]: Accepted publickey for core from 20.229.252.112 port 33422 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:34:05.017025 sshd-session[6215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:34:05.027112 systemd-logind[1986]: New session 11 of user core. Apr 16 23:34:05.033429 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 16 23:34:05.675280 sshd[6225]: Connection closed by 20.229.252.112 port 33422 Apr 16 23:34:05.676418 sshd-session[6215]: pam_unix(sshd:session): session closed for user core Apr 16 23:34:05.686042 systemd[1]: sshd@10-172.31.18.112:22-20.229.252.112:33422.service: Deactivated successfully. Apr 16 23:34:05.693643 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 23:34:05.697389 systemd-logind[1986]: Session 11 logged out. Waiting for processes to exit. Apr 16 23:34:05.701954 systemd-logind[1986]: Removed session 11. Apr 16 23:34:05.863631 systemd[1]: Started sshd@11-172.31.18.112:22-20.229.252.112:51754.service - OpenSSH per-connection server daemon (20.229.252.112:51754). Apr 16 23:34:06.767790 sshd[6238]: Accepted publickey for core from 20.229.252.112 port 51754 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:34:06.771347 sshd-session[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:34:06.780211 systemd-logind[1986]: New session 12 of user core. Apr 16 23:34:06.786406 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 16 23:34:07.490183 sshd[6241]: Connection closed by 20.229.252.112 port 51754 Apr 16 23:34:07.491513 sshd-session[6238]: pam_unix(sshd:session): session closed for user core Apr 16 23:34:07.502018 systemd-logind[1986]: Session 12 logged out. Waiting for processes to exit. Apr 16 23:34:07.502865 systemd[1]: sshd@11-172.31.18.112:22-20.229.252.112:51754.service: Deactivated successfully. Apr 16 23:34:07.508486 systemd[1]: session-12.scope: Deactivated successfully. Apr 16 23:34:07.514332 systemd-logind[1986]: Removed session 12. Apr 16 23:34:07.669466 systemd[1]: Started sshd@12-172.31.18.112:22-20.229.252.112:51758.service - OpenSSH per-connection server daemon (20.229.252.112:51758). 
Apr 16 23:34:08.578670 sshd[6273]: Accepted publickey for core from 20.229.252.112 port 51758 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:34:08.581603 sshd-session[6273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:34:08.590089 systemd-logind[1986]: New session 13 of user core. Apr 16 23:34:08.600396 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 16 23:34:09.223581 sshd[6276]: Connection closed by 20.229.252.112 port 51758 Apr 16 23:34:09.225411 sshd-session[6273]: pam_unix(sshd:session): session closed for user core Apr 16 23:34:09.233244 systemd[1]: sshd@12-172.31.18.112:22-20.229.252.112:51758.service: Deactivated successfully. Apr 16 23:34:09.238498 systemd[1]: session-13.scope: Deactivated successfully. Apr 16 23:34:09.240904 systemd-logind[1986]: Session 13 logged out. Waiting for processes to exit. Apr 16 23:34:09.246241 systemd-logind[1986]: Removed session 13. Apr 16 23:34:14.416001 systemd[1]: Started sshd@13-172.31.18.112:22-20.229.252.112:51760.service - OpenSSH per-connection server daemon (20.229.252.112:51760). Apr 16 23:34:15.331175 sshd[6302]: Accepted publickey for core from 20.229.252.112 port 51760 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:34:15.333459 sshd-session[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:34:15.341081 systemd-logind[1986]: New session 14 of user core. Apr 16 23:34:15.348424 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 16 23:34:15.981428 sshd[6306]: Connection closed by 20.229.252.112 port 51760 Apr 16 23:34:15.982455 sshd-session[6302]: pam_unix(sshd:session): session closed for user core Apr 16 23:34:15.990552 systemd[1]: sshd@13-172.31.18.112:22-20.229.252.112:51760.service: Deactivated successfully. Apr 16 23:34:15.996515 systemd[1]: session-14.scope: Deactivated successfully. 
Apr 16 23:34:15.998617 systemd-logind[1986]: Session 14 logged out. Waiting for processes to exit. Apr 16 23:34:16.002460 systemd-logind[1986]: Removed session 14. Apr 16 23:34:16.164948 systemd[1]: Started sshd@14-172.31.18.112:22-20.229.252.112:60308.service - OpenSSH per-connection server daemon (20.229.252.112:60308). Apr 16 23:34:17.073584 sshd[6318]: Accepted publickey for core from 20.229.252.112 port 60308 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:34:17.078521 sshd-session[6318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:34:17.093552 systemd-logind[1986]: New session 15 of user core. Apr 16 23:34:17.126618 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 16 23:34:18.045081 sshd[6321]: Connection closed by 20.229.252.112 port 60308 Apr 16 23:34:18.046727 sshd-session[6318]: pam_unix(sshd:session): session closed for user core Apr 16 23:34:18.055017 systemd[1]: sshd@14-172.31.18.112:22-20.229.252.112:60308.service: Deactivated successfully. Apr 16 23:34:18.062288 systemd[1]: session-15.scope: Deactivated successfully. Apr 16 23:34:18.065461 systemd-logind[1986]: Session 15 logged out. Waiting for processes to exit. Apr 16 23:34:18.069856 systemd-logind[1986]: Removed session 15. Apr 16 23:34:18.225758 systemd[1]: Started sshd@15-172.31.18.112:22-20.229.252.112:60312.service - OpenSSH per-connection server daemon (20.229.252.112:60312). Apr 16 23:34:19.122006 sshd[6334]: Accepted publickey for core from 20.229.252.112 port 60312 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:34:19.124936 sshd-session[6334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:34:19.133526 systemd-logind[1986]: New session 16 of user core. Apr 16 23:34:19.144427 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 16 23:34:20.608104 sshd[6337]: Connection closed by 20.229.252.112 port 60312 Apr 16 23:34:20.608329 sshd-session[6334]: pam_unix(sshd:session): session closed for user core Apr 16 23:34:20.619981 systemd[1]: sshd@15-172.31.18.112:22-20.229.252.112:60312.service: Deactivated successfully. Apr 16 23:34:20.627884 systemd[1]: session-16.scope: Deactivated successfully. Apr 16 23:34:20.632523 systemd-logind[1986]: Session 16 logged out. Waiting for processes to exit. Apr 16 23:34:20.638688 systemd-logind[1986]: Removed session 16. Apr 16 23:34:20.793444 systemd[1]: Started sshd@16-172.31.18.112:22-20.229.252.112:60318.service - OpenSSH per-connection server daemon (20.229.252.112:60318). Apr 16 23:34:21.715583 sshd[6410]: Accepted publickey for core from 20.229.252.112 port 60318 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8 Apr 16 23:34:21.719226 sshd-session[6410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 23:34:21.727628 systemd-logind[1986]: New session 17 of user core. Apr 16 23:34:21.734759 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 16 23:34:22.650974 sshd[6415]: Connection closed by 20.229.252.112 port 60318 Apr 16 23:34:22.652181 sshd-session[6410]: pam_unix(sshd:session): session closed for user core Apr 16 23:34:22.660005 systemd[1]: sshd@16-172.31.18.112:22-20.229.252.112:60318.service: Deactivated successfully. Apr 16 23:34:22.664775 systemd[1]: session-17.scope: Deactivated successfully. Apr 16 23:34:22.668482 systemd-logind[1986]: Session 17 logged out. Waiting for processes to exit. Apr 16 23:34:22.671945 systemd-logind[1986]: Removed session 17. Apr 16 23:34:22.831588 systemd[1]: Started sshd@17-172.31.18.112:22-20.229.252.112:60334.service - OpenSSH per-connection server daemon (20.229.252.112:60334). 
Apr 16 23:34:23.718962 sshd[6425]: Accepted publickey for core from 20.229.252.112 port 60334 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:23.722331 sshd-session[6425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:23.730305 systemd-logind[1986]: New session 18 of user core.
Apr 16 23:34:23.741403 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 16 23:34:24.348077 sshd[6428]: Connection closed by 20.229.252.112 port 60334
Apr 16 23:34:24.347959 sshd-session[6425]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:24.357912 systemd[1]: sshd@17-172.31.18.112:22-20.229.252.112:60334.service: Deactivated successfully.
Apr 16 23:34:24.363155 systemd[1]: session-18.scope: Deactivated successfully.
Apr 16 23:34:24.365290 systemd-logind[1986]: Session 18 logged out. Waiting for processes to exit.
Apr 16 23:34:24.368939 systemd-logind[1986]: Removed session 18.
Apr 16 23:34:28.496175 update_engine[1989]: I20260416 23:34:28.496062 1989 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 16 23:34:28.496975 update_engine[1989]: I20260416 23:34:28.496272 1989 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 16 23:34:28.498153 update_engine[1989]: I20260416 23:34:28.497452 1989 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 16 23:34:28.498873 update_engine[1989]: I20260416 23:34:28.498835 1989 omaha_request_params.cc:62] Current group set to stable
Apr 16 23:34:28.499380 update_engine[1989]: I20260416 23:34:28.499333 1989 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 16 23:34:28.499523 update_engine[1989]: I20260416 23:34:28.499492 1989 update_attempter.cc:643] Scheduling an action processor start.
Apr 16 23:34:28.501163 update_engine[1989]: I20260416 23:34:28.499629 1989 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 16 23:34:28.501163 update_engine[1989]: I20260416 23:34:28.499711 1989 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 16 23:34:28.501163 update_engine[1989]: I20260416 23:34:28.499819 1989 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 16 23:34:28.501163 update_engine[1989]: I20260416 23:34:28.499837 1989 omaha_request_action.cc:272] Request:
Apr 16 23:34:28.501163 update_engine[1989]:
Apr 16 23:34:28.501163 update_engine[1989]:
Apr 16 23:34:28.501163 update_engine[1989]:
Apr 16 23:34:28.501163 update_engine[1989]:
Apr 16 23:34:28.501163 update_engine[1989]:
Apr 16 23:34:28.501163 update_engine[1989]:
Apr 16 23:34:28.501163 update_engine[1989]:
Apr 16 23:34:28.501163 update_engine[1989]:
Apr 16 23:34:28.501163 update_engine[1989]: I20260416 23:34:28.499852 1989 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 23:34:28.505690 update_engine[1989]: I20260416 23:34:28.505621 1989 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 23:34:28.507074 update_engine[1989]: I20260416 23:34:28.507011 1989 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 23:34:28.507617 locksmithd[2029]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 16 23:34:28.517003 update_engine[1989]: E20260416 23:34:28.516827 1989 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 23:34:28.517003 update_engine[1989]: I20260416 23:34:28.516958 1989 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 16 23:34:29.533403 systemd[1]: Started sshd@18-172.31.18.112:22-20.229.252.112:59212.service - OpenSSH per-connection server daemon (20.229.252.112:59212).
Apr 16 23:34:30.435925 sshd[6448]: Accepted publickey for core from 20.229.252.112 port 59212 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:30.440386 sshd-session[6448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:30.450678 systemd-logind[1986]: New session 19 of user core.
Apr 16 23:34:30.457402 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 16 23:34:31.096245 sshd[6469]: Connection closed by 20.229.252.112 port 59212
Apr 16 23:34:31.097373 sshd-session[6448]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:31.104815 systemd[1]: sshd@18-172.31.18.112:22-20.229.252.112:59212.service: Deactivated successfully.
Apr 16 23:34:31.111052 systemd[1]: session-19.scope: Deactivated successfully.
Apr 16 23:34:31.112964 systemd-logind[1986]: Session 19 logged out. Waiting for processes to exit.
Apr 16 23:34:31.116237 systemd-logind[1986]: Removed session 19.
Apr 16 23:34:36.277544 systemd[1]: Started sshd@19-172.31.18.112:22-20.229.252.112:41120.service - OpenSSH per-connection server daemon (20.229.252.112:41120).
Apr 16 23:34:37.173038 sshd[6487]: Accepted publickey for core from 20.229.252.112 port 41120 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:37.175688 sshd-session[6487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:37.184196 systemd-logind[1986]: New session 20 of user core.
Apr 16 23:34:37.192436 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 16 23:34:37.813171 sshd[6490]: Connection closed by 20.229.252.112 port 41120
Apr 16 23:34:37.814082 sshd-session[6487]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:37.822581 systemd[1]: sshd@19-172.31.18.112:22-20.229.252.112:41120.service: Deactivated successfully.
Apr 16 23:34:37.826571 systemd[1]: session-20.scope: Deactivated successfully.
Apr 16 23:34:37.829952 systemd-logind[1986]: Session 20 logged out. Waiting for processes to exit.
Apr 16 23:34:37.835571 systemd-logind[1986]: Removed session 20.
Apr 16 23:34:38.494573 update_engine[1989]: I20260416 23:34:38.493844 1989 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 23:34:38.494573 update_engine[1989]: I20260416 23:34:38.493963 1989 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 23:34:38.494573 update_engine[1989]: I20260416 23:34:38.494503 1989 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 23:34:38.501806 update_engine[1989]: E20260416 23:34:38.501482 1989 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 23:34:38.501806 update_engine[1989]: I20260416 23:34:38.501621 1989 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 16 23:34:42.999181 systemd[1]: Started sshd@20-172.31.18.112:22-20.229.252.112:41130.service - OpenSSH per-connection server daemon (20.229.252.112:41130).
Apr 16 23:34:43.951739 sshd[6504]: Accepted publickey for core from 20.229.252.112 port 41130 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:43.956311 sshd-session[6504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:43.981275 systemd-logind[1986]: New session 21 of user core.
Apr 16 23:34:43.984620 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 16 23:34:44.685614 sshd[6507]: Connection closed by 20.229.252.112 port 41130
Apr 16 23:34:44.686958 sshd-session[6504]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:44.695800 systemd[1]: sshd@20-172.31.18.112:22-20.229.252.112:41130.service: Deactivated successfully.
Apr 16 23:34:44.703599 systemd[1]: session-21.scope: Deactivated successfully.
Apr 16 23:34:44.711185 systemd-logind[1986]: Session 21 logged out. Waiting for processes to exit.
Apr 16 23:34:44.714904 systemd-logind[1986]: Removed session 21.
Apr 16 23:34:48.497240 update_engine[1989]: I20260416 23:34:48.496433 1989 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 23:34:48.497240 update_engine[1989]: I20260416 23:34:48.496554 1989 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 23:34:48.497962 update_engine[1989]: I20260416 23:34:48.497097 1989 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 23:34:48.536203 update_engine[1989]: E20260416 23:34:48.535853 1989 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 23:34:48.536203 update_engine[1989]: I20260416 23:34:48.536024 1989 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 16 23:34:49.865558 systemd[1]: Started sshd@21-172.31.18.112:22-20.229.252.112:59306.service - OpenSSH per-connection server daemon (20.229.252.112:59306).
Apr 16 23:34:50.757013 sshd[6568]: Accepted publickey for core from 20.229.252.112 port 59306 ssh2: RSA SHA256:PJgZSKX2ZrLsD3QduM7kDD0uu8YGIZrKXvqEeCH2zd8
Apr 16 23:34:50.759622 sshd-session[6568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 23:34:50.768179 systemd-logind[1986]: New session 22 of user core.
Apr 16 23:34:50.777504 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 16 23:34:51.404101 sshd[6592]: Connection closed by 20.229.252.112 port 59306
Apr 16 23:34:51.405329 sshd-session[6568]: pam_unix(sshd:session): session closed for user core
Apr 16 23:34:51.415294 systemd[1]: sshd@21-172.31.18.112:22-20.229.252.112:59306.service: Deactivated successfully.
Apr 16 23:34:51.421650 systemd[1]: session-22.scope: Deactivated successfully.
Apr 16 23:34:51.428309 systemd-logind[1986]: Session 22 logged out. Waiting for processes to exit.
Apr 16 23:34:51.434964 systemd-logind[1986]: Removed session 22.
Apr 16 23:34:58.500395 update_engine[1989]: I20260416 23:34:58.500283 1989 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 23:34:58.501169 update_engine[1989]: I20260416 23:34:58.500411 1989 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 23:34:58.501169 update_engine[1989]: I20260416 23:34:58.500977 1989 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 23:34:58.502375 update_engine[1989]: E20260416 23:34:58.502198 1989 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 23:34:58.502527 update_engine[1989]: I20260416 23:34:58.502449 1989 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 16 23:34:58.502527 update_engine[1989]: I20260416 23:34:58.502475 1989 omaha_request_action.cc:617] Omaha request response:
Apr 16 23:34:58.502637 update_engine[1989]: E20260416 23:34:58.502590 1989 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 16 23:34:58.502691 update_engine[1989]: I20260416 23:34:58.502632 1989 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 16 23:34:58.502691 update_engine[1989]: I20260416 23:34:58.502647 1989 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 16 23:34:58.502691 update_engine[1989]: I20260416 23:34:58.502661 1989 update_attempter.cc:306] Processing Done.
Apr 16 23:34:58.503144 update_engine[1989]: E20260416 23:34:58.502686 1989 update_attempter.cc:619] Update failed.
Apr 16 23:34:58.503144 update_engine[1989]: I20260416 23:34:58.502704 1989 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 16 23:34:58.503144 update_engine[1989]: I20260416 23:34:58.502718 1989 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 16 23:34:58.503144 update_engine[1989]: I20260416 23:34:58.502734 1989 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 16 23:34:58.503144 update_engine[1989]: I20260416 23:34:58.502848 1989 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 16 23:34:58.503144 update_engine[1989]: I20260416 23:34:58.502898 1989 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 16 23:34:58.503144 update_engine[1989]: I20260416 23:34:58.502916 1989 omaha_request_action.cc:272] Request:
Apr 16 23:34:58.503144 update_engine[1989]:
Apr 16 23:34:58.503144 update_engine[1989]:
Apr 16 23:34:58.503144 update_engine[1989]:
Apr 16 23:34:58.503144 update_engine[1989]:
Apr 16 23:34:58.503144 update_engine[1989]:
Apr 16 23:34:58.503144 update_engine[1989]:
Apr 16 23:34:58.503144 update_engine[1989]: I20260416 23:34:58.502933 1989 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 23:34:58.503144 update_engine[1989]: I20260416 23:34:58.502977 1989 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 23:34:58.504222 update_engine[1989]: I20260416 23:34:58.503628 1989 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 23:34:58.504539 update_engine[1989]: E20260416 23:34:58.504470 1989 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 23:34:58.504643 update_engine[1989]: I20260416 23:34:58.504604 1989 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 16 23:34:58.504643 update_engine[1989]: I20260416 23:34:58.504627 1989 omaha_request_action.cc:617] Omaha request response:
Apr 16 23:34:58.504747 update_engine[1989]: I20260416 23:34:58.504642 1989 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 16 23:34:58.504747 update_engine[1989]: I20260416 23:34:58.504657 1989 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 16 23:34:58.504747 update_engine[1989]: I20260416 23:34:58.504670 1989 update_attempter.cc:306] Processing Done.
Apr 16 23:34:58.504747 update_engine[1989]: I20260416 23:34:58.504684 1989 update_attempter.cc:310] Error event sent.
Apr 16 23:34:58.504747 update_engine[1989]: I20260416 23:34:58.504707 1989 update_check_scheduler.cc:74] Next update check in 46m55s
Apr 16 23:34:58.505308 locksmithd[2029]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 16 23:34:58.505809 locksmithd[2029]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 16 23:35:08.610430 systemd[1]: cri-containerd-d46dfeb88c582b61f4d14000259caefdfb777eaf140b9728734e7db6db60a992.scope: Deactivated successfully.
Apr 16 23:35:08.612415 systemd[1]: cri-containerd-d46dfeb88c582b61f4d14000259caefdfb777eaf140b9728734e7db6db60a992.scope: Consumed 4.028s CPU time, 63.3M memory peak, 64K read from disk.
Apr 16 23:35:08.623178 containerd[2004]: time="2026-04-16T23:35:08.623034132Z" level=info msg="received container exit event container_id:\"d46dfeb88c582b61f4d14000259caefdfb777eaf140b9728734e7db6db60a992\" id:\"d46dfeb88c582b61f4d14000259caefdfb777eaf140b9728734e7db6db60a992\" pid:3187 exit_status:1 exited_at:{seconds:1776382508 nanos:622450404}"
Apr 16 23:35:08.680928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d46dfeb88c582b61f4d14000259caefdfb777eaf140b9728734e7db6db60a992-rootfs.mount: Deactivated successfully.
Apr 16 23:35:08.940439 kubelet[3624]: I0416 23:35:08.940384 3624 scope.go:122] "RemoveContainer" containerID="d46dfeb88c582b61f4d14000259caefdfb777eaf140b9728734e7db6db60a992"
Apr 16 23:35:08.947733 containerd[2004]: time="2026-04-16T23:35:08.947663474Z" level=info msg="CreateContainer within sandbox \"ef55280aed513c2660c6d3c20831c40c6a5ea54b50ebd941c17c0b7b0af30d3f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 16 23:35:08.968091 containerd[2004]: time="2026-04-16T23:35:08.968017862Z" level=info msg="Container f4d565b907e6102f09935375ef036619266586df63e0f9cfa98a7e39f0482f72: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:35:08.992112 containerd[2004]: time="2026-04-16T23:35:08.992030438Z" level=info msg="CreateContainer within sandbox \"ef55280aed513c2660c6d3c20831c40c6a5ea54b50ebd941c17c0b7b0af30d3f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f4d565b907e6102f09935375ef036619266586df63e0f9cfa98a7e39f0482f72\""
Apr 16 23:35:08.992945 containerd[2004]: time="2026-04-16T23:35:08.992852510Z" level=info msg="StartContainer for \"f4d565b907e6102f09935375ef036619266586df63e0f9cfa98a7e39f0482f72\""
Apr 16 23:35:08.995382 containerd[2004]: time="2026-04-16T23:35:08.995277362Z" level=info msg="connecting to shim f4d565b907e6102f09935375ef036619266586df63e0f9cfa98a7e39f0482f72" address="unix:///run/containerd/s/198f5827ff514216da3dbfc48d46d252357d70aa9f3cdfde106234c81236396e" protocol=ttrpc version=3
Apr 16 23:35:09.038407 systemd[1]: Started cri-containerd-f4d565b907e6102f09935375ef036619266586df63e0f9cfa98a7e39f0482f72.scope - libcontainer container f4d565b907e6102f09935375ef036619266586df63e0f9cfa98a7e39f0482f72.
Apr 16 23:35:09.122094 containerd[2004]: time="2026-04-16T23:35:09.122009591Z" level=info msg="StartContainer for \"f4d565b907e6102f09935375ef036619266586df63e0f9cfa98a7e39f0482f72\" returns successfully"
Apr 16 23:35:09.535772 systemd[1]: cri-containerd-a6c131925eeea8eaa3b42ba9a9b7dae6a3e178ef3bd5c374c6dd89320aba994c.scope: Deactivated successfully.
Apr 16 23:35:09.537416 systemd[1]: cri-containerd-a6c131925eeea8eaa3b42ba9a9b7dae6a3e178ef3bd5c374c6dd89320aba994c.scope: Consumed 25.217s CPU time, 110.6M memory peak.
Apr 16 23:35:09.542151 containerd[2004]: time="2026-04-16T23:35:09.541807945Z" level=info msg="received container exit event container_id:\"a6c131925eeea8eaa3b42ba9a9b7dae6a3e178ef3bd5c374c6dd89320aba994c\" id:\"a6c131925eeea8eaa3b42ba9a9b7dae6a3e178ef3bd5c374c6dd89320aba994c\" pid:3964 exit_status:1 exited_at:{seconds:1776382509 nanos:541341265}"
Apr 16 23:35:09.600485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6c131925eeea8eaa3b42ba9a9b7dae6a3e178ef3bd5c374c6dd89320aba994c-rootfs.mount: Deactivated successfully.
Apr 16 23:35:09.955000 kubelet[3624]: I0416 23:35:09.954936 3624 scope.go:122] "RemoveContainer" containerID="a6c131925eeea8eaa3b42ba9a9b7dae6a3e178ef3bd5c374c6dd89320aba994c"
Apr 16 23:35:09.960234 containerd[2004]: time="2026-04-16T23:35:09.959618487Z" level=info msg="CreateContainer within sandbox \"333769e7bc49f59733d45d1221c672280d748c67611e0ee6e83b8b8a6066a858\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 16 23:35:09.973595 containerd[2004]: time="2026-04-16T23:35:09.973227219Z" level=info msg="Container 9b57f1295e136a3ed8769691ca7d576741b0f6dad385fec3b3b5dead416a859f: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:35:09.994783 containerd[2004]: time="2026-04-16T23:35:09.994719975Z" level=info msg="CreateContainer within sandbox \"333769e7bc49f59733d45d1221c672280d748c67611e0ee6e83b8b8a6066a858\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9b57f1295e136a3ed8769691ca7d576741b0f6dad385fec3b3b5dead416a859f\""
Apr 16 23:35:09.995726 containerd[2004]: time="2026-04-16T23:35:09.995650827Z" level=info msg="StartContainer for \"9b57f1295e136a3ed8769691ca7d576741b0f6dad385fec3b3b5dead416a859f\""
Apr 16 23:35:09.998799 containerd[2004]: time="2026-04-16T23:35:09.998709447Z" level=info msg="connecting to shim 9b57f1295e136a3ed8769691ca7d576741b0f6dad385fec3b3b5dead416a859f" address="unix:///run/containerd/s/a6a7a5f95526e530183da584fa947f38583513f2fe42185e2f947ab6e756a7d6" protocol=ttrpc version=3
Apr 16 23:35:10.051636 systemd[1]: Started cri-containerd-9b57f1295e136a3ed8769691ca7d576741b0f6dad385fec3b3b5dead416a859f.scope - libcontainer container 9b57f1295e136a3ed8769691ca7d576741b0f6dad385fec3b3b5dead416a859f.
Apr 16 23:35:10.138425 containerd[2004]: time="2026-04-16T23:35:10.138271896Z" level=info msg="StartContainer for \"9b57f1295e136a3ed8769691ca7d576741b0f6dad385fec3b3b5dead416a859f\" returns successfully"
Apr 16 23:35:13.980457 systemd[1]: cri-containerd-78a2426c8b8f033e7a648c5f5a181d645caf9a0b96f048b36fc6e7ef290df15b.scope: Deactivated successfully.
Apr 16 23:35:13.981585 systemd[1]: cri-containerd-78a2426c8b8f033e7a648c5f5a181d645caf9a0b96f048b36fc6e7ef290df15b.scope: Consumed 3.555s CPU time, 23.3M memory peak.
Apr 16 23:35:13.986564 containerd[2004]: time="2026-04-16T23:35:13.986514391Z" level=info msg="received container exit event container_id:\"78a2426c8b8f033e7a648c5f5a181d645caf9a0b96f048b36fc6e7ef290df15b\" id:\"78a2426c8b8f033e7a648c5f5a181d645caf9a0b96f048b36fc6e7ef290df15b\" pid:3194 exit_status:1 exited_at:{seconds:1776382513 nanos:985807483}"
Apr 16 23:35:14.032469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78a2426c8b8f033e7a648c5f5a181d645caf9a0b96f048b36fc6e7ef290df15b-rootfs.mount: Deactivated successfully.
Apr 16 23:35:14.985299 kubelet[3624]: I0416 23:35:14.985254 3624 scope.go:122] "RemoveContainer" containerID="78a2426c8b8f033e7a648c5f5a181d645caf9a0b96f048b36fc6e7ef290df15b"
Apr 16 23:35:14.989462 containerd[2004]: time="2026-04-16T23:35:14.989405192Z" level=info msg="CreateContainer within sandbox \"74018eed4ad2cc2ce3eaae2e40e4a8f8fe7fdddf4cf22a3d206e830788a582b2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 16 23:35:15.009351 containerd[2004]: time="2026-04-16T23:35:15.009293572Z" level=info msg="Container 9e5b01b2d3e0bf3aeeeae92ccfc8819e1cdf0308d448e3c2e2da399a7c9e4b12: CDI devices from CRI Config.CDIDevices: []"
Apr 16 23:35:15.028446 containerd[2004]: time="2026-04-16T23:35:15.028384252Z" level=info msg="CreateContainer within sandbox \"74018eed4ad2cc2ce3eaae2e40e4a8f8fe7fdddf4cf22a3d206e830788a582b2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9e5b01b2d3e0bf3aeeeae92ccfc8819e1cdf0308d448e3c2e2da399a7c9e4b12\""
Apr 16 23:35:15.030194 containerd[2004]: time="2026-04-16T23:35:15.029284408Z" level=info msg="StartContainer for \"9e5b01b2d3e0bf3aeeeae92ccfc8819e1cdf0308d448e3c2e2da399a7c9e4b12\""
Apr 16 23:35:15.031511 containerd[2004]: time="2026-04-16T23:35:15.031449376Z" level=info msg="connecting to shim 9e5b01b2d3e0bf3aeeeae92ccfc8819e1cdf0308d448e3c2e2da399a7c9e4b12" address="unix:///run/containerd/s/07c5b3f5619ff3d4be4d58904fe079e2b3ef6b158f9e7d5004c5ff7b84d0c977" protocol=ttrpc version=3
Apr 16 23:35:15.111408 systemd[1]: Started cri-containerd-9e5b01b2d3e0bf3aeeeae92ccfc8819e1cdf0308d448e3c2e2da399a7c9e4b12.scope - libcontainer container 9e5b01b2d3e0bf3aeeeae92ccfc8819e1cdf0308d448e3c2e2da399a7c9e4b12.
Apr 16 23:35:15.201170 containerd[2004]: time="2026-04-16T23:35:15.201035465Z" level=info msg="StartContainer for \"9e5b01b2d3e0bf3aeeeae92ccfc8819e1cdf0308d448e3c2e2da399a7c9e4b12\" returns successfully"
Apr 16 23:35:15.583103 kubelet[3624]: E0416 23:35:15.579200 3624 controller.go:251] "Failed to update lease" err="Put \"https://172.31.18.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-112?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 16 23:35:21.575325 systemd[1]: cri-containerd-9b57f1295e136a3ed8769691ca7d576741b0f6dad385fec3b3b5dead416a859f.scope: Deactivated successfully.
Apr 16 23:35:21.577213 containerd[2004]: time="2026-04-16T23:35:21.577059552Z" level=info msg="received container exit event container_id:\"9b57f1295e136a3ed8769691ca7d576741b0f6dad385fec3b3b5dead416a859f\" id:\"9b57f1295e136a3ed8769691ca7d576741b0f6dad385fec3b3b5dead416a859f\" pid:6747 exit_status:1 exited_at:{seconds:1776382521 nanos:575831664}"
Apr 16 23:35:21.629001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b57f1295e136a3ed8769691ca7d576741b0f6dad385fec3b3b5dead416a859f-rootfs.mount: Deactivated successfully.
Apr 16 23:35:22.019456 kubelet[3624]: I0416 23:35:22.019409 3624 scope.go:122] "RemoveContainer" containerID="a6c131925eeea8eaa3b42ba9a9b7dae6a3e178ef3bd5c374c6dd89320aba994c"
Apr 16 23:35:22.020188 kubelet[3624]: I0416 23:35:22.019925 3624 scope.go:122] "RemoveContainer" containerID="9b57f1295e136a3ed8769691ca7d576741b0f6dad385fec3b3b5dead416a859f"
Apr 16 23:35:22.020289 kubelet[3624]: E0416 23:35:22.020202 3624 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-pl4hx_tigera-operator(446787a3-e030-4ec6-9916-793540a76cf3)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-pl4hx" podUID="446787a3-e030-4ec6-9916-793540a76cf3"
Apr 16 23:35:22.024626 containerd[2004]: time="2026-04-16T23:35:22.024572363Z" level=info msg="RemoveContainer for \"a6c131925eeea8eaa3b42ba9a9b7dae6a3e178ef3bd5c374c6dd89320aba994c\""
Apr 16 23:35:22.034148 containerd[2004]: time="2026-04-16T23:35:22.034070471Z" level=info msg="RemoveContainer for \"a6c131925eeea8eaa3b42ba9a9b7dae6a3e178ef3bd5c374c6dd89320aba994c\" returns successfully"