Mar 14 00:13:29.261103 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 14 00:13:29.261153 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 13 22:32:52 -00 2026
Mar 14 00:13:29.261179 kernel: KASLR disabled due to lack of seed
Mar 14 00:13:29.261196 kernel: efi: EFI v2.7 by EDK II
Mar 14 00:13:29.261212 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Mar 14 00:13:29.261228 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:13:29.261246 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 14 00:13:29.261262 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 14 00:13:29.261279 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 14 00:13:29.261295 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 14 00:13:29.261315 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 14 00:13:29.261331 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 14 00:13:29.261348 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 14 00:13:29.261364 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 14 00:13:29.261383 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 14 00:13:29.261404 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 14 00:13:29.261422 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 14 00:13:29.261438 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 14 00:13:29.261456 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 14 00:13:29.261472 kernel: printk: bootconsole [uart0] enabled
Mar 14 00:13:29.261489 kernel: NUMA: Failed to initialise from firmware
Mar 14 00:13:29.261506 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 14 00:13:29.261523 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Mar 14 00:13:29.261540 kernel: Zone ranges:
Mar 14 00:13:29.261557 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Mar 14 00:13:29.261573 kernel:   DMA32    empty
Mar 14 00:13:29.261594 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 14 00:13:29.261611 kernel: Movable zone start for each node
Mar 14 00:13:29.261628 kernel: Early memory node ranges
Mar 14 00:13:29.261644 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 14 00:13:29.261661 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 14 00:13:29.261678 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Mar 14 00:13:29.261695 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 14 00:13:29.261712 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 14 00:13:29.261728 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 14 00:13:29.261745 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 14 00:13:29.261762 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 14 00:13:29.261779 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 14 00:13:29.261800 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 14 00:13:29.261818 kernel: psci: probing for conduit method from ACPI.
Mar 14 00:13:29.261841 kernel: psci: PSCIv1.0 detected in firmware.
Mar 14 00:13:29.261859 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 14 00:13:29.261877 kernel: psci: Trusted OS migration not required
Mar 14 00:13:29.261898 kernel: psci: SMC Calling Convention v1.1
Mar 14 00:13:29.261917 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Mar 14 00:13:29.261935 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 14 00:13:29.261954 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 14 00:13:29.261972 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 14 00:13:29.261989 kernel: Detected PIPT I-cache on CPU0
Mar 14 00:13:29.262007 kernel: CPU features: detected: GIC system register CPU interface
Mar 14 00:13:29.262024 kernel: CPU features: detected: Spectre-v2
Mar 14 00:13:29.262042 kernel: CPU features: detected: Spectre-v3a
Mar 14 00:13:29.262059 kernel: CPU features: detected: Spectre-BHB
Mar 14 00:13:29.263283 kernel: CPU features: detected: ARM erratum 1742098
Mar 14 00:13:29.263325 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 14 00:13:29.263344 kernel: alternatives: applying boot alternatives
Mar 14 00:13:29.263366 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:13:29.263384 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:13:29.263403 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:13:29.263421 kernel: Fallback order for Node 0: 0
Mar 14 00:13:29.263439 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Mar 14 00:13:29.263457 kernel: Policy zone: Normal
Mar 14 00:13:29.263475 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:13:29.263493 kernel: software IO TLB: area num 2.
Mar 14 00:13:29.263510 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 14 00:13:29.263534 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Mar 14 00:13:29.263553 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:13:29.263571 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:13:29.263590 kernel: rcu: 	RCU event tracing is enabled.
Mar 14 00:13:29.263608 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:13:29.263626 kernel: 	Trampoline variant of Tasks RCU enabled.
Mar 14 00:13:29.263644 kernel: 	Tracing variant of Tasks RCU enabled.
Mar 14 00:13:29.263662 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:13:29.263680 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:13:29.263699 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 14 00:13:29.263717 kernel: GICv3: 96 SPIs implemented
Mar 14 00:13:29.263739 kernel: GICv3: 0 Extended SPIs implemented
Mar 14 00:13:29.263757 kernel: Root IRQ handler: gic_handle_irq
Mar 14 00:13:29.263774 kernel: GICv3: GICv3 features: 16 PPIs
Mar 14 00:13:29.263793 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 14 00:13:29.263810 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 14 00:13:29.263828 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 14 00:13:29.263847 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Mar 14 00:13:29.263865 kernel: GICv3: using LPI property table @0x00000004000d0000
Mar 14 00:13:29.263883 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 14 00:13:29.263901 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Mar 14 00:13:29.263919 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:13:29.263937 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 14 00:13:29.263960 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 14 00:13:29.263979 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 14 00:13:29.263998 kernel: Console: colour dummy device 80x25
Mar 14 00:13:29.264016 kernel: printk: console [tty1] enabled
Mar 14 00:13:29.264034 kernel: ACPI: Core revision 20230628
Mar 14 00:13:29.264053 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 14 00:13:29.264094 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:13:29.264119 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:13:29.264138 kernel: landlock: Up and running.
Mar 14 00:13:29.264162 kernel: SELinux:  Initializing.
Mar 14 00:13:29.264181 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:13:29.264200 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:13:29.264219 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:13:29.264237 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:13:29.264255 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:13:29.264274 kernel: rcu: 	Max phase no-delay instances is 400.
Mar 14 00:13:29.264293 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 14 00:13:29.264311 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 14 00:13:29.264335 kernel: Remapping and enabling EFI services.
Mar 14 00:13:29.264354 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:13:29.264373 kernel: Detected PIPT I-cache on CPU1
Mar 14 00:13:29.264393 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 14 00:13:29.264412 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Mar 14 00:13:29.264431 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 14 00:13:29.264450 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:13:29.264469 kernel: SMP: Total of 2 processors activated.
Mar 14 00:13:29.264510 kernel: CPU features: detected: 32-bit EL0 Support
Mar 14 00:13:29.264538 kernel: CPU features: detected: 32-bit EL1 Support
Mar 14 00:13:29.264558 kernel: CPU features: detected: CRC32 instructions
Mar 14 00:13:29.264576 kernel: CPU: All CPU(s) started at EL1
Mar 14 00:13:29.264607 kernel: alternatives: applying system-wide alternatives
Mar 14 00:13:29.264632 kernel: devtmpfs: initialized
Mar 14 00:13:29.264653 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:13:29.264674 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:13:29.264694 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:13:29.264716 kernel: SMBIOS 3.0.0 present.
Mar 14 00:13:29.264740 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 14 00:13:29.264761 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:13:29.264782 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 14 00:13:29.264801 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 14 00:13:29.264821 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 14 00:13:29.264840 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:13:29.264859 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Mar 14 00:13:29.264879 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:13:29.264902 kernel: cpuidle: using governor menu
Mar 14 00:13:29.264921 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 14 00:13:29.264941 kernel: ASID allocator initialised with 65536 entries
Mar 14 00:13:29.264960 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:13:29.264979 kernel: Serial: AMBA PL011 UART driver
Mar 14 00:13:29.264998 kernel: Modules: 17488 pages in range for non-PLT usage
Mar 14 00:13:29.265017 kernel: Modules: 509008 pages in range for PLT usage
Mar 14 00:13:29.265038 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:13:29.265057 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:13:29.265193 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 14 00:13:29.265235 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 14 00:13:29.265282 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:13:29.265321 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:13:29.265342 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 14 00:13:29.265362 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 14 00:13:29.265381 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:13:29.265400 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:13:29.265419 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:13:29.265445 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:13:29.265464 kernel: ACPI: Interpreter enabled
Mar 14 00:13:29.265483 kernel: ACPI: Using GIC for interrupt routing
Mar 14 00:13:29.265501 kernel: ACPI: MCFG table detected, 1 entries
Mar 14 00:13:29.265520 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Mar 14 00:13:29.265859 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:13:29.266108 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 14 00:13:29.266326 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 14 00:13:29.266538 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Mar 14 00:13:29.266742 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Mar 14 00:13:29.266768 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io  0x0000-0xffff window]
Mar 14 00:13:29.266787 kernel: acpiphp: Slot [1] registered
Mar 14 00:13:29.266807 kernel: acpiphp: Slot [2] registered
Mar 14 00:13:29.266826 kernel: acpiphp: Slot [3] registered
Mar 14 00:13:29.266844 kernel: acpiphp: Slot [4] registered
Mar 14 00:13:29.266863 kernel: acpiphp: Slot [5] registered
Mar 14 00:13:29.266887 kernel: acpiphp: Slot [6] registered
Mar 14 00:13:29.266906 kernel: acpiphp: Slot [7] registered
Mar 14 00:13:29.266924 kernel: acpiphp: Slot [8] registered
Mar 14 00:13:29.266943 kernel: acpiphp: Slot [9] registered
Mar 14 00:13:29.266961 kernel: acpiphp: Slot [10] registered
Mar 14 00:13:29.266980 kernel: acpiphp: Slot [11] registered
Mar 14 00:13:29.266999 kernel: acpiphp: Slot [12] registered
Mar 14 00:13:29.267017 kernel: acpiphp: Slot [13] registered
Mar 14 00:13:29.267036 kernel: acpiphp: Slot [14] registered
Mar 14 00:13:29.267054 kernel: acpiphp: Slot [15] registered
Mar 14 00:13:29.267097 kernel: acpiphp: Slot [16] registered
Mar 14 00:13:29.267118 kernel: acpiphp: Slot [17] registered
Mar 14 00:13:29.267137 kernel: acpiphp: Slot [18] registered
Mar 14 00:13:29.267156 kernel: acpiphp: Slot [19] registered
Mar 14 00:13:29.267174 kernel: acpiphp: Slot [20] registered
Mar 14 00:13:29.267193 kernel: acpiphp: Slot [21] registered
Mar 14 00:13:29.267212 kernel: acpiphp: Slot [22] registered
Mar 14 00:13:29.267230 kernel: acpiphp: Slot [23] registered
Mar 14 00:13:29.267248 kernel: acpiphp: Slot [24] registered
Mar 14 00:13:29.267273 kernel: acpiphp: Slot [25] registered
Mar 14 00:13:29.267292 kernel: acpiphp: Slot [26] registered
Mar 14 00:13:29.267310 kernel: acpiphp: Slot [27] registered
Mar 14 00:13:29.267329 kernel: acpiphp: Slot [28] registered
Mar 14 00:13:29.267347 kernel: acpiphp: Slot [29] registered
Mar 14 00:13:29.267365 kernel: acpiphp: Slot [30] registered
Mar 14 00:13:29.267384 kernel: acpiphp: Slot [31] registered
Mar 14 00:13:29.267403 kernel: PCI host bridge to bus 0000:00
Mar 14 00:13:29.267609 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 14 00:13:29.267812 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Mar 14 00:13:29.268005 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 14 00:13:29.268226 kernel: pci_bus 0000:00: root bus resource [bus 00]
Mar 14 00:13:29.268541 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 14 00:13:29.268798 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 14 00:13:29.269024 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 14 00:13:29.269401 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 14 00:13:29.269617 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 14 00:13:29.269827 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 14 00:13:29.270059 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 14 00:13:29.270336 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 14 00:13:29.270552 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 14 00:13:29.270761 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 14 00:13:29.270977 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 14 00:13:29.271227 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 14 00:13:29.271414 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Mar 14 00:13:29.271600 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 14 00:13:29.271625 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 14 00:13:29.271645 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 14 00:13:29.271664 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 14 00:13:29.271683 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 14 00:13:29.271708 kernel: iommu: Default domain type: Translated
Mar 14 00:13:29.271727 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 14 00:13:29.271746 kernel: efivars: Registered efivars operations
Mar 14 00:13:29.271765 kernel: vgaarb: loaded
Mar 14 00:13:29.271783 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 14 00:13:29.271802 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:13:29.271820 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:13:29.271839 kernel: pnp: PnP ACPI init
Mar 14 00:13:29.272051 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 14 00:13:29.272121 kernel: pnp: PnP ACPI: found 1 devices
Mar 14 00:13:29.272142 kernel: NET: Registered PF_INET protocol family
Mar 14 00:13:29.272162 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:13:29.272181 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:13:29.272200 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:13:29.272219 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:13:29.272239 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 14 00:13:29.272258 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 14 00:13:29.272283 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:13:29.272302 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:13:29.272321 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:13:29.272340 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:13:29.272359 kernel: kvm [1]: HYP mode not available
Mar 14 00:13:29.272377 kernel: Initialise system trusted keyrings
Mar 14 00:13:29.272396 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 14 00:13:29.272415 kernel: Key type asymmetric registered
Mar 14 00:13:29.272433 kernel: Asymmetric key parser 'x509' registered
Mar 14 00:13:29.272456 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 14 00:13:29.272476 kernel: io scheduler mq-deadline registered
Mar 14 00:13:29.272515 kernel: io scheduler kyber registered
Mar 14 00:13:29.272534 kernel: io scheduler bfq registered
Mar 14 00:13:29.272790 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 14 00:13:29.272820 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 14 00:13:29.272840 kernel: ACPI: button: Power Button [PWRB]
Mar 14 00:13:29.272860 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 14 00:13:29.272878 kernel: ACPI: button: Sleep Button [SLPB]
Mar 14 00:13:29.272904 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:13:29.272924 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 14 00:13:29.273179 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 14 00:13:29.273206 kernel: printk: console [ttyS0] disabled
Mar 14 00:13:29.273225 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 14 00:13:29.273244 kernel: printk: console [ttyS0] enabled
Mar 14 00:13:29.273263 kernel: printk: bootconsole [uart0] disabled
Mar 14 00:13:29.273282 kernel: thunder_xcv, ver 1.0
Mar 14 00:13:29.273301 kernel: thunder_bgx, ver 1.0
Mar 14 00:13:29.273326 kernel: nicpf, ver 1.0
Mar 14 00:13:29.273344 kernel: nicvf, ver 1.0
Mar 14 00:13:29.273566 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 14 00:13:29.273765 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-14T00:13:28 UTC (1773447208)
Mar 14 00:13:29.273791 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 14 00:13:29.273810 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 14 00:13:29.273829 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 14 00:13:29.273848 kernel: watchdog: Hard watchdog permanently disabled
Mar 14 00:13:29.273872 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:13:29.273891 kernel: Segment Routing with IPv6
Mar 14 00:13:29.273910 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:13:29.273929 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:13:29.273948 kernel: Key type dns_resolver registered
Mar 14 00:13:29.273967 kernel: registered taskstats version 1
Mar 14 00:13:29.273986 kernel: Loading compiled-in X.509 certificates
Mar 14 00:13:29.274006 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 16e13a4d63c54048487d2b18c824fa4694264505'
Mar 14 00:13:29.274025 kernel: Key type .fscrypt registered
Mar 14 00:13:29.274048 kernel: Key type fscrypt-provisioning registered
Mar 14 00:13:29.274067 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:13:29.274108 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:13:29.274128 kernel: ima: No architecture policies found
Mar 14 00:13:29.274147 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 14 00:13:29.274166 kernel: clk: Disabling unused clocks
Mar 14 00:13:29.274185 kernel: Freeing unused kernel memory: 39424K
Mar 14 00:13:29.274203 kernel: Run /init as init process
Mar 14 00:13:29.274222 kernel:   with arguments:
Mar 14 00:13:29.274246 kernel:     /init
Mar 14 00:13:29.274265 kernel:   with environment:
Mar 14 00:13:29.274283 kernel:     HOME=/
Mar 14 00:13:29.274302 kernel:     TERM=linux
Mar 14 00:13:29.274325 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:13:29.274349 systemd[1]: Detected virtualization amazon.
Mar 14 00:13:29.274370 systemd[1]: Detected architecture arm64.
Mar 14 00:13:29.274389 systemd[1]: Running in initrd.
Mar 14 00:13:29.274414 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:13:29.274434 systemd[1]: Hostname set to .
Mar 14 00:13:29.274455 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:13:29.274475 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:13:29.274495 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:13:29.274516 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:13:29.274537 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:13:29.274557 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:13:29.274583 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:13:29.274604 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:13:29.274627 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:13:29.274648 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:13:29.274668 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:13:29.274689 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:13:29.274713 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:13:29.274734 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:13:29.274755 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:13:29.274775 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:13:29.274795 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:13:29.274816 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:13:29.274837 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:13:29.274857 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:13:29.274878 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:13:29.274903 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:13:29.274925 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:13:29.274945 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:13:29.274966 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:13:29.274987 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:13:29.275009 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:13:29.275060 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:13:29.275107 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:13:29.275129 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:13:29.275158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:29.275180 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:13:29.275202 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:13:29.275224 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:13:29.275295 systemd-journald[252]: Collecting audit messages is disabled.
Mar 14 00:13:29.275346 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:13:29.275369 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:13:29.275390 systemd-journald[252]: Journal started
Mar 14 00:13:29.275436 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2d456ddcbfe68982fc5f650a3e8ec5) is 8.0M, max 75.3M, 67.3M free.
Mar 14 00:13:29.225684 systemd-modules-load[253]: Inserted module 'overlay'
Mar 14 00:13:29.279128 kernel: Bridge firewalling registered
Mar 14 00:13:29.283486 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:13:29.281758 systemd-modules-load[253]: Inserted module 'br_netfilter'
Mar 14 00:13:29.296139 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:13:29.299326 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:29.316397 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:29.324384 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:13:29.330327 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:13:29.331289 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:13:29.348834 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:13:29.384592 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:13:29.396649 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:29.399833 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:13:29.403682 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:13:29.420319 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:13:29.434853 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:13:29.462697 dracut-cmdline[289]: dracut-dracut-053
Mar 14 00:13:29.471567 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:13:29.520873 systemd-resolved[292]: Positive Trust Anchors:
Mar 14 00:13:29.520908 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:13:29.520971 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:13:29.627098 kernel: SCSI subsystem initialized
Mar 14 00:13:29.634111 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:13:29.647116 kernel: iscsi: registered transport (tcp)
Mar 14 00:13:29.669356 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:13:29.669430 kernel: QLogic iSCSI HBA Driver
Mar 14 00:13:29.759115 kernel: random: crng init done
Mar 14 00:13:29.759472 systemd-resolved[292]: Defaulting to hostname 'linux'.
Mar 14 00:13:29.763293 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:13:29.768360 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:13:29.794149 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:13:29.805532 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:13:29.841103 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:13:29.841190 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:13:29.843112 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:13:29.924123 kernel: raid6: neonx8 gen() 6758 MB/s
Mar 14 00:13:29.926107 kernel: raid6: neonx4 gen() 6562 MB/s
Mar 14 00:13:29.943106 kernel: raid6: neonx2 gen() 5478 MB/s
Mar 14 00:13:29.960106 kernel: raid6: neonx1 gen() 3965 MB/s
Mar 14 00:13:29.977102 kernel: raid6: int64x8 gen() 3810 MB/s
Mar 14 00:13:29.994106 kernel: raid6: int64x4 gen() 3692 MB/s
Mar 14 00:13:30.011107 kernel: raid6: int64x2 gen() 3603 MB/s
Mar 14 00:13:30.029170 kernel: raid6: int64x1 gen() 2753 MB/s
Mar 14 00:13:30.029203 kernel: raid6: using algorithm neonx8 gen() 6758 MB/s
Mar 14 00:13:30.048110 kernel: raid6: .... xor() 4837 MB/s, rmw enabled
Mar 14 00:13:30.048159 kernel: raid6: using neon recovery algorithm
Mar 14 00:13:30.056112 kernel: xor: measuring software checksum speed
Mar 14 00:13:30.058392 kernel: 8regs : 9982 MB/sec
Mar 14 00:13:30.058425 kernel: 32regs : 11909 MB/sec
Mar 14 00:13:30.059695 kernel: arm64_neon : 9564 MB/sec
Mar 14 00:13:30.059736 kernel: xor: using function: 32regs (11909 MB/sec)
Mar 14 00:13:30.146130 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:13:30.165016 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:13:30.179454 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:13:30.214240 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Mar 14 00:13:30.222357 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:13:30.246348 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:13:30.273807 dracut-pre-trigger[486]: rd.md=0: removing MD RAID activation
Mar 14 00:13:30.329414 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:13:30.345450 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:13:30.456594 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:13:30.471492 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:13:30.507130 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:13:30.513572 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:13:30.519280 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:13:30.521955 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:13:30.536398 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:13:30.582435 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:13:30.658637 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 14 00:13:30.658715 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 14 00:13:30.664429 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:13:30.665206 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:30.678000 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 14 00:13:30.678357 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 14 00:13:30.675519 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:30.685623 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:13:30.685909 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:30.688640 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:30.702519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:30.711477 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:84:0b:ca:21:4b
Mar 14 00:13:30.713807 (udev-worker)[541]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:13:30.729100 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 14 00:13:30.731242 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 14 00:13:30.746096 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 14 00:13:30.746012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:30.758417 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:30.767345 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:13:30.767387 kernel: GPT:9289727 != 33554431
Mar 14 00:13:30.767413 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:13:30.771110 kernel: GPT:9289727 != 33554431
Mar 14 00:13:30.771175 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:13:30.771201 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:30.804143 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:30.907127 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (519)
Mar 14 00:13:30.919886 kernel: BTRFS: device fsid df62721e-ebc0-40bc-8956-1227b067a773 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (531)
Mar 14 00:13:30.989682 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 14 00:13:31.010603 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 14 00:13:31.040015 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 14 00:13:31.057978 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 14 00:13:31.058951 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 14 00:13:31.075422 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:13:31.089383 disk-uuid[665]: Primary Header is updated.
Mar 14 00:13:31.089383 disk-uuid[665]: Secondary Entries is updated.
Mar 14 00:13:31.089383 disk-uuid[665]: Secondary Header is updated.
Mar 14 00:13:31.107111 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:31.119125 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:31.133118 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:32.135148 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:32.137122 disk-uuid[666]: The operation has completed successfully.
Mar 14 00:13:32.314594 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:13:32.316694 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:13:32.377400 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:13:32.396217 sh[1007]: Success
Mar 14 00:13:32.424122 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 14 00:13:32.559479 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:13:32.568251 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:13:32.573041 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:13:32.621756 kernel: BTRFS info (device dm-0): first mount of filesystem df62721e-ebc0-40bc-8956-1227b067a773
Mar 14 00:13:32.621819 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:32.621857 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:13:32.623701 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:13:32.625129 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:13:32.764121 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 14 00:13:32.768380 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:13:32.770164 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:13:32.785448 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:13:32.793401 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:13:32.837931 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:32.838004 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:32.838031 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:13:32.861158 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:13:32.885023 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:13:32.888112 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:32.901163 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:13:32.911441 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:13:32.982167 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:13:32.997375 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:13:33.057895 systemd-networkd[1199]: lo: Link UP
Mar 14 00:13:33.057917 systemd-networkd[1199]: lo: Gained carrier
Mar 14 00:13:33.063328 systemd-networkd[1199]: Enumeration completed
Mar 14 00:13:33.063513 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:13:33.065223 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:33.065230 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:13:33.068242 systemd[1]: Reached target network.target - Network.
Mar 14 00:13:33.094025 systemd-networkd[1199]: eth0: Link UP
Mar 14 00:13:33.094038 systemd-networkd[1199]: eth0: Gained carrier
Mar 14 00:13:33.094056 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:33.113167 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.28.2/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 14 00:13:33.399005 ignition[1146]: Ignition 2.19.0
Mar 14 00:13:33.399042 ignition[1146]: Stage: fetch-offline
Mar 14 00:13:33.403274 ignition[1146]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:33.403315 ignition[1146]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:33.408146 ignition[1146]: Ignition finished successfully
Mar 14 00:13:33.410577 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:13:33.421560 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:13:33.448213 ignition[1209]: Ignition 2.19.0
Mar 14 00:13:33.448733 ignition[1209]: Stage: fetch
Mar 14 00:13:33.449447 ignition[1209]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:33.449472 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:33.449647 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:33.477543 ignition[1209]: PUT result: OK
Mar 14 00:13:33.480952 ignition[1209]: parsed url from cmdline: ""
Mar 14 00:13:33.480967 ignition[1209]: no config URL provided
Mar 14 00:13:33.480982 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:13:33.481007 ignition[1209]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:13:33.481038 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:33.482972 ignition[1209]: PUT result: OK
Mar 14 00:13:33.487232 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 14 00:13:33.493762 ignition[1209]: GET result: OK
Mar 14 00:13:33.494170 ignition[1209]: parsing config with SHA512: e8dda7a7f7470583120e4e05b608e95daa98c5827172c31f80667a14a4900c22c19e4ef8eff27162539395f0084a46ef218cbdb8b7f2ceda0e1fd3d53f88fc34
Mar 14 00:13:33.502907 unknown[1209]: fetched base config from "system"
Mar 14 00:13:33.502934 unknown[1209]: fetched base config from "system"
Mar 14 00:13:33.502949 unknown[1209]: fetched user config from "aws"
Mar 14 00:13:33.510722 ignition[1209]: fetch: fetch complete
Mar 14 00:13:33.512373 ignition[1209]: fetch: fetch passed
Mar 14 00:13:33.514058 ignition[1209]: Ignition finished successfully
Mar 14 00:13:33.520371 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:13:33.530556 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:13:33.566515 ignition[1215]: Ignition 2.19.0
Mar 14 00:13:33.566543 ignition[1215]: Stage: kargs
Mar 14 00:13:33.569193 ignition[1215]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:33.569220 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:33.569372 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:33.574573 ignition[1215]: PUT result: OK
Mar 14 00:13:33.581486 ignition[1215]: kargs: kargs passed
Mar 14 00:13:33.581588 ignition[1215]: Ignition finished successfully
Mar 14 00:13:33.586279 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:13:33.596591 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:13:33.624122 ignition[1221]: Ignition 2.19.0
Mar 14 00:13:33.624143 ignition[1221]: Stage: disks
Mar 14 00:13:33.624775 ignition[1221]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:33.624800 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:33.624951 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:33.636153 ignition[1221]: PUT result: OK
Mar 14 00:13:33.642206 ignition[1221]: disks: disks passed
Mar 14 00:13:33.642360 ignition[1221]: Ignition finished successfully
Mar 14 00:13:33.645013 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:13:33.648965 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:13:33.652276 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:13:33.663526 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:13:33.665875 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:13:33.668269 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:13:33.680303 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:13:33.739768 systemd-fsck[1229]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 14 00:13:33.746154 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:13:33.760411 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:13:33.852109 kernel: EXT4-fs (nvme0n1p9): mounted filesystem af566013-4e57-4e7f-9689-a2e15898536d r/w with ordered data mode. Quota mode: none.
Mar 14 00:13:33.854243 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:13:33.858254 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:13:33.877299 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:13:33.886137 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:13:33.892550 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:13:33.892822 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:13:33.892874 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:13:33.913115 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1248)
Mar 14 00:13:33.917676 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:33.917739 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:33.919772 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:13:33.923657 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:13:33.937469 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:13:33.954106 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:13:33.956530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:13:34.338204 initrd-setup-root[1272]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:13:34.369298 initrd-setup-root[1279]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:13:34.390579 initrd-setup-root[1286]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:13:34.400636 initrd-setup-root[1293]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:13:34.793718 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:13:34.810298 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:13:34.814169 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:13:34.831753 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:13:34.834578 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:34.883797 ignition[1361]: INFO : Ignition 2.19.0
Mar 14 00:13:34.883797 ignition[1361]: INFO : Stage: mount
Mar 14 00:13:34.890267 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:34.890267 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:34.890267 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:34.891723 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:13:34.904544 ignition[1361]: INFO : PUT result: OK
Mar 14 00:13:34.909266 ignition[1361]: INFO : mount: mount passed
Mar 14 00:13:34.912352 ignition[1361]: INFO : Ignition finished successfully
Mar 14 00:13:34.911281 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:13:34.926250 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:13:34.958457 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:13:34.972219 systemd-networkd[1199]: eth0: Gained IPv6LL
Mar 14 00:13:34.985125 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1372)
Mar 14 00:13:34.989879 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:34.989930 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:34.989958 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:13:34.999117 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:13:35.002803 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:13:35.037287 ignition[1389]: INFO : Ignition 2.19.0
Mar 14 00:13:35.037287 ignition[1389]: INFO : Stage: files
Mar 14 00:13:35.042174 ignition[1389]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:35.042174 ignition[1389]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:35.042174 ignition[1389]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:35.050417 ignition[1389]: INFO : PUT result: OK
Mar 14 00:13:35.054508 ignition[1389]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:13:35.070067 ignition[1389]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:13:35.070067 ignition[1389]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:13:35.114097 ignition[1389]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:13:35.120800 ignition[1389]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:13:35.124359 unknown[1389]: wrote ssh authorized keys file for user: core
Mar 14 00:13:35.126771 ignition[1389]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:13:35.140825 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 14 00:13:35.140825 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 14 00:13:35.140825 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 14 00:13:35.140825 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 14 00:13:35.237114 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 14 00:13:35.494091 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 14 00:13:35.494091 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:13:35.504457 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:13:35.504457 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:13:35.504457 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:13:35.504457 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:13:35.504457 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:13:35.504457 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:13:35.504457 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:13:35.504457 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:13:35.504457 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:13:35.504457 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:13:35.504457 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:13:35.504457 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:13:35.504457 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 14 00:13:50.958052 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 14 00:13:51.424613 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:13:51.424613 ignition[1389]: INFO : files: op(c): [started] processing unit "containerd.service"
Mar 14 00:13:51.433189 ignition[1389]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 14 00:13:51.433189 ignition[1389]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 14 00:13:51.433189 ignition[1389]: INFO : files: op(c): [finished] processing unit "containerd.service"
Mar 14 00:13:51.433189 ignition[1389]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Mar 14 00:13:51.433189 ignition[1389]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:13:51.433189 ignition[1389]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:13:51.433189 ignition[1389]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 14 00:13:51.433189 ignition[1389]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:13:51.433189 ignition[1389]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:13:51.433189 ignition[1389]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:13:51.433189 ignition[1389]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:13:51.433189 ignition[1389]: INFO : files: files passed
Mar 14 00:13:51.433189 ignition[1389]: INFO : Ignition finished successfully
Mar 14 00:13:51.475273 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:13:51.491412 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:13:51.502399 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:13:51.511883 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:13:51.512766 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:13:51.547607 initrd-setup-root-after-ignition[1417]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:51.551904 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:51.555979 initrd-setup-root-after-ignition[1417]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:51.562188 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:13:51.568923 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:13:51.580411 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:13:51.630978 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:13:51.631459 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:13:51.640819 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:13:51.643212 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:13:51.645608 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:13:51.659322 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:13:51.690130 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:13:51.703348 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:13:51.729487 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:13:51.733228 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:13:51.738919 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:13:51.742923 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:13:51.743256 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:13:51.750462 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:13:51.758068 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:13:51.760457 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:13:51.767413 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:13:51.772486 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:13:51.777541 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:13:51.782595 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:13:51.785784 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:13:51.788904 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:13:51.793898 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:13:51.796032 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:13:51.796316 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:13:51.803289 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:13:51.814241 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:13:51.817853 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:13:51.819981 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:13:51.823321 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:13:51.823544 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:13:51.835795 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:13:51.836237 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:13:51.844636 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:13:51.845039 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:13:51.864529 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:13:51.867012 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:13:51.867375 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:13:51.882385 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:13:51.884801 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:13:51.888246 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:13:51.895649 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:13:51.895909 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:13:51.920940 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:13:51.921188 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:13:51.938613 ignition[1441]: INFO : Ignition 2.19.0
Mar 14 00:13:51.938613 ignition[1441]: INFO : Stage: umount
Mar 14 00:13:51.938613 ignition[1441]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:51.938613 ignition[1441]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:51.938613 ignition[1441]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:51.952556 ignition[1441]: INFO : PUT result: OK
Mar 14 00:13:51.960131 ignition[1441]: INFO : umount: umount passed
Mar 14 00:13:51.960131 ignition[1441]: INFO : Ignition finished successfully
Mar 14 00:13:51.968773 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:13:51.970291 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 14 00:13:51.970506 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 14 00:13:51.982697 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 14 00:13:51.985039 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 14 00:13:51.991810 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 14 00:13:51.992143 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 14 00:13:51.999018 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 14 00:13:51.999137 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 14 00:13:52.003577 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 14 00:13:52.003855 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 14 00:13:52.008157 systemd[1]: Stopped target network.target - Network. Mar 14 00:13:52.012411 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 14 00:13:52.012504 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:13:52.015140 systemd[1]: Stopped target paths.target - Path Units. Mar 14 00:13:52.017344 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 14 00:13:52.019483 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:13:52.022584 systemd[1]: Stopped target slices.target - Slice Units. Mar 14 00:13:52.024850 systemd[1]: Stopped target sockets.target - Socket Units. Mar 14 00:13:52.027116 systemd[1]: iscsid.socket: Deactivated successfully. Mar 14 00:13:52.027629 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:13:52.031575 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 14 00:13:52.031653 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:13:52.036145 systemd[1]: ignition-setup.service: Deactivated successfully. 
Mar 14 00:13:52.036248 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 14 00:13:52.040403 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 14 00:13:52.040486 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 14 00:13:52.044605 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 14 00:13:52.044690 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 14 00:13:52.049825 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 14 00:13:52.056506 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 14 00:13:52.056554 systemd-networkd[1199]: eth0: DHCPv6 lease lost Mar 14 00:13:52.070305 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 14 00:13:52.070521 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 14 00:13:52.075134 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 14 00:13:52.075334 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 14 00:13:52.087811 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 14 00:13:52.087917 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:13:52.106824 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 14 00:13:52.108735 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 14 00:13:52.108855 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:13:52.116675 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 00:13:52.116770 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:13:52.120712 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 14 00:13:52.120797 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Mar 14 00:13:52.123306 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 14 00:13:52.123384 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:13:52.126509 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:13:52.179294 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 14 00:13:52.180770 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:13:52.184665 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 14 00:13:52.184750 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 14 00:13:52.188451 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 14 00:13:52.189585 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:13:52.194356 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 14 00:13:52.194465 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:13:52.200434 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 14 00:13:52.200567 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 14 00:13:52.205881 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:13:52.205976 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:13:52.236470 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 14 00:13:52.239585 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 14 00:13:52.239693 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:13:52.242828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:13:52.242930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 14 00:13:52.243779 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 14 00:13:52.244192 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 14 00:13:52.266869 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 14 00:13:52.267091 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 14 00:13:52.272463 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 14 00:13:52.287547 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 14 00:13:52.319638 systemd[1]: Switching root. Mar 14 00:13:52.350392 systemd-journald[252]: Journal stopped Mar 14 00:13:55.109321 systemd-journald[252]: Received SIGTERM from PID 1 (systemd). Mar 14 00:13:55.109472 kernel: SELinux: policy capability network_peer_controls=1 Mar 14 00:13:55.109518 kernel: SELinux: policy capability open_perms=1 Mar 14 00:13:55.109567 kernel: SELinux: policy capability extended_socket_class=1 Mar 14 00:13:55.109599 kernel: SELinux: policy capability always_check_network=0 Mar 14 00:13:55.109635 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 14 00:13:55.109668 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 14 00:13:55.109698 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 14 00:13:55.109730 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 14 00:13:55.109761 kernel: audit: type=1403 audit(1773447233.239:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 14 00:13:55.109800 systemd[1]: Successfully loaded SELinux policy in 61.814ms. Mar 14 00:13:55.109847 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.903ms. 
Mar 14 00:13:55.109881 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:13:55.109912 systemd[1]: Detected virtualization amazon. Mar 14 00:13:55.109946 systemd[1]: Detected architecture arm64. Mar 14 00:13:55.109977 systemd[1]: Detected first boot. Mar 14 00:13:55.110010 systemd[1]: Initializing machine ID from VM UUID. Mar 14 00:13:55.110044 zram_generator::config[1501]: No configuration found. Mar 14 00:13:55.114463 systemd[1]: Populated /etc with preset unit settings. Mar 14 00:13:55.114590 systemd[1]: Queued start job for default target multi-user.target. Mar 14 00:13:55.114625 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Mar 14 00:13:55.114665 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 14 00:13:55.114706 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 14 00:13:55.114739 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 14 00:13:55.114772 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 14 00:13:55.114802 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 14 00:13:55.114835 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 14 00:13:55.114866 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 14 00:13:55.114898 systemd[1]: Created slice user.slice - User and Session Slice. Mar 14 00:13:55.114931 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Mar 14 00:13:55.114962 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:13:55.114997 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 14 00:13:55.115029 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 14 00:13:55.115063 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 14 00:13:55.115120 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:13:55.115155 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 14 00:13:55.115190 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:13:55.115220 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 14 00:13:55.115253 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:13:55.115287 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:13:55.115323 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:13:55.115358 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:13:55.115390 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 14 00:13:55.115424 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 14 00:13:55.115458 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 14 00:13:55.115487 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 14 00:13:55.115517 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:13:55.115548 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:13:55.115585 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 14 00:13:55.115617 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 14 00:13:55.115649 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 14 00:13:55.115678 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 14 00:13:55.115708 systemd[1]: Mounting media.mount - External Media Directory... Mar 14 00:13:55.115741 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 14 00:13:55.115773 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 14 00:13:55.115805 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 14 00:13:55.115835 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 14 00:13:55.115871 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:13:55.115903 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:13:55.115933 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 14 00:13:55.115963 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:13:55.115993 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:13:55.116025 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:13:55.116058 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 14 00:13:55.116109 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:13:55.116148 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 14 00:13:55.116179 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Mar 14 00:13:55.116213 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Mar 14 00:13:55.116243 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 00:13:55.116279 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:13:55.116309 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 14 00:13:55.116354 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 14 00:13:55.116388 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:13:55.116421 kernel: fuse: init (API version 7.39) Mar 14 00:13:55.119180 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 14 00:13:55.119225 kernel: loop: module loaded Mar 14 00:13:55.119344 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 14 00:13:55.119744 systemd[1]: Mounted media.mount - External Media Directory. Mar 14 00:13:55.121667 systemd-journald[1608]: Collecting audit messages is disabled. Mar 14 00:13:55.121738 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 14 00:13:55.121771 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 14 00:13:55.121805 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 14 00:13:55.121840 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 14 00:13:55.121872 systemd-journald[1608]: Journal started Mar 14 00:13:55.121921 systemd-journald[1608]: Runtime Journal (/run/log/journal/ec2d456ddcbfe68982fc5f650a3e8ec5) is 8.0M, max 75.3M, 67.3M free. Mar 14 00:13:55.132149 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:13:55.135477 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:13:55.141254 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Mar 14 00:13:55.141620 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 14 00:13:55.145063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:13:55.145554 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:13:55.153067 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:13:55.153882 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:13:55.158762 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 14 00:13:55.160486 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 14 00:13:55.164035 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:13:55.164429 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:13:55.175174 kernel: ACPI: bus type drm_connector registered Mar 14 00:13:55.175020 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:13:55.179734 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:13:55.181532 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:13:55.185002 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 14 00:13:55.188989 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 14 00:13:55.216189 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 14 00:13:55.225294 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 14 00:13:55.240238 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 14 00:13:55.245310 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 14 00:13:55.258527 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Mar 14 00:13:55.278406 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 14 00:13:55.281265 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:13:55.289413 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 14 00:13:55.292854 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:13:55.303482 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:13:55.323340 systemd-journald[1608]: Time spent on flushing to /var/log/journal/ec2d456ddcbfe68982fc5f650a3e8ec5 is 89.357ms for 884 entries. Mar 14 00:13:55.323340 systemd-journald[1608]: System Journal (/var/log/journal/ec2d456ddcbfe68982fc5f650a3e8ec5) is 8.0M, max 195.6M, 187.6M free. Mar 14 00:13:55.431590 systemd-journald[1608]: Received client request to flush runtime journal. Mar 14 00:13:55.323426 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 00:13:55.337887 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 14 00:13:55.344739 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 14 00:13:55.379905 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 14 00:13:55.383004 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 14 00:13:55.439248 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 14 00:13:55.455823 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:13:55.461578 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Mar 14 00:13:55.476450 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 14 00:13:55.490383 systemd-tmpfiles[1655]: ACLs are not supported, ignoring. Mar 14 00:13:55.490414 systemd-tmpfiles[1655]: ACLs are not supported, ignoring. Mar 14 00:13:55.507173 udevadm[1670]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 14 00:13:55.509740 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:13:55.522523 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 14 00:13:55.593645 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 14 00:13:55.611483 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:13:55.645049 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. Mar 14 00:13:55.645132 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. Mar 14 00:13:55.652903 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:13:56.293914 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 14 00:13:56.312530 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:13:56.362815 systemd-udevd[1683]: Using default interface naming scheme 'v255'. Mar 14 00:13:56.434115 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:13:56.446443 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:13:56.495637 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 14 00:13:56.577402 (udev-worker)[1698]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:13:56.642908 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Mar 14 00:13:56.656831 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Mar 14 00:13:56.816853 systemd-networkd[1684]: lo: Link UP Mar 14 00:13:56.817381 systemd-networkd[1684]: lo: Gained carrier Mar 14 00:13:56.822775 systemd-networkd[1684]: Enumeration completed Mar 14 00:13:56.828471 systemd-networkd[1684]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:13:56.828486 systemd-networkd[1684]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:13:56.833958 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:13:56.838778 systemd-networkd[1684]: eth0: Link UP Mar 14 00:13:56.840647 systemd-networkd[1684]: eth0: Gained carrier Mar 14 00:13:56.840789 systemd-networkd[1684]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:13:56.851473 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 14 00:13:56.860382 systemd-networkd[1684]: eth0: DHCPv4 address 172.31.28.2/20, gateway 172.31.16.1 acquired from 172.31.16.1 Mar 14 00:13:56.886152 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1705) Mar 14 00:13:56.907519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:13:57.094914 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 14 00:13:57.128704 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:13:57.145662 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Mar 14 00:13:57.155416 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 14 00:13:57.186099 lvm[1812]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 14 00:13:57.221849 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 14 00:13:57.229108 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:13:57.242377 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 14 00:13:57.250195 lvm[1815]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:13:57.290888 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 14 00:13:57.297452 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 14 00:13:57.300353 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 14 00:13:57.300396 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:13:57.302907 systemd[1]: Reached target machines.target - Containers. Mar 14 00:13:57.307010 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 14 00:13:57.315397 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 14 00:13:57.325427 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 14 00:13:57.330841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:13:57.332733 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 14 00:13:57.353932 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 14 00:13:57.371989 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 14 00:13:57.380631 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Mar 14 00:13:57.395024 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 14 00:13:57.396844 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 14 00:13:57.421394 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 14 00:13:57.428200 kernel: loop0: detected capacity change from 0 to 209336 Mar 14 00:13:57.537130 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 14 00:13:57.566125 kernel: loop1: detected capacity change from 0 to 52536 Mar 14 00:13:57.690165 kernel: loop2: detected capacity change from 0 to 114328 Mar 14 00:13:57.833172 kernel: loop3: detected capacity change from 0 to 114432 Mar 14 00:13:57.949118 kernel: loop4: detected capacity change from 0 to 209336 Mar 14 00:13:57.992118 kernel: loop5: detected capacity change from 0 to 52536 Mar 14 00:13:58.020120 kernel: loop6: detected capacity change from 0 to 114328 Mar 14 00:13:58.045125 kernel: loop7: detected capacity change from 0 to 114432 Mar 14 00:13:58.068589 (sd-merge)[1837]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Mar 14 00:13:58.070546 (sd-merge)[1837]: Merged extensions into '/usr'. Mar 14 00:13:58.079609 systemd[1]: Reloading requested from client PID 1823 ('systemd-sysext') (unit systemd-sysext.service)... Mar 14 00:13:58.079815 systemd[1]: Reloading... Mar 14 00:13:58.183122 zram_generator::config[1865]: No configuration found. Mar 14 00:13:58.485773 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:13:58.566676 ldconfig[1819]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 14 00:13:58.645046 systemd[1]: Reloading finished in 564 ms. 
Mar 14 00:13:58.673639 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 14 00:13:58.679526 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 14 00:13:58.699431 systemd[1]: Starting ensure-sysext.service... Mar 14 00:13:58.708426 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:13:58.717437 systemd[1]: Reloading requested from client PID 1924 ('systemctl') (unit ensure-sysext.service)... Mar 14 00:13:58.717463 systemd[1]: Reloading... Mar 14 00:13:58.765686 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 14 00:13:58.766459 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 14 00:13:58.768361 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 14 00:13:58.768927 systemd-tmpfiles[1925]: ACLs are not supported, ignoring. Mar 14 00:13:58.769640 systemd-tmpfiles[1925]: ACLs are not supported, ignoring. Mar 14 00:13:58.776740 systemd-tmpfiles[1925]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:13:58.776770 systemd-tmpfiles[1925]: Skipping /boot Mar 14 00:13:58.797607 systemd-tmpfiles[1925]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:13:58.797637 systemd-tmpfiles[1925]: Skipping /boot Mar 14 00:13:58.846268 systemd-networkd[1684]: eth0: Gained IPv6LL Mar 14 00:13:58.892127 zram_generator::config[1953]: No configuration found. Mar 14 00:13:59.127795 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:13:59.281436 systemd[1]: Reloading finished in 563 ms. 
Mar 14 00:13:59.311973 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 14 00:13:59.327902 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:13:59.350617 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:13:59.361561 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 14 00:13:59.369587 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 14 00:13:59.393503 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:13:59.409683 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 14 00:13:59.422544 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:13:59.431593 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:13:59.442810 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:13:59.449810 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:13:59.454700 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:13:59.472198 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:13:59.472623 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:13:59.493838 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 14 00:13:59.513968 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:13:59.515478 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:13:59.536008 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Mar 14 00:13:59.540853 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:13:59.543815 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:13:59.562990 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:13:59.565632 augenrules[2046]: No rules Mar 14 00:13:59.582765 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:13:59.591895 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:13:59.601035 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:13:59.610544 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:13:59.610668 systemd[1]: Reached target time-set.target - System Time Set. Mar 14 00:13:59.629359 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 14 00:13:59.636386 systemd[1]: Finished ensure-sysext.service. Mar 14 00:13:59.642286 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:13:59.649187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:13:59.649550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:13:59.654000 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:13:59.654392 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:13:59.677159 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:13:59.686863 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:13:59.693469 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Mar 14 00:13:59.698740 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:13:59.722912 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 14 00:13:59.730209 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 14 00:13:59.736038 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:13:59.764260 systemd-resolved[2025]: Positive Trust Anchors: Mar 14 00:13:59.764298 systemd-resolved[2025]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:13:59.764380 systemd-resolved[2025]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:13:59.778671 systemd-resolved[2025]: Defaulting to hostname 'linux'. Mar 14 00:13:59.782125 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 14 00:13:59.784863 systemd[1]: Reached target network.target - Network. Mar 14 00:13:59.786905 systemd[1]: Reached target network-online.target - Network is Online. Mar 14 00:13:59.789365 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:13:59.792100 systemd[1]: Reached target sysinit.target - System Initialization. 
Mar 14 00:13:59.794616 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 14 00:13:59.797432 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 14 00:13:59.800571 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 14 00:13:59.803135 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 14 00:13:59.805953 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 14 00:13:59.808754 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 14 00:13:59.808805 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:13:59.810808 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:13:59.814488 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 14 00:13:59.819669 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 14 00:13:59.823955 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 14 00:13:59.837025 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 14 00:13:59.839543 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:13:59.841758 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:13:59.844185 systemd[1]: System is tainted: cgroupsv1 Mar 14 00:13:59.844260 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:13:59.844328 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:13:59.848259 systemd[1]: Starting containerd.service - containerd container runtime... Mar 14 00:13:59.865040 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Mar 14 00:13:59.875451 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 14 00:13:59.884446 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 14 00:13:59.891669 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 14 00:13:59.895013 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 14 00:13:59.905945 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:13:59.920010 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 14 00:13:59.943807 systemd[1]: Started ntpd.service - Network Time Service. Mar 14 00:13:59.956429 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:13:59.967204 jq[2080]: false Mar 14 00:13:59.973306 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 14 00:13:59.997221 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 14 00:14:00.013245 dbus-daemon[2079]: [system] SELinux support is enabled Mar 14 00:14:00.019263 dbus-daemon[2079]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1684 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 14 00:14:00.019686 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 14 00:14:00.045329 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 14 00:14:00.078560 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 14 00:14:00.085624 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 14 00:14:00.103385 systemd[1]: Starting update-engine.service - Update Engine... 
Mar 14 00:14:00.114460 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 14 00:14:00.121814 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 14 00:14:00.164812 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 14 00:14:00.165359 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 14 00:14:00.179926 systemd[1]: motdgen.service: Deactivated successfully. Mar 14 00:14:00.180761 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 14 00:14:00.204642 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 14 00:14:00.223116 extend-filesystems[2081]: Found loop4 Mar 14 00:14:00.223116 extend-filesystems[2081]: Found loop5 Mar 14 00:14:00.223116 extend-filesystems[2081]: Found loop6 Mar 14 00:14:00.223116 extend-filesystems[2081]: Found loop7 Mar 14 00:14:00.223116 extend-filesystems[2081]: Found nvme0n1 Mar 14 00:14:00.223116 extend-filesystems[2081]: Found nvme0n1p1 Mar 14 00:14:00.223116 extend-filesystems[2081]: Found nvme0n1p2 Mar 14 00:14:00.223116 extend-filesystems[2081]: Found nvme0n1p3 Mar 14 00:14:00.223116 extend-filesystems[2081]: Found usr Mar 14 00:14:00.223116 extend-filesystems[2081]: Found nvme0n1p4 Mar 14 00:14:00.223116 extend-filesystems[2081]: Found nvme0n1p6 Mar 14 00:14:00.223116 extend-filesystems[2081]: Found nvme0n1p7 Mar 14 00:14:00.223116 extend-filesystems[2081]: Found nvme0n1p9 Mar 14 00:14:00.223116 extend-filesystems[2081]: Checking size of /dev/nvme0n1p9 Mar 14 00:14:00.306566 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:57:55 UTC 2026 (1): Starting Mar 14 00:14:00.306566 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 14 00:14:00.306566 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: ---------------------------------------------------- Mar 14 00:14:00.306566 ntpd[2086]: 14 Mar 00:14:00 
ntpd[2086]: ntp-4 is maintained by Network Time Foundation, Mar 14 00:14:00.306566 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 14 00:14:00.306566 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: corporation. Support and training for ntp-4 are Mar 14 00:14:00.306566 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: available at https://www.nwtime.org/support Mar 14 00:14:00.306566 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: ---------------------------------------------------- Mar 14 00:14:00.306566 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: proto: precision = 0.096 usec (-23) Mar 14 00:14:00.258882 ntpd[2086]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:57:55 UTC 2026 (1): Starting Mar 14 00:14:00.323050 jq[2111]: true Mar 14 00:14:00.240948 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 14 00:14:00.323394 update_engine[2107]: I20260314 00:14:00.315265 2107 main.cc:92] Flatcar Update Engine starting Mar 14 00:14:00.323811 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: basedate set to 2026-03-01 Mar 14 00:14:00.323811 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: gps base set to 2026-03-01 (week 2408) Mar 14 00:14:00.258933 ntpd[2086]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 14 00:14:00.242777 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 14 00:14:00.258972 ntpd[2086]: ---------------------------------------------------- Mar 14 00:14:00.322847 (ntainerd)[2127]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:14:00.341809 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: Listen and drop on 0 v6wildcard [::]:123 Mar 14 00:14:00.341809 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 14 00:14:00.341809 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: Listen normally on 2 lo 127.0.0.1:123 Mar 14 00:14:00.341809 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: Listen normally on 3 eth0 172.31.28.2:123 Mar 14 00:14:00.341809 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: Listen normally on 4 lo [::1]:123 Mar 14 00:14:00.341809 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: Listen normally on 5 eth0 [fe80::484:bff:feca:214b%2]:123 Mar 14 00:14:00.341809 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: Listening on routing socket on fd #22 for interface updates Mar 14 00:14:00.258993 ntpd[2086]: ntp-4 is maintained by Network Time Foundation, Mar 14 00:14:00.259013 ntpd[2086]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 14 00:14:00.259034 ntpd[2086]: corporation. Support and training for ntp-4 are Mar 14 00:14:00.259054 ntpd[2086]: available at https://www.nwtime.org/support Mar 14 00:14:00.350154 systemd[1]: Started update-engine.service - Update Engine. Mar 14 00:14:00.259096 ntpd[2086]: ---------------------------------------------------- Mar 14 00:14:00.300250 ntpd[2086]: proto: precision = 0.096 usec (-23) Mar 14 00:14:00.309562 ntpd[2086]: basedate set to 2026-03-01 Mar 14 00:14:00.361795 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Mar 14 00:14:00.309596 ntpd[2086]: gps base set to 2026-03-01 (week 2408) Mar 14 00:14:00.361842 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 14 00:14:00.330358 ntpd[2086]: Listen and drop on 0 v6wildcard [::]:123 Mar 14 00:14:00.330447 ntpd[2086]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 14 00:14:00.330723 ntpd[2086]: Listen normally on 2 lo 127.0.0.1:123 Mar 14 00:14:00.330790 ntpd[2086]: Listen normally on 3 eth0 172.31.28.2:123 Mar 14 00:14:00.330860 ntpd[2086]: Listen normally on 4 lo [::1]:123 Mar 14 00:14:00.330933 ntpd[2086]: Listen normally on 5 eth0 [fe80::484:bff:feca:214b%2]:123 Mar 14 00:14:00.331031 ntpd[2086]: Listening on routing socket on fd #22 for interface updates Mar 14 00:14:00.348696 dbus-daemon[2079]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 14 00:14:00.380210 update_engine[2107]: I20260314 00:14:00.374661 2107 update_check_scheduler.cc:74] Next update check in 4m30s Mar 14 00:14:00.396589 jq[2126]: true Mar 14 00:14:00.398818 extend-filesystems[2081]: Resized partition /dev/nvme0n1p9 Mar 14 00:14:00.400988 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 14 00:14:00.407318 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 14 00:14:00.416403 extend-filesystems[2147]: resize2fs 1.47.1 (20-May-2024) Mar 14 00:14:00.459344 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Mar 14 00:14:00.407384 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 14 00:14:00.459564 tar[2121]: linux-arm64/LICENSE Mar 14 00:14:00.459564 tar[2121]: linux-arm64/helm Mar 14 00:14:00.417359 ntpd[2086]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:14:00.460146 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:14:00.460146 ntpd[2086]: 14 Mar 00:14:00 ntpd[2086]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:14:00.440938 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 14 00:14:00.417412 ntpd[2086]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:14:00.443388 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 14 00:14:00.507229 coreos-metadata[2077]: Mar 14 00:14:00.504 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 14 00:14:00.522240 coreos-metadata[2077]: Mar 14 00:14:00.521 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 14 00:14:00.535562 coreos-metadata[2077]: Mar 14 00:14:00.535 INFO Fetch successful Mar 14 00:14:00.535562 coreos-metadata[2077]: Mar 14 00:14:00.535 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 14 00:14:00.539156 coreos-metadata[2077]: Mar 14 00:14:00.537 INFO Fetch successful Mar 14 00:14:00.539156 coreos-metadata[2077]: Mar 14 00:14:00.537 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 14 00:14:00.539156 coreos-metadata[2077]: Mar 14 00:14:00.538 INFO Fetch successful Mar 14 00:14:00.539156 coreos-metadata[2077]: Mar 14 00:14:00.538 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 14 00:14:00.552412 coreos-metadata[2077]: Mar 14 00:14:00.540 INFO Fetch successful Mar 14 00:14:00.552412 coreos-metadata[2077]: Mar 14 00:14:00.540 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 14 
00:14:00.552412 coreos-metadata[2077]: Mar 14 00:14:00.549 INFO Fetch failed with 404: resource not found Mar 14 00:14:00.552412 coreos-metadata[2077]: Mar 14 00:14:00.552 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 14 00:14:00.553015 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 14 00:14:00.562921 coreos-metadata[2077]: Mar 14 00:14:00.562 INFO Fetch successful Mar 14 00:14:00.562921 coreos-metadata[2077]: Mar 14 00:14:00.562 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 14 00:14:00.566210 coreos-metadata[2077]: Mar 14 00:14:00.565 INFO Fetch successful Mar 14 00:14:00.566210 coreos-metadata[2077]: Mar 14 00:14:00.566 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 14 00:14:00.569553 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 14 00:14:00.578403 coreos-metadata[2077]: Mar 14 00:14:00.573 INFO Fetch successful Mar 14 00:14:00.578403 coreos-metadata[2077]: Mar 14 00:14:00.573 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 14 00:14:00.578403 coreos-metadata[2077]: Mar 14 00:14:00.578 INFO Fetch successful Mar 14 00:14:00.578403 coreos-metadata[2077]: Mar 14 00:14:00.578 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 14 00:14:00.593651 coreos-metadata[2077]: Mar 14 00:14:00.589 INFO Fetch successful Mar 14 00:14:00.738911 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 14 00:14:00.743607 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Mar 14 00:14:00.746602 systemd-logind[2106]: Watching system buttons on /dev/input/event0 (Power Button) Mar 14 00:14:00.746709 systemd-logind[2106]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 14 00:14:00.748421 systemd-logind[2106]: New seat seat0. Mar 14 00:14:00.764183 systemd[1]: Started systemd-logind.service - User Login Management. Mar 14 00:14:00.777121 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 14 00:14:00.798865 extend-filesystems[2147]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 14 00:14:00.798865 extend-filesystems[2147]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 14 00:14:00.798865 extend-filesystems[2147]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Mar 14 00:14:00.813523 extend-filesystems[2081]: Resized filesystem in /dev/nvme0n1p9 Mar 14 00:14:00.824662 bash[2180]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:14:00.816000 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 14 00:14:00.820676 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 14 00:14:00.835724 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:14:00.893022 amazon-ssm-agent[2158]: Initializing new seelog logger Mar 14 00:14:00.920842 amazon-ssm-agent[2158]: New Seelog Logger Creation Complete Mar 14 00:14:00.920842 amazon-ssm-agent[2158]: 2026/03/14 00:14:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:14:00.920842 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:14:00.920842 amazon-ssm-agent[2158]: 2026/03/14 00:14:00 processing appconfig overrides Mar 14 00:14:00.920842 amazon-ssm-agent[2158]: 2026/03/14 00:14:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:14:00.920842 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 14 00:14:00.920842 amazon-ssm-agent[2158]: 2026/03/14 00:14:00 processing appconfig overrides Mar 14 00:14:00.920842 amazon-ssm-agent[2158]: 2026/03/14 00:14:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:14:00.920842 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:14:00.920842 amazon-ssm-agent[2158]: 2026/03/14 00:14:00 processing appconfig overrides Mar 14 00:14:00.920842 amazon-ssm-agent[2158]: 2026-03-14 00:14:00 INFO Proxy environment variables: Mar 14 00:14:00.920842 amazon-ssm-agent[2158]: 2026/03/14 00:14:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:14:00.920842 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:14:00.920842 amazon-ssm-agent[2158]: 2026/03/14 00:14:00 processing appconfig overrides Mar 14 00:14:00.914460 systemd[1]: Starting sshkeys.service... Mar 14 00:14:00.988722 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 14 00:14:01.003238 amazon-ssm-agent[2158]: 2026-03-14 00:14:00 INFO https_proxy: Mar 14 00:14:01.007434 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 14 00:14:01.088113 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2197) Mar 14 00:14:01.104286 amazon-ssm-agent[2158]: 2026-03-14 00:14:00 INFO http_proxy: Mar 14 00:14:01.156172 locksmithd[2149]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:14:01.204169 dbus-daemon[2079]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 14 00:14:01.204457 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Mar 14 00:14:01.219316 amazon-ssm-agent[2158]: 2026-03-14 00:14:00 INFO no_proxy: Mar 14 00:14:01.222517 dbus-daemon[2079]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2144 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 14 00:14:01.240832 systemd[1]: Starting polkit.service - Authorization Manager... Mar 14 00:14:01.265008 polkitd[2256]: Started polkitd version 121 Mar 14 00:14:01.298938 polkitd[2256]: Loading rules from directory /etc/polkit-1/rules.d Mar 14 00:14:01.301388 polkitd[2256]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 14 00:14:01.309154 polkitd[2256]: Finished loading, compiling and executing 2 rules Mar 14 00:14:01.316704 amazon-ssm-agent[2158]: 2026-03-14 00:14:00 INFO Checking if agent identity type OnPrem can be assumed Mar 14 00:14:01.313581 systemd[1]: Started polkit.service - Authorization Manager. Mar 14 00:14:01.313286 dbus-daemon[2079]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 14 00:14:01.318780 polkitd[2256]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 14 00:14:01.393325 systemd-resolved[2025]: System hostname changed to 'ip-172-31-28-2'. 
Mar 14 00:14:01.393326 systemd-hostnamed[2144]: Hostname set to (transient) Mar 14 00:14:01.414148 amazon-ssm-agent[2158]: 2026-03-14 00:14:00 INFO Checking if agent identity type EC2 can be assumed Mar 14 00:14:01.458805 coreos-metadata[2212]: Mar 14 00:14:01.455 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 14 00:14:01.460489 coreos-metadata[2212]: Mar 14 00:14:01.459 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 14 00:14:01.460559 coreos-metadata[2212]: Mar 14 00:14:01.460 INFO Fetch successful Mar 14 00:14:01.460610 coreos-metadata[2212]: Mar 14 00:14:01.460 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 14 00:14:01.462231 coreos-metadata[2212]: Mar 14 00:14:01.462 INFO Fetch successful Mar 14 00:14:01.466777 containerd[2127]: time="2026-03-14T00:14:01.466627575Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:14:01.469482 unknown[2212]: wrote ssh authorized keys file for user: core Mar 14 00:14:01.517318 amazon-ssm-agent[2158]: 2026-03-14 00:14:01 INFO Agent will take identity from EC2 Mar 14 00:14:01.573349 update-ssh-keys[2305]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:14:01.578554 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 14 00:14:01.595274 systemd[1]: Finished sshkeys.service. Mar 14 00:14:01.634099 amazon-ssm-agent[2158]: 2026-03-14 00:14:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:14:01.723112 amazon-ssm-agent[2158]: 2026-03-14 00:14:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:14:01.770115 containerd[2127]: time="2026-03-14T00:14:01.768342065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Mar 14 00:14:01.775365 containerd[2127]: time="2026-03-14T00:14:01.775284593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:14:01.775365 containerd[2127]: time="2026-03-14T00:14:01.775356761Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:14:01.775522 containerd[2127]: time="2026-03-14T00:14:01.775394357Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 14 00:14:01.775745 containerd[2127]: time="2026-03-14T00:14:01.775701881Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 14 00:14:01.775804 containerd[2127]: time="2026-03-14T00:14:01.775750469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:14:01.775918 containerd[2127]: time="2026-03-14T00:14:01.775872989Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:14:01.775975 containerd[2127]: time="2026-03-14T00:14:01.775913633Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:14:01.776379 containerd[2127]: time="2026-03-14T00:14:01.776331605Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:14:01.776440 containerd[2127]: time="2026-03-14T00:14:01.776376629Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 14 00:14:01.776440 containerd[2127]: time="2026-03-14T00:14:01.776411933Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:14:01.776523 containerd[2127]: time="2026-03-14T00:14:01.776437049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:14:01.776644 containerd[2127]: time="2026-03-14T00:14:01.776603249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:14:01.777087 containerd[2127]: time="2026-03-14T00:14:01.777026897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:14:01.783667 containerd[2127]: time="2026-03-14T00:14:01.782980685Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:14:01.783667 containerd[2127]: time="2026-03-14T00:14:01.783042869Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 14 00:14:01.783667 containerd[2127]: time="2026-03-14T00:14:01.783401249Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 14 00:14:01.783667 containerd[2127]: time="2026-03-14T00:14:01.783534881Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:14:01.796396 containerd[2127]: time="2026-03-14T00:14:01.796325441Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:14:01.796542 containerd[2127]: time="2026-03-14T00:14:01.796436825Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 14 00:14:01.796542 containerd[2127]: time="2026-03-14T00:14:01.796477577Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:14:01.796663 containerd[2127]: time="2026-03-14T00:14:01.796540541Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:14:01.796663 containerd[2127]: time="2026-03-14T00:14:01.796575365Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 14 00:14:01.796877 containerd[2127]: time="2026-03-14T00:14:01.796831937Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:14:01.797468 containerd[2127]: time="2026-03-14T00:14:01.797399669Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:14:01.797843 containerd[2127]: time="2026-03-14T00:14:01.797652665Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:14:01.797843 containerd[2127]: time="2026-03-14T00:14:01.797688593Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 14 00:14:01.797843 containerd[2127]: time="2026-03-14T00:14:01.797718521Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Mar 14 00:14:01.797843 containerd[2127]: time="2026-03-14T00:14:01.797762897Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:14:01.797843 containerd[2127]: time="2026-03-14T00:14:01.797794025Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 14 00:14:01.797843 containerd[2127]: time="2026-03-14T00:14:01.797825273Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:14:01.799280 containerd[2127]: time="2026-03-14T00:14:01.797859401Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:14:01.799280 containerd[2127]: time="2026-03-14T00:14:01.797892101Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 14 00:14:01.799280 containerd[2127]: time="2026-03-14T00:14:01.797922065Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:14:01.799280 containerd[2127]: time="2026-03-14T00:14:01.797951153Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:14:01.799280 containerd[2127]: time="2026-03-14T00:14:01.797980217Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:14:01.799280 containerd[2127]: time="2026-03-14T00:14:01.798023333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 14 00:14:01.799280 containerd[2127]: time="2026-03-14T00:14:01.798054041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Mar 14 00:14:01.804219 containerd[2127]: time="2026-03-14T00:14:01.802155809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:14:01.804219 containerd[2127]: time="2026-03-14T00:14:01.802225757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 14 00:14:01.804219 containerd[2127]: time="2026-03-14T00:14:01.802278185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:14:01.804219 containerd[2127]: time="2026-03-14T00:14:01.802311965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:14:01.804219 containerd[2127]: time="2026-03-14T00:14:01.802349717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:14:01.804219 containerd[2127]: time="2026-03-14T00:14:01.802381145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 14 00:14:01.804219 containerd[2127]: time="2026-03-14T00:14:01.802414277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:14:01.804219 containerd[2127]: time="2026-03-14T00:14:01.802500761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:14:01.804219 containerd[2127]: time="2026-03-14T00:14:01.802538741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:14:01.804219 containerd[2127]: time="2026-03-14T00:14:01.802571813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 14 00:14:01.804219 containerd[2127]: time="2026-03-14T00:14:01.802602305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 14 00:14:01.804219 containerd[2127]: time="2026-03-14T00:14:01.802651805Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:14:01.804219 containerd[2127]: time="2026-03-14T00:14:01.802713509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 14 00:14:01.804219 containerd[2127]: time="2026-03-14T00:14:01.802742909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:14:01.804219 containerd[2127]: time="2026-03-14T00:14:01.802772609Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:14:01.804941 containerd[2127]: time="2026-03-14T00:14:01.803011877Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:14:01.804941 containerd[2127]: time="2026-03-14T00:14:01.803051081Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:14:01.804941 containerd[2127]: time="2026-03-14T00:14:01.803104253Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 14 00:14:01.804941 containerd[2127]: time="2026-03-14T00:14:01.803137661Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:14:01.804941 containerd[2127]: time="2026-03-14T00:14:01.803164565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 14 00:14:01.804941 containerd[2127]: time="2026-03-14T00:14:01.803194013Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Mar 14 00:14:01.804941 containerd[2127]: time="2026-03-14T00:14:01.803218169Z" level=info msg="NRI interface is disabled by configuration." Mar 14 00:14:01.804941 containerd[2127]: time="2026-03-14T00:14:01.803249921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 14 00:14:01.805333 containerd[2127]: time="2026-03-14T00:14:01.803889917Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 14 00:14:01.805333 containerd[2127]: time="2026-03-14T00:14:01.803998781Z" level=info msg="Connect containerd service" Mar 14 00:14:01.812106 containerd[2127]: time="2026-03-14T00:14:01.805647953Z" level=info msg="using legacy CRI server" Mar 14 00:14:01.812106 containerd[2127]: time="2026-03-14T00:14:01.806463425Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 14 00:14:01.812106 containerd[2127]: time="2026-03-14T00:14:01.806634089Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 14 00:14:01.814117 containerd[2127]: time="2026-03-14T00:14:01.812744957Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:14:01.814117 containerd[2127]: time="2026-03-14T00:14:01.813126713Z" level=info msg="Start subscribing containerd event" Mar 14 
00:14:01.814117 containerd[2127]: time="2026-03-14T00:14:01.813212837Z" level=info msg="Start recovering state" Mar 14 00:14:01.814117 containerd[2127]: time="2026-03-14T00:14:01.813344489Z" level=info msg="Start event monitor" Mar 14 00:14:01.814117 containerd[2127]: time="2026-03-14T00:14:01.813369773Z" level=info msg="Start snapshots syncer" Mar 14 00:14:01.814117 containerd[2127]: time="2026-03-14T00:14:01.813390317Z" level=info msg="Start cni network conf syncer for default" Mar 14 00:14:01.814117 containerd[2127]: time="2026-03-14T00:14:01.813408557Z" level=info msg="Start streaming server" Mar 14 00:14:01.818302 containerd[2127]: time="2026-03-14T00:14:01.818253833Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 14 00:14:01.822979 containerd[2127]: time="2026-03-14T00:14:01.820257857Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 14 00:14:01.822738 systemd[1]: Started containerd.service - containerd container runtime. Mar 14 00:14:01.825378 amazon-ssm-agent[2158]: 2026-03-14 00:14:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:14:01.834923 containerd[2127]: time="2026-03-14T00:14:01.833179085Z" level=info msg="containerd successfully booted in 0.373677s" Mar 14 00:14:01.922092 amazon-ssm-agent[2158]: 2026-03-14 00:14:01 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 14 00:14:02.020927 amazon-ssm-agent[2158]: 2026-03-14 00:14:01 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 14 00:14:02.126118 amazon-ssm-agent[2158]: 2026-03-14 00:14:01 INFO [amazon-ssm-agent] Starting Core Agent Mar 14 00:14:02.146631 amazon-ssm-agent[2158]: 2026-03-14 00:14:01 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Mar 14 00:14:02.146631 amazon-ssm-agent[2158]: 2026-03-14 00:14:01 INFO [Registrar] Starting registrar module Mar 14 00:14:02.146631 amazon-ssm-agent[2158]: 2026-03-14 00:14:01 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 14 00:14:02.146843 amazon-ssm-agent[2158]: 2026-03-14 00:14:02 INFO [EC2Identity] EC2 registration was successful. Mar 14 00:14:02.146843 amazon-ssm-agent[2158]: 2026-03-14 00:14:02 INFO [CredentialRefresher] credentialRefresher has started Mar 14 00:14:02.146843 amazon-ssm-agent[2158]: 2026-03-14 00:14:02 INFO [CredentialRefresher] Starting credentials refresher loop Mar 14 00:14:02.146843 amazon-ssm-agent[2158]: 2026-03-14 00:14:02 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 14 00:14:02.225211 amazon-ssm-agent[2158]: 2026-03-14 00:14:02 INFO [CredentialRefresher] Next credential rotation will be in 31.8749915306 minutes Mar 14 00:14:02.426774 sshd_keygen[2114]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:14:02.446534 tar[2121]: linux-arm64/README.md Mar 14 00:14:02.475743 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 14 00:14:02.506387 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:14:02.520576 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 14 00:14:02.546739 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:14:02.547701 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:14:02.559639 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:14:02.593185 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:14:02.606691 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:14:02.622949 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. 
Mar 14 00:14:02.629498 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:14:03.176946 amazon-ssm-agent[2158]: 2026-03-14 00:14:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 14 00:14:03.277414 amazon-ssm-agent[2158]: 2026-03-14 00:14:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2357) started Mar 14 00:14:03.377925 amazon-ssm-agent[2158]: 2026-03-14 00:14:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 14 00:14:03.601408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:03.607146 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:14:03.612466 systemd[1]: Startup finished in 25.596s (kernel) + 10.435s (userspace) = 36.031s. Mar 14 00:14:03.620931 (kubelet)[2375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:14:04.926047 kubelet[2375]: E0314 00:14:04.925956 2375 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:14:04.930817 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:14:04.932216 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:14:07.666327 systemd-resolved[2025]: Clock change detected. Flushing caches. Mar 14 00:14:08.916518 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 14 00:14:08.922776 systemd[1]: Started sshd@0-172.31.28.2:22-68.220.241.50:53888.service - OpenSSH per-connection server daemon (68.220.241.50:53888). 
Mar 14 00:14:09.485307 sshd[2387]: Accepted publickey for core from 68.220.241.50 port 53888 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:14:09.489504 sshd[2387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:09.508778 systemd-logind[2106]: New session 1 of user core. Mar 14 00:14:09.510982 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:14:09.517728 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 14 00:14:09.552508 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 14 00:14:09.572809 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 14 00:14:09.579225 (systemd)[2393]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:14:09.809787 systemd[2393]: Queued start job for default target default.target. Mar 14 00:14:09.810516 systemd[2393]: Created slice app.slice - User Application Slice. Mar 14 00:14:09.810572 systemd[2393]: Reached target paths.target - Paths. Mar 14 00:14:09.810604 systemd[2393]: Reached target timers.target - Timers. Mar 14 00:14:09.822414 systemd[2393]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:14:09.836434 systemd[2393]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:14:09.836562 systemd[2393]: Reached target sockets.target - Sockets. Mar 14 00:14:09.836595 systemd[2393]: Reached target basic.target - Basic System. Mar 14 00:14:09.836697 systemd[2393]: Reached target default.target - Main User Target. Mar 14 00:14:09.836760 systemd[2393]: Startup finished in 246ms. Mar 14 00:14:09.837537 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:14:09.844927 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 14 00:14:10.230733 systemd[1]: Started sshd@1-172.31.28.2:22-68.220.241.50:53900.service - OpenSSH per-connection server daemon (68.220.241.50:53900). Mar 14 00:14:10.723553 sshd[2405]: Accepted publickey for core from 68.220.241.50 port 53900 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:14:10.725690 sshd[2405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:10.734458 systemd-logind[2106]: New session 2 of user core. Mar 14 00:14:10.741060 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 14 00:14:11.076584 sshd[2405]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:11.084582 systemd-logind[2106]: Session 2 logged out. Waiting for processes to exit. Mar 14 00:14:11.085830 systemd[1]: sshd@1-172.31.28.2:22-68.220.241.50:53900.service: Deactivated successfully. Mar 14 00:14:11.091812 systemd[1]: session-2.scope: Deactivated successfully. Mar 14 00:14:11.093486 systemd-logind[2106]: Removed session 2. Mar 14 00:14:11.165718 systemd[1]: Started sshd@2-172.31.28.2:22-68.220.241.50:53912.service - OpenSSH per-connection server daemon (68.220.241.50:53912). Mar 14 00:14:11.658309 sshd[2413]: Accepted publickey for core from 68.220.241.50 port 53912 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:14:11.660247 sshd[2413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:11.667520 systemd-logind[2106]: New session 3 of user core. Mar 14 00:14:11.680729 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 14 00:14:12.004609 sshd[2413]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:12.009959 systemd-logind[2106]: Session 3 logged out. Waiting for processes to exit. Mar 14 00:14:12.012119 systemd[1]: sshd@2-172.31.28.2:22-68.220.241.50:53912.service: Deactivated successfully. Mar 14 00:14:12.017224 systemd[1]: session-3.scope: Deactivated successfully. 
Mar 14 00:14:12.019369 systemd-logind[2106]: Removed session 3. Mar 14 00:14:12.092688 systemd[1]: Started sshd@3-172.31.28.2:22-68.220.241.50:53926.service - OpenSSH per-connection server daemon (68.220.241.50:53926). Mar 14 00:14:12.584307 sshd[2421]: Accepted publickey for core from 68.220.241.50 port 53926 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:14:12.586565 sshd[2421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:12.594362 systemd-logind[2106]: New session 4 of user core. Mar 14 00:14:12.602920 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 14 00:14:12.940431 sshd[2421]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:12.945245 systemd-logind[2106]: Session 4 logged out. Waiting for processes to exit. Mar 14 00:14:12.947160 systemd[1]: sshd@3-172.31.28.2:22-68.220.241.50:53926.service: Deactivated successfully. Mar 14 00:14:12.952828 systemd[1]: session-4.scope: Deactivated successfully. Mar 14 00:14:12.955071 systemd-logind[2106]: Removed session 4. Mar 14 00:14:13.030706 systemd[1]: Started sshd@4-172.31.28.2:22-68.220.241.50:47742.service - OpenSSH per-connection server daemon (68.220.241.50:47742). Mar 14 00:14:13.523327 sshd[2429]: Accepted publickey for core from 68.220.241.50 port 47742 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:14:13.525219 sshd[2429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:13.534639 systemd-logind[2106]: New session 5 of user core. Mar 14 00:14:13.538064 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 14 00:14:13.829971 sudo[2433]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 14 00:14:13.831418 sudo[2433]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:14:13.851757 sudo[2433]: pam_unix(sudo:session): session closed for user root Mar 14 00:14:13.931491 sshd[2429]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:13.938177 systemd[1]: sshd@4-172.31.28.2:22-68.220.241.50:47742.service: Deactivated successfully. Mar 14 00:14:13.944771 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:14:13.946454 systemd-logind[2106]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:14:13.948752 systemd-logind[2106]: Removed session 5. Mar 14 00:14:14.031705 systemd[1]: Started sshd@5-172.31.28.2:22-68.220.241.50:47758.service - OpenSSH per-connection server daemon (68.220.241.50:47758). Mar 14 00:14:14.562319 sshd[2438]: Accepted publickey for core from 68.220.241.50 port 47758 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:14:14.564738 sshd[2438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:14.572018 systemd-logind[2106]: New session 6 of user core. Mar 14 00:14:14.582889 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 14 00:14:14.859495 sudo[2443]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 14 00:14:14.860204 sudo[2443]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:14:14.866678 sudo[2443]: pam_unix(sudo:session): session closed for user root Mar 14 00:14:14.876505 sudo[2442]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 14 00:14:14.877147 sudo[2442]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:14:14.902772 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 14 00:14:14.907148 auditctl[2446]: No rules Mar 14 00:14:14.908197 systemd[1]: audit-rules.service: Deactivated successfully. Mar 14 00:14:14.908736 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 14 00:14:14.920994 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:14:14.965522 augenrules[2465]: No rules Mar 14 00:14:14.969223 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:14:14.971627 sudo[2442]: pam_unix(sudo:session): session closed for user root Mar 14 00:14:15.058569 sshd[2438]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:15.064113 systemd[1]: sshd@5-172.31.28.2:22-68.220.241.50:47758.service: Deactivated successfully. Mar 14 00:14:15.071259 systemd[1]: session-6.scope: Deactivated successfully. Mar 14 00:14:15.072861 systemd-logind[2106]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:14:15.075149 systemd-logind[2106]: Removed session 6. Mar 14 00:14:15.139706 systemd[1]: Started sshd@6-172.31.28.2:22-68.220.241.50:47760.service - OpenSSH per-connection server daemon (68.220.241.50:47760). Mar 14 00:14:15.553294 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 14 00:14:15.560875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:15.646332 sshd[2474]: Accepted publickey for core from 68.220.241.50 port 47760 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:14:15.649492 sshd[2474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:15.660121 systemd-logind[2106]: New session 7 of user core. Mar 14 00:14:15.666985 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 14 00:14:15.908645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:15.914215 (kubelet)[2490]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:14:15.931419 sudo[2491]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 14 00:14:15.932769 sudo[2491]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:14:16.001172 kubelet[2490]: E0314 00:14:16.001113 2490 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:14:16.009233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:14:16.010804 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:14:16.453103 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 14 00:14:16.464953 (dockerd)[2513]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 14 00:14:16.885187 dockerd[2513]: time="2026-03-14T00:14:16.884997125Z" level=info msg="Starting up" Mar 14 00:14:17.129085 dockerd[2513]: time="2026-03-14T00:14:17.128558391Z" level=info msg="Loading containers: start." Mar 14 00:14:17.305309 kernel: Initializing XFRM netlink socket Mar 14 00:14:17.342849 (udev-worker)[2535]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:14:17.434618 systemd-networkd[1684]: docker0: Link UP Mar 14 00:14:17.463620 dockerd[2513]: time="2026-03-14T00:14:17.463559128Z" level=info msg="Loading containers: done." Mar 14 00:14:17.490720 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1290181079-merged.mount: Deactivated successfully. Mar 14 00:14:17.495937 dockerd[2513]: time="2026-03-14T00:14:17.495806680Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 14 00:14:17.496786 dockerd[2513]: time="2026-03-14T00:14:17.496146112Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 14 00:14:17.496786 dockerd[2513]: time="2026-03-14T00:14:17.496416100Z" level=info msg="Daemon has completed initialization" Mar 14 00:14:17.576881 dockerd[2513]: time="2026-03-14T00:14:17.576600821Z" level=info msg="API listen on /run/docker.sock" Mar 14 00:14:17.577143 systemd[1]: Started docker.service - Docker Application Container Engine. 
Mar 14 00:14:18.701221 containerd[2127]: time="2026-03-14T00:14:18.701144298Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 14 00:14:19.373259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2139040974.mount: Deactivated successfully. Mar 14 00:14:20.887330 containerd[2127]: time="2026-03-14T00:14:20.886249629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:20.888613 containerd[2127]: time="2026-03-14T00:14:20.888542757Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174" Mar 14 00:14:20.892292 containerd[2127]: time="2026-03-14T00:14:20.890498049Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:20.897078 containerd[2127]: time="2026-03-14T00:14:20.897011217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:20.899536 containerd[2127]: time="2026-03-14T00:14:20.899483577Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 2.198270123s" Mar 14 00:14:20.899706 containerd[2127]: time="2026-03-14T00:14:20.899675613Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\"" Mar 14 00:14:20.901604 containerd[2127]: 
time="2026-03-14T00:14:20.901538793Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 14 00:14:22.361738 containerd[2127]: time="2026-03-14T00:14:22.361661084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:22.363858 containerd[2127]: time="2026-03-14T00:14:22.363788997Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106" Mar 14 00:14:22.365143 containerd[2127]: time="2026-03-14T00:14:22.365060781Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:22.371315 containerd[2127]: time="2026-03-14T00:14:22.370792137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:22.373633 containerd[2127]: time="2026-03-14T00:14:22.373188753Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.471585928s" Mar 14 00:14:22.373633 containerd[2127]: time="2026-03-14T00:14:22.373245069Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\"" Mar 14 00:14:22.374328 containerd[2127]: time="2026-03-14T00:14:22.374039853Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 14 
00:14:23.577198 containerd[2127]: time="2026-03-14T00:14:23.577112363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:23.579329 containerd[2127]: time="2026-03-14T00:14:23.579248507Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305" Mar 14 00:14:23.581055 containerd[2127]: time="2026-03-14T00:14:23.580649651Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:23.586823 containerd[2127]: time="2026-03-14T00:14:23.586759967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:23.589482 containerd[2127]: time="2026-03-14T00:14:23.589416791Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.215324654s" Mar 14 00:14:23.589574 containerd[2127]: time="2026-03-14T00:14:23.589478051Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\"" Mar 14 00:14:23.591001 containerd[2127]: time="2026-03-14T00:14:23.590730047Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 14 00:14:24.872840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount818716756.mount: Deactivated successfully. 
Mar 14 00:14:25.514823 containerd[2127]: time="2026-03-14T00:14:25.514747428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:25.517021 containerd[2127]: time="2026-03-14T00:14:25.516960024Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870" Mar 14 00:14:25.519522 containerd[2127]: time="2026-03-14T00:14:25.519471648Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:25.525563 containerd[2127]: time="2026-03-14T00:14:25.525479580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:25.526948 containerd[2127]: time="2026-03-14T00:14:25.526710732Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 1.935923577s" Mar 14 00:14:25.526948 containerd[2127]: time="2026-03-14T00:14:25.526771548Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\"" Mar 14 00:14:25.527864 containerd[2127]: time="2026-03-14T00:14:25.527606172Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 14 00:14:26.047389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 14 00:14:26.054647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 14 00:14:26.102963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2541027712.mount: Deactivated successfully. Mar 14 00:14:26.506547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:26.527004 (kubelet)[2752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:14:26.657198 kubelet[2752]: E0314 00:14:26.656205 2752 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:14:26.667665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:14:26.668248 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:14:27.467332 containerd[2127]: time="2026-03-14T00:14:27.466616246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:27.469949 containerd[2127]: time="2026-03-14T00:14:27.469879070Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Mar 14 00:14:27.473055 containerd[2127]: time="2026-03-14T00:14:27.472983446Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:27.481479 containerd[2127]: time="2026-03-14T00:14:27.481400486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:27.483220 containerd[2127]: 
time="2026-03-14T00:14:27.482972438Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.95531343s" Mar 14 00:14:27.483220 containerd[2127]: time="2026-03-14T00:14:27.483032846Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Mar 14 00:14:27.484031 containerd[2127]: time="2026-03-14T00:14:27.483773282Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 14 00:14:28.006652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1100722403.mount: Deactivated successfully. Mar 14 00:14:28.020333 containerd[2127]: time="2026-03-14T00:14:28.019677397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:28.022774 containerd[2127]: time="2026-03-14T00:14:28.022708681Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Mar 14 00:14:28.025387 containerd[2127]: time="2026-03-14T00:14:28.025320253Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:28.031424 containerd[2127]: time="2026-03-14T00:14:28.031350901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:28.034694 containerd[2127]: time="2026-03-14T00:14:28.034626709Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 550.796379ms" Mar 14 00:14:28.034694 containerd[2127]: time="2026-03-14T00:14:28.034684921Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 14 00:14:28.035454 containerd[2127]: time="2026-03-14T00:14:28.035346193Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 14 00:14:28.679825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount608563197.mount: Deactivated successfully. Mar 14 00:14:30.715337 containerd[2127]: time="2026-03-14T00:14:30.714642726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:30.717936 containerd[2127]: time="2026-03-14T00:14:30.717866418Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780" Mar 14 00:14:30.720995 containerd[2127]: time="2026-03-14T00:14:30.720922662Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:30.726522 containerd[2127]: time="2026-03-14T00:14:30.726445230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:30.728918 containerd[2127]: time="2026-03-14T00:14:30.728868246Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest 
\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.693465881s" Mar 14 00:14:30.729175 containerd[2127]: time="2026-03-14T00:14:30.729037146Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Mar 14 00:14:31.811965 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 14 00:14:36.750420 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 14 00:14:36.759173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:37.103581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:37.118972 (kubelet)[2905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:14:37.192290 kubelet[2905]: E0314 00:14:37.188569 2905 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:14:37.193254 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:14:37.193717 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:14:37.908019 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:37.921163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:37.983072 systemd[1]: Reloading requested from client PID 2921 ('systemctl') (unit session-7.scope)... Mar 14 00:14:37.983301 systemd[1]: Reloading... Mar 14 00:14:38.191299 zram_generator::config[2962]: No configuration found. 
Mar 14 00:14:38.463161 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:14:38.632916 systemd[1]: Reloading finished in 648 ms. Mar 14 00:14:38.711919 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 14 00:14:38.712303 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 14 00:14:38.712898 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:38.722961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:39.049606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:39.066928 (kubelet)[3034]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:14:39.136303 kubelet[3034]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:14:39.136800 kubelet[3034]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:14:39.136904 kubelet[3034]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 14 00:14:39.137143 kubelet[3034]: I0314 00:14:39.137094 3034 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:14:41.478364 kubelet[3034]: I0314 00:14:41.477902 3034 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 14 00:14:41.478364 kubelet[3034]: I0314 00:14:41.477947 3034 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:14:41.478364 kubelet[3034]: I0314 00:14:41.478358 3034 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:14:41.528366 kubelet[3034]: I0314 00:14:41.528313 3034 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:14:41.531238 kubelet[3034]: E0314 00:14:41.530878 3034 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.2:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.2:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:14:41.543905 kubelet[3034]: E0314 00:14:41.543859 3034 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:14:41.544719 kubelet[3034]: I0314 00:14:41.544156 3034 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 14 00:14:41.550855 kubelet[3034]: I0314 00:14:41.550819 3034 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 14 00:14:41.553438 kubelet[3034]: I0314 00:14:41.553393 3034 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:14:41.554439 kubelet[3034]: I0314 00:14:41.553575 3034 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 14 00:14:41.554439 kubelet[3034]: I0314 00:14:41.553829 3034 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 
00:14:41.554439 kubelet[3034]: I0314 00:14:41.553862 3034 container_manager_linux.go:303] "Creating device plugin manager" Mar 14 00:14:41.554439 kubelet[3034]: I0314 00:14:41.554199 3034 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:14:41.562669 kubelet[3034]: I0314 00:14:41.562635 3034 kubelet.go:480] "Attempting to sync node with API server" Mar 14 00:14:41.562871 kubelet[3034]: I0314 00:14:41.562851 3034 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:14:41.563104 kubelet[3034]: I0314 00:14:41.562996 3034 kubelet.go:386] "Adding apiserver pod source" Mar 14 00:14:41.570139 kubelet[3034]: I0314 00:14:41.568483 3034 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:14:41.572002 kubelet[3034]: E0314 00:14:41.571936 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-2&limit=500&resourceVersion=0\": dial tcp 172.31.28.2:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:14:41.572876 kubelet[3034]: E0314 00:14:41.572818 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.2:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:14:41.573514 kubelet[3034]: I0314 00:14:41.573444 3034 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:14:41.579987 kubelet[3034]: I0314 00:14:41.579927 3034 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:14:41.580249 
kubelet[3034]: W0314 00:14:41.580216 3034 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 14 00:14:41.592241 kubelet[3034]: I0314 00:14:41.592203 3034 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 14 00:14:41.592880 kubelet[3034]: I0314 00:14:41.592857 3034 server.go:1289] "Started kubelet" Mar 14 00:14:41.593477 kubelet[3034]: I0314 00:14:41.593386 3034 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:14:41.597180 kubelet[3034]: I0314 00:14:41.596134 3034 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:14:41.600680 kubelet[3034]: I0314 00:14:41.600581 3034 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:14:41.601421 kubelet[3034]: I0314 00:14:41.601388 3034 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:14:41.604054 kubelet[3034]: E0314 00:14:41.601722 3034 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.2:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.2:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-2.189c8ce80f3437dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-2,UID:ip-172-31-28-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-2,},FirstTimestamp:2026-03-14 00:14:41.592514524 +0000 UTC m=+2.517902329,LastTimestamp:2026-03-14 00:14:41.592514524 +0000 UTC m=+2.517902329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-2,}" Mar 14 00:14:41.607757 kubelet[3034]: I0314 00:14:41.607702 3034 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Mar 14 00:14:41.611391 kubelet[3034]: I0314 00:14:41.611340 3034 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:14:41.618169 kubelet[3034]: I0314 00:14:41.618096 3034 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 14 00:14:41.619122 kubelet[3034]: E0314 00:14:41.619062 3034 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:14:41.619731 kubelet[3034]: E0314 00:14:41.619683 3034 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-2\" not found" Mar 14 00:14:41.620426 kubelet[3034]: I0314 00:14:41.620358 3034 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 14 00:14:41.620563 kubelet[3034]: I0314 00:14:41.620513 3034 reconciler.go:26] "Reconciler: start to sync state" Mar 14 00:14:41.623490 kubelet[3034]: E0314 00:14:41.623431 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.2:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:14:41.624181 kubelet[3034]: I0314 00:14:41.623894 3034 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:14:41.624181 kubelet[3034]: I0314 00:14:41.624051 3034 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:14:41.624181 kubelet[3034]: E0314 00:14:41.624053 3034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.28.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-2?timeout=10s\": dial tcp 172.31.28.2:6443: connect: connection refused" interval="200ms" Mar 14 00:14:41.628615 kubelet[3034]: I0314 00:14:41.628562 3034 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:14:41.670342 kubelet[3034]: I0314 00:14:41.669385 3034 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 14 00:14:41.672480 kubelet[3034]: I0314 00:14:41.672424 3034 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 14 00:14:41.672480 kubelet[3034]: I0314 00:14:41.672482 3034 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 14 00:14:41.672679 kubelet[3034]: I0314 00:14:41.672516 3034 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 00:14:41.672679 kubelet[3034]: I0314 00:14:41.672530 3034 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 00:14:41.672679 kubelet[3034]: E0314 00:14:41.672594 3034 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:14:41.676256 kubelet[3034]: E0314 00:14:41.676204 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.2:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:14:41.689899 kubelet[3034]: I0314 00:14:41.689748 3034 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:14:41.690095 kubelet[3034]: I0314 00:14:41.690074 3034 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:14:41.690235 kubelet[3034]: I0314 00:14:41.690216 3034 
state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:14:41.695371 kubelet[3034]: I0314 00:14:41.695343 3034 policy_none.go:49] "None policy: Start" Mar 14 00:14:41.695539 kubelet[3034]: I0314 00:14:41.695521 3034 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 00:14:41.695638 kubelet[3034]: I0314 00:14:41.695621 3034 state_mem.go:35] "Initializing new in-memory state store" Mar 14 00:14:41.708732 kubelet[3034]: E0314 00:14:41.708681 3034 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:14:41.710599 kubelet[3034]: I0314 00:14:41.710542 3034 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:14:41.710919 kubelet[3034]: I0314 00:14:41.710573 3034 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:14:41.712857 kubelet[3034]: I0314 00:14:41.712798 3034 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:14:41.716590 kubelet[3034]: E0314 00:14:41.716519 3034 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 14 00:14:41.716902 kubelet[3034]: E0314 00:14:41.716607 3034 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-2\" not found" Mar 14 00:14:41.788331 kubelet[3034]: E0314 00:14:41.787502 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-2\" not found" node="ip-172-31-28-2" Mar 14 00:14:41.794652 kubelet[3034]: E0314 00:14:41.794613 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-2\" not found" node="ip-172-31-28-2" Mar 14 00:14:41.802738 kubelet[3034]: E0314 00:14:41.802689 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-2\" not found" node="ip-172-31-28-2" Mar 14 00:14:41.813245 kubelet[3034]: I0314 00:14:41.813211 3034 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-2" Mar 14 00:14:41.814054 kubelet[3034]: E0314 00:14:41.814015 3034 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.2:6443/api/v1/nodes\": dial tcp 172.31.28.2:6443: connect: connection refused" node="ip-172-31-28-2" Mar 14 00:14:41.825874 kubelet[3034]: E0314 00:14:41.825811 3034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-2?timeout=10s\": dial tcp 172.31.28.2:6443: connect: connection refused" interval="400ms" Mar 14 00:14:41.922386 kubelet[3034]: I0314 00:14:41.922317 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/873536b8f8d791ae8850835f6e903fa8-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-2\" (UID: 
\"873536b8f8d791ae8850835f6e903fa8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-2" Mar 14 00:14:41.922386 kubelet[3034]: I0314 00:14:41.922380 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/873536b8f8d791ae8850835f6e903fa8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-2\" (UID: \"873536b8f8d791ae8850835f6e903fa8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-2" Mar 14 00:14:41.922588 kubelet[3034]: I0314 00:14:41.922421 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/873536b8f8d791ae8850835f6e903fa8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-2\" (UID: \"873536b8f8d791ae8850835f6e903fa8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-2" Mar 14 00:14:41.922588 kubelet[3034]: I0314 00:14:41.922465 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/873536b8f8d791ae8850835f6e903fa8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-2\" (UID: \"873536b8f8d791ae8850835f6e903fa8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-2" Mar 14 00:14:41.922588 kubelet[3034]: I0314 00:14:41.922504 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39ea6064440441b3a90a50705348103b-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-2\" (UID: \"39ea6064440441b3a90a50705348103b\") " pod="kube-system/kube-scheduler-ip-172-31-28-2" Mar 14 00:14:41.922588 kubelet[3034]: I0314 00:14:41.922539 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/f56f9235d3f85c850c958c89dc244455-ca-certs\") pod \"kube-apiserver-ip-172-31-28-2\" (UID: \"f56f9235d3f85c850c958c89dc244455\") " pod="kube-system/kube-apiserver-ip-172-31-28-2" Mar 14 00:14:41.922588 kubelet[3034]: I0314 00:14:41.922573 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/873536b8f8d791ae8850835f6e903fa8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-2\" (UID: \"873536b8f8d791ae8850835f6e903fa8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-2" Mar 14 00:14:41.922835 kubelet[3034]: I0314 00:14:41.922608 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f56f9235d3f85c850c958c89dc244455-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-2\" (UID: \"f56f9235d3f85c850c958c89dc244455\") " pod="kube-system/kube-apiserver-ip-172-31-28-2" Mar 14 00:14:41.922835 kubelet[3034]: I0314 00:14:41.922644 3034 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f56f9235d3f85c850c958c89dc244455-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-2\" (UID: \"f56f9235d3f85c850c958c89dc244455\") " pod="kube-system/kube-apiserver-ip-172-31-28-2" Mar 14 00:14:42.016047 kubelet[3034]: I0314 00:14:42.015988 3034 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-2" Mar 14 00:14:42.016546 kubelet[3034]: E0314 00:14:42.016492 3034 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.2:6443/api/v1/nodes\": dial tcp 172.31.28.2:6443: connect: connection refused" node="ip-172-31-28-2" Mar 14 00:14:42.089753 containerd[2127]: time="2026-03-14T00:14:42.089587130Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-2,Uid:f56f9235d3f85c850c958c89dc244455,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:42.097689 containerd[2127]: time="2026-03-14T00:14:42.097239423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-2,Uid:873536b8f8d791ae8850835f6e903fa8,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:42.105238 containerd[2127]: time="2026-03-14T00:14:42.104865555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-2,Uid:39ea6064440441b3a90a50705348103b,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:42.227472 kubelet[3034]: E0314 00:14:42.227401 3034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-2?timeout=10s\": dial tcp 172.31.28.2:6443: connect: connection refused" interval="800ms" Mar 14 00:14:42.419213 kubelet[3034]: I0314 00:14:42.419068 3034 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-2" Mar 14 00:14:42.420346 kubelet[3034]: E0314 00:14:42.420293 3034 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.2:6443/api/v1/nodes\": dial tcp 172.31.28.2:6443: connect: connection refused" node="ip-172-31-28-2" Mar 14 00:14:42.621476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3179287023.mount: Deactivated successfully. 
Mar 14 00:14:42.634313 containerd[2127]: time="2026-03-14T00:14:42.633556097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:42.636007 kubelet[3034]: E0314 00:14:42.635927 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-2&limit=500&resourceVersion=0\": dial tcp 172.31.28.2:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:14:42.640497 containerd[2127]: time="2026-03-14T00:14:42.640408325Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 14 00:14:42.642328 containerd[2127]: time="2026-03-14T00:14:42.642110153Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:42.645327 containerd[2127]: time="2026-03-14T00:14:42.645236705Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:14:42.647972 containerd[2127]: time="2026-03-14T00:14:42.647899853Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:42.651867 containerd[2127]: time="2026-03-14T00:14:42.651796253Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:14:42.652215 containerd[2127]: time="2026-03-14T00:14:42.652158089Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:42.659885 containerd[2127]: time="2026-03-14T00:14:42.659805569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:42.667494 containerd[2127]: time="2026-03-14T00:14:42.667419773Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 562.448114ms" Mar 14 00:14:42.670583 containerd[2127]: time="2026-03-14T00:14:42.670432781Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 580.109391ms" Mar 14 00:14:42.673182 containerd[2127]: time="2026-03-14T00:14:42.673108841Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 575.724206ms" Mar 14 00:14:42.885017 containerd[2127]: time="2026-03-14T00:14:42.882545778Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:42.885017 containerd[2127]: time="2026-03-14T00:14:42.882630246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:42.885017 containerd[2127]: time="2026-03-14T00:14:42.882655674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:42.885017 containerd[2127]: time="2026-03-14T00:14:42.882821166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:42.891921 containerd[2127]: time="2026-03-14T00:14:42.891469314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:42.891921 containerd[2127]: time="2026-03-14T00:14:42.891580854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:42.891921 containerd[2127]: time="2026-03-14T00:14:42.891618462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:42.891921 containerd[2127]: time="2026-03-14T00:14:42.891772206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:42.902301 containerd[2127]: time="2026-03-14T00:14:42.900885559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:42.902992 containerd[2127]: time="2026-03-14T00:14:42.900984547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:42.903191 containerd[2127]: time="2026-03-14T00:14:42.903134971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:42.905126 containerd[2127]: time="2026-03-14T00:14:42.904541791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:43.007665 kubelet[3034]: E0314 00:14:43.007608 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.2:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:14:43.029588 kubelet[3034]: E0314 00:14:43.029536 3034 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-2?timeout=10s\": dial tcp 172.31.28.2:6443: connect: connection refused" interval="1.6s" Mar 14 00:14:43.054790 containerd[2127]: time="2026-03-14T00:14:43.054730587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-2,Uid:873536b8f8d791ae8850835f6e903fa8,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b51c87a689472786622327fd77741f5e4d5c32880262bebb7e1e995ce8fd375\"" Mar 14 00:14:43.058131 containerd[2127]: time="2026-03-14T00:14:43.058062699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-2,Uid:39ea6064440441b3a90a50705348103b,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d4db641fa7b19693aae0e7ff18122d87831240effb7d0458e171f56d051bd03\"" Mar 14 00:14:43.068946 containerd[2127]: time="2026-03-14T00:14:43.068886183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-2,Uid:f56f9235d3f85c850c958c89dc244455,Namespace:kube-system,Attempt:0,} returns sandbox id \"8196192e7ee7a634d9debb3cef99dad48ded630557c8256d403ac8e2dc15af6b\"" Mar 14 
00:14:43.069745 kubelet[3034]: E0314 00:14:43.069699 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.2:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:14:43.073956 containerd[2127]: time="2026-03-14T00:14:43.073738635Z" level=info msg="CreateContainer within sandbox \"8b51c87a689472786622327fd77741f5e4d5c32880262bebb7e1e995ce8fd375\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:14:43.078103 containerd[2127]: time="2026-03-14T00:14:43.078051903Z" level=info msg="CreateContainer within sandbox \"4d4db641fa7b19693aae0e7ff18122d87831240effb7d0458e171f56d051bd03\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:14:43.085317 containerd[2127]: time="2026-03-14T00:14:43.085237671Z" level=info msg="CreateContainer within sandbox \"8196192e7ee7a634d9debb3cef99dad48ded630557c8256d403ac8e2dc15af6b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:14:43.116169 containerd[2127]: time="2026-03-14T00:14:43.115620844Z" level=info msg="CreateContainer within sandbox \"8b51c87a689472786622327fd77741f5e4d5c32880262bebb7e1e995ce8fd375\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6d948275a4fd7826842e06370b588cf5d5536693f3ba0cd6c49a9df02e26475e\"" Mar 14 00:14:43.117185 containerd[2127]: time="2026-03-14T00:14:43.117140188Z" level=info msg="StartContainer for \"6d948275a4fd7826842e06370b588cf5d5536693f3ba0cd6c49a9df02e26475e\"" Mar 14 00:14:43.126979 containerd[2127]: time="2026-03-14T00:14:43.126898780Z" level=info msg="CreateContainer within sandbox \"4d4db641fa7b19693aae0e7ff18122d87831240effb7d0458e171f56d051bd03\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"67af58498d76279e59560b38006b923b94e91bc9d093a1549ddd593608e376d2\"" Mar 14 00:14:43.133229 containerd[2127]: time="2026-03-14T00:14:43.133149868Z" level=info msg="StartContainer for \"67af58498d76279e59560b38006b923b94e91bc9d093a1549ddd593608e376d2\"" Mar 14 00:14:43.141763 containerd[2127]: time="2026-03-14T00:14:43.141593068Z" level=info msg="CreateContainer within sandbox \"8196192e7ee7a634d9debb3cef99dad48ded630557c8256d403ac8e2dc15af6b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"af1b6dbaf79f94302c1be1b4c11ff30199ace7c965e49771eba539b968a1ab47\"" Mar 14 00:14:43.142545 containerd[2127]: time="2026-03-14T00:14:43.142375960Z" level=info msg="StartContainer for \"af1b6dbaf79f94302c1be1b4c11ff30199ace7c965e49771eba539b968a1ab47\"" Mar 14 00:14:43.223934 kubelet[3034]: I0314 00:14:43.223650 3034 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-2" Mar 14 00:14:43.226018 kubelet[3034]: E0314 00:14:43.225948 3034 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.2:6443/api/v1/nodes\": dial tcp 172.31.28.2:6443: connect: connection refused" node="ip-172-31-28-2" Mar 14 00:14:43.275910 kubelet[3034]: E0314 00:14:43.274942 3034 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.2:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:14:43.301106 containerd[2127]: time="2026-03-14T00:14:43.300363748Z" level=info msg="StartContainer for \"6d948275a4fd7826842e06370b588cf5d5536693f3ba0cd6c49a9df02e26475e\" returns successfully" Mar 14 00:14:43.360158 containerd[2127]: time="2026-03-14T00:14:43.359594921Z" level=info msg="StartContainer for \"af1b6dbaf79f94302c1be1b4c11ff30199ace7c965e49771eba539b968a1ab47\" returns successfully" Mar 14 
00:14:43.393613 containerd[2127]: time="2026-03-14T00:14:43.393540497Z" level=info msg="StartContainer for \"67af58498d76279e59560b38006b923b94e91bc9d093a1549ddd593608e376d2\" returns successfully" Mar 14 00:14:43.699520 kubelet[3034]: E0314 00:14:43.697110 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-2\" not found" node="ip-172-31-28-2" Mar 14 00:14:43.705942 kubelet[3034]: E0314 00:14:43.705786 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-2\" not found" node="ip-172-31-28-2" Mar 14 00:14:43.716558 kubelet[3034]: E0314 00:14:43.716468 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-2\" not found" node="ip-172-31-28-2" Mar 14 00:14:44.718375 kubelet[3034]: E0314 00:14:44.717955 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-2\" not found" node="ip-172-31-28-2" Mar 14 00:14:44.721647 kubelet[3034]: E0314 00:14:44.721302 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-2\" not found" node="ip-172-31-28-2" Mar 14 00:14:44.834332 kubelet[3034]: I0314 00:14:44.831030 3034 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-2" Mar 14 00:14:45.721322 kubelet[3034]: E0314 00:14:45.719936 3034 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-2\" not found" node="ip-172-31-28-2" Mar 14 00:14:46.001392 update_engine[2107]: I20260314 00:14:46.001308 2107 update_attempter.cc:509] Updating boot flags... 
Mar 14 00:14:46.210408 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3330) Mar 14 00:14:46.869309 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3331) Mar 14 00:14:47.555397 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3331) Mar 14 00:14:47.580029 kubelet[3034]: I0314 00:14:47.579361 3034 apiserver.go:52] "Watching apiserver" Mar 14 00:14:47.723380 kubelet[3034]: I0314 00:14:47.720696 3034 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:14:47.849314 kubelet[3034]: E0314 00:14:47.848435 3034 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-2\" not found" node="ip-172-31-28-2" Mar 14 00:14:48.069772 kubelet[3034]: I0314 00:14:48.069712 3034 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-2" Mar 14 00:14:48.120788 kubelet[3034]: I0314 00:14:48.120639 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-2" Mar 14 00:14:48.192318 kubelet[3034]: I0314 00:14:48.191297 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-2" Mar 14 00:14:48.207070 kubelet[3034]: I0314 00:14:48.207021 3034 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-2" Mar 14 00:14:50.060499 systemd[1]: Reloading requested from client PID 3586 ('systemctl') (unit session-7.scope)... Mar 14 00:14:50.060971 systemd[1]: Reloading... Mar 14 00:14:50.225320 zram_generator::config[3632]: No configuration found. 
Mar 14 00:14:50.466175 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:14:50.656516 systemd[1]: Reloading finished in 594 ms. Mar 14 00:14:50.724438 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:50.742706 systemd[1]: kubelet.service: Deactivated successfully. Mar 14 00:14:50.744384 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:50.757016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:51.110649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:51.130024 (kubelet)[3696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:14:51.239019 kubelet[3696]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:14:51.239019 kubelet[3696]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:14:51.239019 kubelet[3696]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 14 00:14:51.239658 kubelet[3696]: I0314 00:14:51.239195 3696 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:14:51.257084 kubelet[3696]: I0314 00:14:51.257012 3696 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 14 00:14:51.257084 kubelet[3696]: I0314 00:14:51.257063 3696 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:14:51.257582 kubelet[3696]: I0314 00:14:51.257518 3696 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:14:51.259980 kubelet[3696]: I0314 00:14:51.259929 3696 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 14 00:14:51.266959 kubelet[3696]: I0314 00:14:51.266221 3696 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:14:51.273546 kubelet[3696]: E0314 00:14:51.273497 3696 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:14:51.273846 kubelet[3696]: I0314 00:14:51.273825 3696 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 14 00:14:51.281334 kubelet[3696]: I0314 00:14:51.281245 3696 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 14 00:14:51.282663 kubelet[3696]: I0314 00:14:51.282612 3696 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:14:51.284169 kubelet[3696]: I0314 00:14:51.282793 3696 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 14 00:14:51.284169 kubelet[3696]: I0314 00:14:51.283799 3696 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 
00:14:51.284169 kubelet[3696]: I0314 00:14:51.283822 3696 container_manager_linux.go:303] "Creating device plugin manager" Mar 14 00:14:51.284169 kubelet[3696]: I0314 00:14:51.283916 3696 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:14:51.284652 kubelet[3696]: I0314 00:14:51.284618 3696 kubelet.go:480] "Attempting to sync node with API server" Mar 14 00:14:51.285431 kubelet[3696]: I0314 00:14:51.285364 3696 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:14:51.285757 kubelet[3696]: I0314 00:14:51.285614 3696 kubelet.go:386] "Adding apiserver pod source" Mar 14 00:14:51.285757 kubelet[3696]: I0314 00:14:51.285681 3696 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:14:51.291851 kubelet[3696]: I0314 00:14:51.291777 3696 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:14:51.301906 kubelet[3696]: I0314 00:14:51.301843 3696 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:14:51.313725 kubelet[3696]: I0314 00:14:51.313253 3696 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 14 00:14:51.313725 kubelet[3696]: I0314 00:14:51.313338 3696 server.go:1289] "Started kubelet" Mar 14 00:14:51.318633 kubelet[3696]: I0314 00:14:51.318599 3696 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:14:51.325178 kubelet[3696]: I0314 00:14:51.322726 3696 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:14:51.326371 kubelet[3696]: I0314 00:14:51.325979 3696 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:14:51.334083 kubelet[3696]: I0314 00:14:51.333219 3696 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:14:51.334462 kubelet[3696]: I0314 00:14:51.334417 3696 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:14:51.334863 kubelet[3696]: I0314 00:14:51.334802 3696 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:14:51.343347 kubelet[3696]: I0314 00:14:51.343308 3696 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 14 00:14:51.351677 kubelet[3696]: I0314 00:14:51.351421 3696 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 14 00:14:51.353235 kubelet[3696]: I0314 00:14:51.353181 3696 reconciler.go:26] "Reconciler: start to sync state" Mar 14 00:14:51.362358 kubelet[3696]: I0314 00:14:51.361140 3696 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 14 00:14:51.366803 kubelet[3696]: I0314 00:14:51.366764 3696 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:14:51.367203 kubelet[3696]: I0314 00:14:51.367158 3696 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:14:51.368466 kubelet[3696]: I0314 00:14:51.368417 3696 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:14:51.372016 kubelet[3696]: I0314 00:14:51.371413 3696 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 14 00:14:51.372016 kubelet[3696]: I0314 00:14:51.371476 3696 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 14 00:14:51.372016 kubelet[3696]: I0314 00:14:51.371513 3696 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 14 00:14:51.372016 kubelet[3696]: I0314 00:14:51.371529 3696 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 00:14:51.372016 kubelet[3696]: E0314 00:14:51.371603 3696 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:14:51.472402 kubelet[3696]: E0314 00:14:51.472338 3696 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 14 00:14:51.487909 kubelet[3696]: I0314 00:14:51.487877 3696 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:14:51.489280 kubelet[3696]: I0314 00:14:51.488054 3696 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:14:51.489280 kubelet[3696]: I0314 00:14:51.488096 3696 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:14:51.489280 kubelet[3696]: I0314 00:14:51.488357 3696 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 14 00:14:51.489280 kubelet[3696]: I0314 00:14:51.488379 3696 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 14 00:14:51.489280 kubelet[3696]: I0314 00:14:51.488412 3696 policy_none.go:49] "None policy: Start" Mar 14 00:14:51.489280 kubelet[3696]: I0314 00:14:51.488432 3696 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 00:14:51.489280 kubelet[3696]: I0314 00:14:51.488453 3696 state_mem.go:35] "Initializing new in-memory state store" Mar 14 00:14:51.489280 kubelet[3696]: I0314 00:14:51.488608 3696 state_mem.go:75] "Updated machine memory state" Mar 14 00:14:51.493534 kubelet[3696]: E0314 00:14:51.493061 3696 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:14:51.493534 kubelet[3696]: I0314 00:14:51.493388 3696 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:14:51.493534 kubelet[3696]: I0314 00:14:51.493408 3696 
container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:14:51.499809 kubelet[3696]: I0314 00:14:51.496756 3696 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:14:51.499809 kubelet[3696]: E0314 00:14:51.498679 3696 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:14:51.611478 kubelet[3696]: I0314 00:14:51.611405 3696 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-2" Mar 14 00:14:51.623537 kubelet[3696]: I0314 00:14:51.622768 3696 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-2" Mar 14 00:14:51.623537 kubelet[3696]: I0314 00:14:51.622899 3696 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-2" Mar 14 00:14:51.673441 kubelet[3696]: I0314 00:14:51.673377 3696 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-2" Mar 14 00:14:51.677352 kubelet[3696]: I0314 00:14:51.674559 3696 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-2" Mar 14 00:14:51.677352 kubelet[3696]: I0314 00:14:51.676035 3696 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-2" Mar 14 00:14:51.685125 kubelet[3696]: E0314 00:14:51.685052 3696 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-2\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-2" Mar 14 00:14:51.686682 kubelet[3696]: E0314 00:14:51.686604 3696 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-2\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-28-2" Mar 14 00:14:51.688115 kubelet[3696]: E0314 00:14:51.688071 3696 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ip-172-31-28-2\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-2" Mar 14 00:14:51.757165 kubelet[3696]: I0314 00:14:51.757022 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39ea6064440441b3a90a50705348103b-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-2\" (UID: \"39ea6064440441b3a90a50705348103b\") " pod="kube-system/kube-scheduler-ip-172-31-28-2" Mar 14 00:14:51.757165 kubelet[3696]: I0314 00:14:51.757107 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/873536b8f8d791ae8850835f6e903fa8-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-2\" (UID: \"873536b8f8d791ae8850835f6e903fa8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-2" Mar 14 00:14:51.757829 kubelet[3696]: I0314 00:14:51.757309 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/873536b8f8d791ae8850835f6e903fa8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-2\" (UID: \"873536b8f8d791ae8850835f6e903fa8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-2" Mar 14 00:14:51.757829 kubelet[3696]: I0314 00:14:51.757355 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/873536b8f8d791ae8850835f6e903fa8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-2\" (UID: \"873536b8f8d791ae8850835f6e903fa8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-2" Mar 14 00:14:51.757829 kubelet[3696]: I0314 00:14:51.757394 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/873536b8f8d791ae8850835f6e903fa8-kubeconfig\") pod 
\"kube-controller-manager-ip-172-31-28-2\" (UID: \"873536b8f8d791ae8850835f6e903fa8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-2" Mar 14 00:14:51.757829 kubelet[3696]: I0314 00:14:51.757429 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f56f9235d3f85c850c958c89dc244455-ca-certs\") pod \"kube-apiserver-ip-172-31-28-2\" (UID: \"f56f9235d3f85c850c958c89dc244455\") " pod="kube-system/kube-apiserver-ip-172-31-28-2" Mar 14 00:14:51.757829 kubelet[3696]: I0314 00:14:51.757464 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f56f9235d3f85c850c958c89dc244455-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-2\" (UID: \"f56f9235d3f85c850c958c89dc244455\") " pod="kube-system/kube-apiserver-ip-172-31-28-2" Mar 14 00:14:51.759545 kubelet[3696]: I0314 00:14:51.757506 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f56f9235d3f85c850c958c89dc244455-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-2\" (UID: \"f56f9235d3f85c850c958c89dc244455\") " pod="kube-system/kube-apiserver-ip-172-31-28-2" Mar 14 00:14:51.759545 kubelet[3696]: I0314 00:14:51.757544 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/873536b8f8d791ae8850835f6e903fa8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-2\" (UID: \"873536b8f8d791ae8850835f6e903fa8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-2" Mar 14 00:14:52.291011 kubelet[3696]: I0314 00:14:52.290665 3696 apiserver.go:52] "Watching apiserver" Mar 14 00:14:52.351063 kubelet[3696]: I0314 00:14:52.349876 3696 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-2" podStartSLOduration=4.349853077 podStartE2EDuration="4.349853077s" podCreationTimestamp="2026-03-14 00:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:52.347734825 +0000 UTC m=+1.209003715" watchObservedRunningTime="2026-03-14 00:14:52.349853077 +0000 UTC m=+1.211121955" Mar 14 00:14:52.352466 kubelet[3696]: I0314 00:14:52.352403 3696 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:14:52.364759 kubelet[3696]: I0314 00:14:52.363983 3696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-2" podStartSLOduration=4.36396479 podStartE2EDuration="4.36396479s" podCreationTimestamp="2026-03-14 00:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:52.363931298 +0000 UTC m=+1.225200200" watchObservedRunningTime="2026-03-14 00:14:52.36396479 +0000 UTC m=+1.225233656" Mar 14 00:14:52.380583 kubelet[3696]: I0314 00:14:52.380312 3696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-2" podStartSLOduration=4.380250638 podStartE2EDuration="4.380250638s" podCreationTimestamp="2026-03-14 00:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:52.377849438 +0000 UTC m=+1.239118328" watchObservedRunningTime="2026-03-14 00:14:52.380250638 +0000 UTC m=+1.241519516" Mar 14 00:14:52.439073 kubelet[3696]: I0314 00:14:52.437599 3696 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-2" Mar 14 00:14:52.440976 kubelet[3696]: I0314 00:14:52.440904 3696 
kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-2"
Mar 14 00:14:52.458137 kubelet[3696]: E0314 00:14:52.457953 3696 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-2\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-2"
Mar 14 00:14:52.461532 kubelet[3696]: E0314 00:14:52.460997 3696 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-2\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-2"
Mar 14 00:14:56.570645 kubelet[3696]: I0314 00:14:56.570196 3696 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 14 00:14:56.572721 containerd[2127]: time="2026-03-14T00:14:56.572651514Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 14 00:14:56.575256 kubelet[3696]: I0314 00:14:56.573072 3696 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 14 00:14:57.093285 kubelet[3696]: I0314 00:14:57.092885 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac8540eb-c0f3-48a3-8350-eb69458b2587-kube-proxy\") pod \"kube-proxy-9z55x\" (UID: \"ac8540eb-c0f3-48a3-8350-eb69458b2587\") " pod="kube-system/kube-proxy-9z55x"
Mar 14 00:14:57.095290 kubelet[3696]: I0314 00:14:57.093710 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac8540eb-c0f3-48a3-8350-eb69458b2587-lib-modules\") pod \"kube-proxy-9z55x\" (UID: \"ac8540eb-c0f3-48a3-8350-eb69458b2587\") " pod="kube-system/kube-proxy-9z55x"
Mar 14 00:14:57.095290 kubelet[3696]: I0314 00:14:57.093756 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5qnl\" (UniqueName: \"kubernetes.io/projected/ac8540eb-c0f3-48a3-8350-eb69458b2587-kube-api-access-c5qnl\") pod \"kube-proxy-9z55x\" (UID: \"ac8540eb-c0f3-48a3-8350-eb69458b2587\") " pod="kube-system/kube-proxy-9z55x"
Mar 14 00:14:57.095290 kubelet[3696]: I0314 00:14:57.093803 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac8540eb-c0f3-48a3-8350-eb69458b2587-xtables-lock\") pod \"kube-proxy-9z55x\" (UID: \"ac8540eb-c0f3-48a3-8350-eb69458b2587\") " pod="kube-system/kube-proxy-9z55x"
Mar 14 00:14:57.349227 containerd[2127]: time="2026-03-14T00:14:57.347940534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9z55x,Uid:ac8540eb-c0f3-48a3-8350-eb69458b2587,Namespace:kube-system,Attempt:0,}"
Mar 14 00:14:57.392977 containerd[2127]: time="2026-03-14T00:14:57.392824518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:14:57.393343 containerd[2127]: time="2026-03-14T00:14:57.393294966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:14:57.393527 containerd[2127]: time="2026-03-14T00:14:57.393484806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:14:57.393957 containerd[2127]: time="2026-03-14T00:14:57.393910375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:14:57.489744 containerd[2127]: time="2026-03-14T00:14:57.489665791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9z55x,Uid:ac8540eb-c0f3-48a3-8350-eb69458b2587,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bf57a9a7762367cc52c8bf56d699db352ad72df82ee25602c09241726433a4c\""
Mar 14 00:14:57.503795 containerd[2127]: time="2026-03-14T00:14:57.503664835Z" level=info msg="CreateContainer within sandbox \"1bf57a9a7762367cc52c8bf56d699db352ad72df82ee25602c09241726433a4c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 14 00:14:57.531625 containerd[2127]: time="2026-03-14T00:14:57.531564763Z" level=info msg="CreateContainer within sandbox \"1bf57a9a7762367cc52c8bf56d699db352ad72df82ee25602c09241726433a4c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e4fdf08ef1107f730baac750710c9e82d22c0532b391140a0b6ba4846eeca852\""
Mar 14 00:14:57.532712 containerd[2127]: time="2026-03-14T00:14:57.532469707Z" level=info msg="StartContainer for \"e4fdf08ef1107f730baac750710c9e82d22c0532b391140a0b6ba4846eeca852\""
Mar 14 00:14:57.709663 containerd[2127]: time="2026-03-14T00:14:57.709392248Z" level=info msg="StartContainer for \"e4fdf08ef1107f730baac750710c9e82d22c0532b391140a0b6ba4846eeca852\" returns successfully"
Mar 14 00:14:57.800629 kubelet[3696]: I0314 00:14:57.800563 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/05d19b67-a101-4ffe-b8ba-a4bd387e1baa-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-4nntk\" (UID: \"05d19b67-a101-4ffe-b8ba-a4bd387e1baa\") " pod="tigera-operator/tigera-operator-6bf85f8dd-4nntk"
Mar 14 00:14:57.800629 kubelet[3696]: I0314 00:14:57.800632 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52s4c\" (UniqueName: \"kubernetes.io/projected/05d19b67-a101-4ffe-b8ba-a4bd387e1baa-kube-api-access-52s4c\") pod \"tigera-operator-6bf85f8dd-4nntk\" (UID: \"05d19b67-a101-4ffe-b8ba-a4bd387e1baa\") " pod="tigera-operator/tigera-operator-6bf85f8dd-4nntk"
Mar 14 00:14:58.112893 containerd[2127]: time="2026-03-14T00:14:58.112818354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-4nntk,Uid:05d19b67-a101-4ffe-b8ba-a4bd387e1baa,Namespace:tigera-operator,Attempt:0,}"
Mar 14 00:14:58.149781 containerd[2127]: time="2026-03-14T00:14:58.149436150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:14:58.149781 containerd[2127]: time="2026-03-14T00:14:58.149542818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:14:58.149781 containerd[2127]: time="2026-03-14T00:14:58.149592306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:14:58.150839 containerd[2127]: time="2026-03-14T00:14:58.150320394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:14:58.226109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2403777750.mount: Deactivated successfully.
Mar 14 00:14:58.300104 containerd[2127]: time="2026-03-14T00:14:58.300055711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-4nntk,Uid:05d19b67-a101-4ffe-b8ba-a4bd387e1baa,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cc4b681c6f7601ca08e20ee883e22698fa63c6065f109928fc583c9e8881d938\""
Mar 14 00:14:58.305741 containerd[2127]: time="2026-03-14T00:14:58.305672239Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 14 00:14:58.911204 kubelet[3696]: I0314 00:14:58.910850 3696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9z55x" podStartSLOduration=1.91082845 podStartE2EDuration="1.91082845s" podCreationTimestamp="2026-03-14 00:14:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:58.479605484 +0000 UTC m=+7.340874470" watchObservedRunningTime="2026-03-14 00:14:58.91082845 +0000 UTC m=+7.772097340"
Mar 14 00:14:59.613722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073785528.mount: Deactivated successfully.
Mar 14 00:15:00.684509 containerd[2127]: time="2026-03-14T00:15:00.684429455Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:00.686222 containerd[2127]: time="2026-03-14T00:15:00.686131979Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565"
Mar 14 00:15:00.688123 containerd[2127]: time="2026-03-14T00:15:00.688055963Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:00.692835 containerd[2127]: time="2026-03-14T00:15:00.692567195Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:00.694356 containerd[2127]: time="2026-03-14T00:15:00.694102559Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.388324972s"
Mar 14 00:15:00.694356 containerd[2127]: time="2026-03-14T00:15:00.694182335Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\""
Mar 14 00:15:00.702700 containerd[2127]: time="2026-03-14T00:15:00.702645347Z" level=info msg="CreateContainer within sandbox \"cc4b681c6f7601ca08e20ee883e22698fa63c6065f109928fc583c9e8881d938\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 14 00:15:00.723296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount382722438.mount: Deactivated successfully.
Mar 14 00:15:00.725087 containerd[2127]: time="2026-03-14T00:15:00.724083839Z" level=info msg="CreateContainer within sandbox \"cc4b681c6f7601ca08e20ee883e22698fa63c6065f109928fc583c9e8881d938\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1f139217fcf041c32d7adb4ab6cf8bcbdbba4b13c5f2070bebed7c258525c2b6\""
Mar 14 00:15:00.727157 containerd[2127]: time="2026-03-14T00:15:00.726861143Z" level=info msg="StartContainer for \"1f139217fcf041c32d7adb4ab6cf8bcbdbba4b13c5f2070bebed7c258525c2b6\""
Mar 14 00:15:00.790866 systemd[1]: run-containerd-runc-k8s.io-1f139217fcf041c32d7adb4ab6cf8bcbdbba4b13c5f2070bebed7c258525c2b6-runc.FJr2ZT.mount: Deactivated successfully.
Mar 14 00:15:00.838764 containerd[2127]: time="2026-03-14T00:15:00.838600200Z" level=info msg="StartContainer for \"1f139217fcf041c32d7adb4ab6cf8bcbdbba4b13c5f2070bebed7c258525c2b6\" returns successfully"
Mar 14 00:15:01.485346 kubelet[3696]: I0314 00:15:01.485239 3696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-4nntk" podStartSLOduration=2.091591219 podStartE2EDuration="4.485159951s" podCreationTimestamp="2026-03-14 00:14:57 +0000 UTC" firstStartedPulling="2026-03-14 00:14:58.302393779 +0000 UTC m=+7.163662657" lastFinishedPulling="2026-03-14 00:15:00.695962511 +0000 UTC m=+9.557231389" observedRunningTime="2026-03-14 00:15:01.484954643 +0000 UTC m=+10.346223545" watchObservedRunningTime="2026-03-14 00:15:01.485159951 +0000 UTC m=+10.346428817"
Mar 14 00:15:07.889772 sudo[2491]: pam_unix(sudo:session): session closed for user root
Mar 14 00:15:07.971617 sshd[2474]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:07.984559 systemd[1]: sshd@6-172.31.28.2:22-68.220.241.50:47760.service: Deactivated successfully.
Mar 14 00:15:07.994126 systemd[1]: session-7.scope: Deactivated successfully.
Mar 14 00:15:07.997394 systemd-logind[2106]: Session 7 logged out. Waiting for processes to exit.
Mar 14 00:15:08.003871 systemd-logind[2106]: Removed session 7.
Mar 14 00:15:18.233617 kubelet[3696]: I0314 00:15:18.233423 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/746b1031-8c8c-441f-bf02-b9cebd0d991a-tigera-ca-bundle\") pod \"calico-typha-ccbf44f6b-swp9j\" (UID: \"746b1031-8c8c-441f-bf02-b9cebd0d991a\") " pod="calico-system/calico-typha-ccbf44f6b-swp9j"
Mar 14 00:15:18.233617 kubelet[3696]: I0314 00:15:18.233490 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/746b1031-8c8c-441f-bf02-b9cebd0d991a-typha-certs\") pod \"calico-typha-ccbf44f6b-swp9j\" (UID: \"746b1031-8c8c-441f-bf02-b9cebd0d991a\") " pod="calico-system/calico-typha-ccbf44f6b-swp9j"
Mar 14 00:15:18.233617 kubelet[3696]: I0314 00:15:18.233533 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp4pr\" (UniqueName: \"kubernetes.io/projected/746b1031-8c8c-441f-bf02-b9cebd0d991a-kube-api-access-tp4pr\") pod \"calico-typha-ccbf44f6b-swp9j\" (UID: \"746b1031-8c8c-441f-bf02-b9cebd0d991a\") " pod="calico-system/calico-typha-ccbf44f6b-swp9j"
Mar 14 00:15:18.337317 kubelet[3696]: I0314 00:15:18.334020 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ccbcb924-3e33-489c-a145-aead03dd6ef2-flexvol-driver-host\") pod \"calico-node-d2r6f\" (UID: \"ccbcb924-3e33-489c-a145-aead03dd6ef2\") " pod="calico-system/calico-node-d2r6f"
Mar 14 00:15:18.337317 kubelet[3696]: I0314 00:15:18.334102 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccbcb924-3e33-489c-a145-aead03dd6ef2-tigera-ca-bundle\") pod \"calico-node-d2r6f\" (UID: \"ccbcb924-3e33-489c-a145-aead03dd6ef2\") " pod="calico-system/calico-node-d2r6f"
Mar 14 00:15:18.337317 kubelet[3696]: I0314 00:15:18.334148 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ccbcb924-3e33-489c-a145-aead03dd6ef2-var-run-calico\") pod \"calico-node-d2r6f\" (UID: \"ccbcb924-3e33-489c-a145-aead03dd6ef2\") " pod="calico-system/calico-node-d2r6f"
Mar 14 00:15:18.337317 kubelet[3696]: I0314 00:15:18.334195 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ccbcb924-3e33-489c-a145-aead03dd6ef2-cni-bin-dir\") pod \"calico-node-d2r6f\" (UID: \"ccbcb924-3e33-489c-a145-aead03dd6ef2\") " pod="calico-system/calico-node-d2r6f"
Mar 14 00:15:18.337317 kubelet[3696]: I0314 00:15:18.334233 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/ccbcb924-3e33-489c-a145-aead03dd6ef2-bpffs\") pod \"calico-node-d2r6f\" (UID: \"ccbcb924-3e33-489c-a145-aead03dd6ef2\") " pod="calico-system/calico-node-d2r6f"
Mar 14 00:15:18.337727 kubelet[3696]: I0314 00:15:18.334289 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ccbcb924-3e33-489c-a145-aead03dd6ef2-cni-log-dir\") pod \"calico-node-d2r6f\" (UID: \"ccbcb924-3e33-489c-a145-aead03dd6ef2\") " pod="calico-system/calico-node-d2r6f"
Mar 14 00:15:18.337727 kubelet[3696]: I0314 00:15:18.334328 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ccbcb924-3e33-489c-a145-aead03dd6ef2-node-certs\") pod \"calico-node-d2r6f\" (UID: \"ccbcb924-3e33-489c-a145-aead03dd6ef2\") " pod="calico-system/calico-node-d2r6f"
Mar 14 00:15:18.337727 kubelet[3696]: I0314 00:15:18.334364 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/ccbcb924-3e33-489c-a145-aead03dd6ef2-nodeproc\") pod \"calico-node-d2r6f\" (UID: \"ccbcb924-3e33-489c-a145-aead03dd6ef2\") " pod="calico-system/calico-node-d2r6f"
Mar 14 00:15:18.337727 kubelet[3696]: I0314 00:15:18.334398 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ccbcb924-3e33-489c-a145-aead03dd6ef2-xtables-lock\") pod \"calico-node-d2r6f\" (UID: \"ccbcb924-3e33-489c-a145-aead03dd6ef2\") " pod="calico-system/calico-node-d2r6f"
Mar 14 00:15:18.337727 kubelet[3696]: I0314 00:15:18.334460 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/ccbcb924-3e33-489c-a145-aead03dd6ef2-sys-fs\") pod \"calico-node-d2r6f\" (UID: \"ccbcb924-3e33-489c-a145-aead03dd6ef2\") " pod="calico-system/calico-node-d2r6f"
Mar 14 00:15:18.338005 kubelet[3696]: I0314 00:15:18.334495 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx425\" (UniqueName: \"kubernetes.io/projected/ccbcb924-3e33-489c-a145-aead03dd6ef2-kube-api-access-dx425\") pod \"calico-node-d2r6f\" (UID: \"ccbcb924-3e33-489c-a145-aead03dd6ef2\") " pod="calico-system/calico-node-d2r6f"
Mar 14 00:15:18.338005 kubelet[3696]: I0314 00:15:18.334532 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ccbcb924-3e33-489c-a145-aead03dd6ef2-lib-modules\") pod \"calico-node-d2r6f\" (UID: \"ccbcb924-3e33-489c-a145-aead03dd6ef2\") " pod="calico-system/calico-node-d2r6f"
Mar 14 00:15:18.338005 kubelet[3696]: I0314 00:15:18.334567 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ccbcb924-3e33-489c-a145-aead03dd6ef2-var-lib-calico\") pod \"calico-node-d2r6f\" (UID: \"ccbcb924-3e33-489c-a145-aead03dd6ef2\") " pod="calico-system/calico-node-d2r6f"
Mar 14 00:15:18.338005 kubelet[3696]: I0314 00:15:18.334605 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ccbcb924-3e33-489c-a145-aead03dd6ef2-cni-net-dir\") pod \"calico-node-d2r6f\" (UID: \"ccbcb924-3e33-489c-a145-aead03dd6ef2\") " pod="calico-system/calico-node-d2r6f"
Mar 14 00:15:18.338005 kubelet[3696]: I0314 00:15:18.334675 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ccbcb924-3e33-489c-a145-aead03dd6ef2-policysync\") pod \"calico-node-d2r6f\" (UID: \"ccbcb924-3e33-489c-a145-aead03dd6ef2\") " pod="calico-system/calico-node-d2r6f"
Mar 14 00:15:18.456148 kubelet[3696]: E0314 00:15:18.455798 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.456148 kubelet[3696]: W0314 00:15:18.455835 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.456148 kubelet[3696]: E0314 00:15:18.455896 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.460491 kubelet[3696]: E0314 00:15:18.457417 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.460491 kubelet[3696]: W0314 00:15:18.460318 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.460491 kubelet[3696]: E0314 00:15:18.460414 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.471871 containerd[2127]: time="2026-03-14T00:15:18.469367271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ccbf44f6b-swp9j,Uid:746b1031-8c8c-441f-bf02-b9cebd0d991a,Namespace:calico-system,Attempt:0,}"
Mar 14 00:15:18.498624 kubelet[3696]: E0314 00:15:18.497853 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.498624 kubelet[3696]: W0314 00:15:18.497891 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.498624 kubelet[3696]: E0314 00:15:18.497924 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.609982 containerd[2127]: time="2026-03-14T00:15:18.609704344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:15:18.609982 containerd[2127]: time="2026-03-14T00:15:18.609810808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:15:18.609982 containerd[2127]: time="2026-03-14T00:15:18.609848872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:18.613327 containerd[2127]: time="2026-03-14T00:15:18.610017880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:18.663319 kubelet[3696]: E0314 00:15:18.659661 3696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckhmv" podUID="563c486e-c3bd-4f54-8571-23d2db9006c2"
Mar 14 00:15:18.664202 containerd[2127]: time="2026-03-14T00:15:18.664145344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d2r6f,Uid:ccbcb924-3e33-489c-a145-aead03dd6ef2,Namespace:calico-system,Attempt:0,}"
Mar 14 00:15:18.736827 kubelet[3696]: E0314 00:15:18.736779 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.737015 kubelet[3696]: W0314 00:15:18.736819 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.737015 kubelet[3696]: E0314 00:15:18.736868 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.739004 kubelet[3696]: E0314 00:15:18.738926 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.739129 kubelet[3696]: W0314 00:15:18.738965 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.739129 kubelet[3696]: E0314 00:15:18.739050 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.743148 kubelet[3696]: E0314 00:15:18.741473 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.743148 kubelet[3696]: W0314 00:15:18.741513 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.743148 kubelet[3696]: E0314 00:15:18.741547 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.744888 kubelet[3696]: E0314 00:15:18.744188 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.744888 kubelet[3696]: W0314 00:15:18.744229 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.744888 kubelet[3696]: E0314 00:15:18.744285 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.746308 kubelet[3696]: E0314 00:15:18.746165 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.746308 kubelet[3696]: W0314 00:15:18.746198 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.746308 kubelet[3696]: E0314 00:15:18.746231 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.749452 kubelet[3696]: E0314 00:15:18.749368 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.749452 kubelet[3696]: W0314 00:15:18.749398 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.749452 kubelet[3696]: E0314 00:15:18.749430 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.750766 kubelet[3696]: E0314 00:15:18.750370 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.750766 kubelet[3696]: W0314 00:15:18.750409 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.750766 kubelet[3696]: E0314 00:15:18.750441 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.752789 kubelet[3696]: E0314 00:15:18.752739 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.752789 kubelet[3696]: W0314 00:15:18.752777 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.753003 kubelet[3696]: E0314 00:15:18.752812 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.757340 kubelet[3696]: E0314 00:15:18.756593 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.757340 kubelet[3696]: W0314 00:15:18.756629 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.757340 kubelet[3696]: E0314 00:15:18.756662 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.758631 kubelet[3696]: E0314 00:15:18.758584 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.758631 kubelet[3696]: W0314 00:15:18.758622 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.758856 kubelet[3696]: E0314 00:15:18.758656 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.759133 kubelet[3696]: E0314 00:15:18.759022 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.759133 kubelet[3696]: W0314 00:15:18.759050 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.759133 kubelet[3696]: E0314 00:15:18.759075 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.761482 kubelet[3696]: E0314 00:15:18.760660 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.761482 kubelet[3696]: W0314 00:15:18.760698 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.761482 kubelet[3696]: E0314 00:15:18.760731 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.764654 kubelet[3696]: E0314 00:15:18.764071 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.764654 kubelet[3696]: W0314 00:15:18.764111 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.764654 kubelet[3696]: E0314 00:15:18.764143 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.772698 kubelet[3696]: E0314 00:15:18.771966 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.772698 kubelet[3696]: W0314 00:15:18.772002 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.772698 kubelet[3696]: E0314 00:15:18.772042 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.773372 kubelet[3696]: E0314 00:15:18.773114 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.773372 kubelet[3696]: W0314 00:15:18.773145 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.773372 kubelet[3696]: E0314 00:15:18.773175 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.774997 kubelet[3696]: E0314 00:15:18.774434 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.774997 kubelet[3696]: W0314 00:15:18.774460 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.774997 kubelet[3696]: E0314 00:15:18.774491 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.778767 kubelet[3696]: E0314 00:15:18.778434 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.778767 kubelet[3696]: W0314 00:15:18.778499 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.778767 kubelet[3696]: E0314 00:15:18.778536 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.779963 kubelet[3696]: E0314 00:15:18.779550 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.779963 kubelet[3696]: W0314 00:15:18.779580 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.779963 kubelet[3696]: E0314 00:15:18.779611 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.789303 kubelet[3696]: E0314 00:15:18.786980 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.789303 kubelet[3696]: W0314 00:15:18.787018 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.789303 kubelet[3696]: E0314 00:15:18.787051 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.789303 kubelet[3696]: E0314 00:15:18.787765 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.789303 kubelet[3696]: W0314 00:15:18.787791 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.789303 kubelet[3696]: E0314 00:15:18.787932 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.789862 kubelet[3696]: E0314 00:15:18.789244 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.790352 kubelet[3696]: W0314 00:15:18.790016 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.790352 kubelet[3696]: E0314 00:15:18.790100 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.790352 kubelet[3696]: I0314 00:15:18.790217 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/563c486e-c3bd-4f54-8571-23d2db9006c2-varrun\") pod \"csi-node-driver-ckhmv\" (UID: \"563c486e-c3bd-4f54-8571-23d2db9006c2\") " pod="calico-system/csi-node-driver-ckhmv"
Mar 14 00:15:18.791804 kubelet[3696]: E0314 00:15:18.791741 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.791804 kubelet[3696]: W0314 00:15:18.791780 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.792501 kubelet[3696]: E0314 00:15:18.791814 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.793697 kubelet[3696]: E0314 00:15:18.793612 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.793697 kubelet[3696]: W0314 00:15:18.793651 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.793697 kubelet[3696]: E0314 00:15:18.793684 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 14 00:15:18.798620 kubelet[3696]: E0314 00:15:18.797502 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.798620 kubelet[3696]: W0314 00:15:18.797541 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.798620 kubelet[3696]: E0314 00:15:18.797602 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.798620 kubelet[3696]: I0314 00:15:18.797661 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/563c486e-c3bd-4f54-8571-23d2db9006c2-socket-dir\") pod \"csi-node-driver-ckhmv\" (UID: \"563c486e-c3bd-4f54-8571-23d2db9006c2\") " pod="calico-system/csi-node-driver-ckhmv" Mar 14 00:15:18.802315 kubelet[3696]: E0314 00:15:18.801921 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.802315 kubelet[3696]: W0314 00:15:18.802104 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.806769 kubelet[3696]: E0314 00:15:18.802154 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.809865 kubelet[3696]: E0314 00:15:18.809816 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.810146 kubelet[3696]: W0314 00:15:18.810115 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.810305 kubelet[3696]: E0314 00:15:18.810249 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.814492 kubelet[3696]: E0314 00:15:18.814232 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.815093 kubelet[3696]: W0314 00:15:18.814942 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.815692 kubelet[3696]: E0314 00:15:18.815551 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.816476 kubelet[3696]: I0314 00:15:18.816057 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb2ns\" (UniqueName: \"kubernetes.io/projected/563c486e-c3bd-4f54-8571-23d2db9006c2-kube-api-access-zb2ns\") pod \"csi-node-driver-ckhmv\" (UID: \"563c486e-c3bd-4f54-8571-23d2db9006c2\") " pod="calico-system/csi-node-driver-ckhmv" Mar 14 00:15:18.818246 kubelet[3696]: E0314 00:15:18.818170 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.818629 kubelet[3696]: W0314 00:15:18.818506 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.819647 kubelet[3696]: E0314 00:15:18.818547 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.820936 kubelet[3696]: E0314 00:15:18.820611 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.820936 kubelet[3696]: W0314 00:15:18.820857 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.821375 kubelet[3696]: E0314 00:15:18.820897 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.822951 kubelet[3696]: E0314 00:15:18.822701 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.822951 kubelet[3696]: W0314 00:15:18.822777 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.822951 kubelet[3696]: E0314 00:15:18.822815 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.823451 kubelet[3696]: I0314 00:15:18.822907 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/563c486e-c3bd-4f54-8571-23d2db9006c2-registration-dir\") pod \"csi-node-driver-ckhmv\" (UID: \"563c486e-c3bd-4f54-8571-23d2db9006c2\") " pod="calico-system/csi-node-driver-ckhmv" Mar 14 00:15:18.824951 kubelet[3696]: E0314 00:15:18.824480 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.824951 kubelet[3696]: W0314 00:15:18.824514 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.824951 kubelet[3696]: E0314 00:15:18.824546 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.826404 kubelet[3696]: E0314 00:15:18.826023 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.826404 kubelet[3696]: W0314 00:15:18.826076 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.826404 kubelet[3696]: E0314 00:15:18.826108 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.826404 kubelet[3696]: I0314 00:15:18.826156 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/563c486e-c3bd-4f54-8571-23d2db9006c2-kubelet-dir\") pod \"csi-node-driver-ckhmv\" (UID: \"563c486e-c3bd-4f54-8571-23d2db9006c2\") " pod="calico-system/csi-node-driver-ckhmv" Mar 14 00:15:18.829163 kubelet[3696]: E0314 00:15:18.827913 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.829163 kubelet[3696]: W0314 00:15:18.827950 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.829163 kubelet[3696]: E0314 00:15:18.827983 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.833061 kubelet[3696]: E0314 00:15:18.831521 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.833061 kubelet[3696]: W0314 00:15:18.831555 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.833061 kubelet[3696]: E0314 00:15:18.831589 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.833782 kubelet[3696]: E0314 00:15:18.833615 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.833782 kubelet[3696]: W0314 00:15:18.833675 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.833782 kubelet[3696]: E0314 00:15:18.833707 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.856838 containerd[2127]: time="2026-03-14T00:15:18.856639541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:18.857015 containerd[2127]: time="2026-03-14T00:15:18.856790633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:18.857015 containerd[2127]: time="2026-03-14T00:15:18.856870325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:18.857812 containerd[2127]: time="2026-03-14T00:15:18.857074433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:18.932094 kubelet[3696]: E0314 00:15:18.931800 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.932094 kubelet[3696]: W0314 00:15:18.931833 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.932094 kubelet[3696]: E0314 00:15:18.931881 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.936071 kubelet[3696]: E0314 00:15:18.935834 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.936666 kubelet[3696]: W0314 00:15:18.935862 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.936666 kubelet[3696]: E0314 00:15:18.936348 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.939290 kubelet[3696]: E0314 00:15:18.938598 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.939506 kubelet[3696]: W0314 00:15:18.938630 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.939666 kubelet[3696]: E0314 00:15:18.939639 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.940463 kubelet[3696]: E0314 00:15:18.940431 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.940702 kubelet[3696]: W0314 00:15:18.940650 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.941061 kubelet[3696]: E0314 00:15:18.940791 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.942925 kubelet[3696]: E0314 00:15:18.942754 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.942925 kubelet[3696]: W0314 00:15:18.942787 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.943660 kubelet[3696]: E0314 00:15:18.943238 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.944730 kubelet[3696]: E0314 00:15:18.944616 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.944730 kubelet[3696]: W0314 00:15:18.944646 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.944730 kubelet[3696]: E0314 00:15:18.944678 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.946326 kubelet[3696]: E0314 00:15:18.946146 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.946326 kubelet[3696]: W0314 00:15:18.946175 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.946326 kubelet[3696]: E0314 00:15:18.946204 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.947451 kubelet[3696]: E0314 00:15:18.947157 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.947451 kubelet[3696]: W0314 00:15:18.947219 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.947451 kubelet[3696]: E0314 00:15:18.947248 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.948408 kubelet[3696]: E0314 00:15:18.948189 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.948408 kubelet[3696]: W0314 00:15:18.948254 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.949324 kubelet[3696]: E0314 00:15:18.948348 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.949987 kubelet[3696]: E0314 00:15:18.949957 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.950163 kubelet[3696]: W0314 00:15:18.950135 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.950293 kubelet[3696]: E0314 00:15:18.950252 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.951806 kubelet[3696]: E0314 00:15:18.951476 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.951806 kubelet[3696]: W0314 00:15:18.951506 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.951806 kubelet[3696]: E0314 00:15:18.951535 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.953091 kubelet[3696]: E0314 00:15:18.953061 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.953581 kubelet[3696]: W0314 00:15:18.953355 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.953581 kubelet[3696]: E0314 00:15:18.953398 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.955136 kubelet[3696]: E0314 00:15:18.954910 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.955136 kubelet[3696]: W0314 00:15:18.954941 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.955136 kubelet[3696]: E0314 00:15:18.954985 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.957313 kubelet[3696]: E0314 00:15:18.956383 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.957570 kubelet[3696]: W0314 00:15:18.957529 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.957724 kubelet[3696]: E0314 00:15:18.957699 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.959744 kubelet[3696]: E0314 00:15:18.959479 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.959744 kubelet[3696]: W0314 00:15:18.959530 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.959744 kubelet[3696]: E0314 00:15:18.959565 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.962715 kubelet[3696]: E0314 00:15:18.962246 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.962715 kubelet[3696]: W0314 00:15:18.962463 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.962715 kubelet[3696]: E0314 00:15:18.962528 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.965345 kubelet[3696]: E0314 00:15:18.965156 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.965715 kubelet[3696]: W0314 00:15:18.965452 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.966547 kubelet[3696]: E0314 00:15:18.965493 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.969779 kubelet[3696]: E0314 00:15:18.969736 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.969779 kubelet[3696]: W0314 00:15:18.969817 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.969779 kubelet[3696]: E0314 00:15:18.969853 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.972328 kubelet[3696]: E0314 00:15:18.972182 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.972328 kubelet[3696]: W0314 00:15:18.972215 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.972328 kubelet[3696]: E0314 00:15:18.972251 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.973804 containerd[2127]: time="2026-03-14T00:15:18.973188054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ccbf44f6b-swp9j,Uid:746b1031-8c8c-441f-bf02-b9cebd0d991a,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8c79d0275caeb67841383412834d9df7e8b78bbe9cb621d1f8581ba152e4665\"" Mar 14 00:15:18.974219 kubelet[3696]: E0314 00:15:18.974126 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.974219 kubelet[3696]: W0314 00:15:18.974157 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.974219 kubelet[3696]: E0314 00:15:18.974189 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:18.975748 kubelet[3696]: E0314 00:15:18.975331 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.975748 kubelet[3696]: W0314 00:15:18.975356 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.975748 kubelet[3696]: E0314 00:15:18.975384 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:18.977324 kubelet[3696]: E0314 00:15:18.976893 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:18.977324 kubelet[3696]: W0314 00:15:18.976922 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:18.977324 kubelet[3696]: E0314 00:15:18.976952 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 14 00:15:18.978282 kubelet[3696]: E0314 00:15:18.978049 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.978282 kubelet[3696]: W0314 00:15:18.978073 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.978282 kubelet[3696]: E0314 00:15:18.978099 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.979447 kubelet[3696]: E0314 00:15:18.979017 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.979447 kubelet[3696]: W0314 00:15:18.979042 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.979447 kubelet[3696]: E0314 00:15:18.979066 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.981050 kubelet[3696]: E0314 00:15:18.980804 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.981050 kubelet[3696]: W0314 00:15:18.980830 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.981050 kubelet[3696]: E0314 00:15:18.980858 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:18.988041 containerd[2127]: time="2026-03-14T00:15:18.986630010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 14 00:15:18.998026 kubelet[3696]: E0314 00:15:18.997694 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:18.998026 kubelet[3696]: W0314 00:15:18.997726 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:18.998026 kubelet[3696]: E0314 00:15:18.997757 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:19.005140 containerd[2127]: time="2026-03-14T00:15:19.004907318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d2r6f,Uid:ccbcb924-3e33-489c-a145-aead03dd6ef2,Namespace:calico-system,Attempt:0,} returns sandbox id \"b76af6111eb007da6ee9b9df1324a9d84dbb29585cc790a75bba856086502a48\""
Mar 14 00:15:20.192704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1176174636.mount: Deactivated successfully.
Mar 14 00:15:20.372833 kubelet[3696]: E0314 00:15:20.372766 3696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckhmv" podUID="563c486e-c3bd-4f54-8571-23d2db9006c2"
Mar 14 00:15:20.970458 containerd[2127]: time="2026-03-14T00:15:20.970401188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:20.973383 containerd[2127]: time="2026-03-14T00:15:20.973321244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174"
Mar 14 00:15:20.975576 containerd[2127]: time="2026-03-14T00:15:20.975499460Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:20.983833 containerd[2127]: time="2026-03-14T00:15:20.983778056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:20.985446 containerd[2127]: time="2026-03-14T00:15:20.985233824Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 1.998538582s"
Mar 14 00:15:20.985446 containerd[2127]: time="2026-03-14T00:15:20.985309976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\""
Mar 14 00:15:20.987804 containerd[2127]: time="2026-03-14T00:15:20.987739640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 14 00:15:21.017407 containerd[2127]: time="2026-03-14T00:15:21.017166400Z" level=info msg="CreateContainer within sandbox \"f8c79d0275caeb67841383412834d9df7e8b78bbe9cb621d1f8581ba152e4665\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 14 00:15:21.051452 containerd[2127]: time="2026-03-14T00:15:21.050933476Z" level=info msg="CreateContainer within sandbox \"f8c79d0275caeb67841383412834d9df7e8b78bbe9cb621d1f8581ba152e4665\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e5cba7e02cac7506482286be55fda1fc127def517458e61a00a198f09703a4fb\""
Mar 14 00:15:21.053815 containerd[2127]: time="2026-03-14T00:15:21.052182748Z" level=info msg="StartContainer for \"e5cba7e02cac7506482286be55fda1fc127def517458e61a00a198f09703a4fb\""
Mar 14 00:15:21.181760 containerd[2127]: time="2026-03-14T00:15:21.181668617Z" level=info msg="StartContainer for \"e5cba7e02cac7506482286be55fda1fc127def517458e61a00a198f09703a4fb\" returns successfully"
Mar 14 00:15:21.612718 kubelet[3696]: E0314 00:15:21.612459 3696 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:21.612718 kubelet[3696]: W0314 00:15:21.612499 3696 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:21.612718 kubelet[3696]: E0314 00:15:21.612531 3696 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 14 00:15:22.275947 containerd[2127]: time="2026-03-14T00:15:22.275891046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:22.278299 containerd[2127]: time="2026-03-14T00:15:22.278216166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682"
Mar 14 00:15:22.280715 containerd[2127]: time="2026-03-14T00:15:22.280637070Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:22.287212 containerd[2127]: time="2026-03-14T00:15:22.287135778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:22.289580 containerd[2127]: time="2026-03-14T00:15:22.288996222Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.301173758s"
Mar 14 00:15:22.289580 containerd[2127]: time="2026-03-14T00:15:22.289060386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\""
Mar 14 00:15:22.298224 containerd[2127]: time="2026-03-14T00:15:22.298163274Z" level=info msg="CreateContainer within sandbox \"b76af6111eb007da6ee9b9df1324a9d84dbb29585cc790a75bba856086502a48\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Mar 14 00:15:22.336068 containerd[2127]: time="2026-03-14T00:15:22.335997210Z" level=info msg="CreateContainer within sandbox \"b76af6111eb007da6ee9b9df1324a9d84dbb29585cc790a75bba856086502a48\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"468b4d39419caf8ffceb2409fbcf7a946539a973ac040df8efb00ff3aa083047\""
Mar 14 00:15:22.338550 containerd[2127]: time="2026-03-14T00:15:22.337864998Z" level=info msg="StartContainer for \"468b4d39419caf8ffceb2409fbcf7a946539a973ac040df8efb00ff3aa083047\""
Mar 14 00:15:22.375547 kubelet[3696]: E0314 00:15:22.373663 3696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckhmv" podUID="563c486e-c3bd-4f54-8571-23d2db9006c2"
Mar 14 00:15:22.456777 containerd[2127]: time="2026-03-14T00:15:22.456713095Z" level=info msg="StartContainer for \"468b4d39419caf8ffceb2409fbcf7a946539a973ac040df8efb00ff3aa083047\" returns successfully"
Mar 14 00:15:22.534692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-468b4d39419caf8ffceb2409fbcf7a946539a973ac040df8efb00ff3aa083047-rootfs.mount: Deactivated successfully.
Mar 14 00:15:22.567619 kubelet[3696]: I0314 00:15:22.567575 3696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:15:22.610482 kubelet[3696]: I0314 00:15:22.610394 3696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-ccbf44f6b-swp9j" podStartSLOduration=2.609190698 podStartE2EDuration="4.610354448s" podCreationTimestamp="2026-03-14 00:15:18 +0000 UTC" firstStartedPulling="2026-03-14 00:15:18.98579199 +0000 UTC m=+27.847060868" lastFinishedPulling="2026-03-14 00:15:20.986955668 +0000 UTC m=+29.848224618" observedRunningTime="2026-03-14 00:15:21.603895603 +0000 UTC m=+30.465164505" watchObservedRunningTime="2026-03-14 00:15:22.610354448 +0000 UTC m=+31.471623314"
Mar 14 00:15:23.010577 containerd[2127]: time="2026-03-14T00:15:23.010496994Z" level=info msg="shim disconnected" id=468b4d39419caf8ffceb2409fbcf7a946539a973ac040df8efb00ff3aa083047 namespace=k8s.io
Mar 14 00:15:23.010577 containerd[2127]: time="2026-03-14T00:15:23.010573302Z" level=warning msg="cleaning up after shim disconnected" id=468b4d39419caf8ffceb2409fbcf7a946539a973ac040df8efb00ff3aa083047 namespace=k8s.io
Mar 14 00:15:23.011031 containerd[2127]: time="2026-03-14T00:15:23.010595430Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:15:23.577648 containerd[2127]: time="2026-03-14T00:15:23.577592841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Mar 14 00:15:24.373610 kubelet[3696]: E0314 00:15:24.371942 3696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckhmv" podUID="563c486e-c3bd-4f54-8571-23d2db9006c2"
Mar 14 00:15:26.373589 kubelet[3696]: E0314 00:15:26.373072 3696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckhmv" podUID="563c486e-c3bd-4f54-8571-23d2db9006c2"
Mar 14 00:15:28.372892 kubelet[3696]: E0314 00:15:28.372821 3696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckhmv" podUID="563c486e-c3bd-4f54-8571-23d2db9006c2"
Mar 14 00:15:29.862078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1784669164.mount: Deactivated successfully.
Mar 14 00:15:29.947576 containerd[2127]: time="2026-03-14T00:15:29.947517256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:29.951809 containerd[2127]: time="2026-03-14T00:15:29.951733096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674"
Mar 14 00:15:29.956513 containerd[2127]: time="2026-03-14T00:15:29.956433448Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:29.966747 containerd[2127]: time="2026-03-14T00:15:29.966659176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:29.969301 containerd[2127]: time="2026-03-14T00:15:29.969155728Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.391442551s"
Mar 14 00:15:29.969301 containerd[2127]: time="2026-03-14T00:15:29.969210304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\""
Mar 14 00:15:29.978035 containerd[2127]: time="2026-03-14T00:15:29.977815204Z" level=info msg="CreateContainer within sandbox \"b76af6111eb007da6ee9b9df1324a9d84dbb29585cc790a75bba856086502a48\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Mar 14 00:15:30.004572 containerd[2127]: time="2026-03-14T00:15:30.004493436Z" level=info msg="CreateContainer within sandbox \"b76af6111eb007da6ee9b9df1324a9d84dbb29585cc790a75bba856086502a48\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"9d6dbdc7b2ae7928c3718eedec4ca1acfd7c0e9828d309bddb10a635a44e733e\""
Mar 14 00:15:30.006395 containerd[2127]: time="2026-03-14T00:15:30.005708892Z" level=info msg="StartContainer for \"9d6dbdc7b2ae7928c3718eedec4ca1acfd7c0e9828d309bddb10a635a44e733e\""
Mar 14 00:15:30.119823 containerd[2127]: time="2026-03-14T00:15:30.119500777Z" level=info msg="StartContainer for \"9d6dbdc7b2ae7928c3718eedec4ca1acfd7c0e9828d309bddb10a635a44e733e\" returns successfully"
Mar 14 00:15:30.373322 kubelet[3696]: E0314 00:15:30.372573 3696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckhmv" podUID="563c486e-c3bd-4f54-8571-23d2db9006c2"
Mar 14 00:15:30.860029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d6dbdc7b2ae7928c3718eedec4ca1acfd7c0e9828d309bddb10a635a44e733e-rootfs.mount: Deactivated successfully.
Mar 14 00:15:31.802041 containerd[2127]: time="2026-03-14T00:15:31.801987917Z" level=error msg="collecting metrics for 9d6dbdc7b2ae7928c3718eedec4ca1acfd7c0e9828d309bddb10a635a44e733e" error="cgroups: cgroup deleted: unknown"
Mar 14 00:15:31.846342 containerd[2127]: time="2026-03-14T00:15:31.846223734Z" level=info msg="shim disconnected" id=9d6dbdc7b2ae7928c3718eedec4ca1acfd7c0e9828d309bddb10a635a44e733e namespace=k8s.io
Mar 14 00:15:31.846342 containerd[2127]: time="2026-03-14T00:15:31.846382386Z" level=warning msg="cleaning up after shim disconnected" id=9d6dbdc7b2ae7928c3718eedec4ca1acfd7c0e9828d309bddb10a635a44e733e namespace=k8s.io
Mar 14 00:15:31.846342 containerd[2127]: time="2026-03-14T00:15:31.846404550Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:15:32.373346 kubelet[3696]: E0314 00:15:32.372476 3696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckhmv" podUID="563c486e-c3bd-4f54-8571-23d2db9006c2"
Mar 14 00:15:32.617405 containerd[2127]: time="2026-03-14T00:15:32.617336345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Mar 14 00:15:34.190014 kubelet[3696]: I0314 00:15:34.188805 3696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:15:34.372662 kubelet[3696]: E0314 00:15:34.372057 3696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckhmv" podUID="563c486e-c3bd-4f54-8571-23d2db9006c2"
Mar 14 00:15:35.525189 containerd[2127]: time="2026-03-14T00:15:35.525108188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:35.526980 containerd[2127]: time="2026-03-14T00:15:35.526901684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 14 00:15:35.528709 containerd[2127]: time="2026-03-14T00:15:35.528632792Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:35.533499 containerd[2127]: time="2026-03-14T00:15:35.533413808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:35.538538 containerd[2127]: time="2026-03-14T00:15:35.536866856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.919246675s" Mar 14 00:15:35.538538 containerd[2127]: time="2026-03-14T00:15:35.536937020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 14 00:15:35.546927 containerd[2127]: time="2026-03-14T00:15:35.546847040Z" level=info msg="CreateContainer within sandbox \"b76af6111eb007da6ee9b9df1324a9d84dbb29585cc790a75bba856086502a48\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 14 00:15:35.575642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2798326046.mount: Deactivated successfully. 
Mar 14 00:15:35.576169 containerd[2127]: time="2026-03-14T00:15:35.576075236Z" level=info msg="CreateContainer within sandbox \"b76af6111eb007da6ee9b9df1324a9d84dbb29585cc790a75bba856086502a48\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9bec11d12664ba311cc341978de460bd10ac37582497adb08538569635f3da30\"" Mar 14 00:15:35.579531 containerd[2127]: time="2026-03-14T00:15:35.578720096Z" level=info msg="StartContainer for \"9bec11d12664ba311cc341978de460bd10ac37582497adb08538569635f3da30\"" Mar 14 00:15:35.703005 containerd[2127]: time="2026-03-14T00:15:35.702922569Z" level=info msg="StartContainer for \"9bec11d12664ba311cc341978de460bd10ac37582497adb08538569635f3da30\" returns successfully" Mar 14 00:15:36.372849 kubelet[3696]: E0314 00:15:36.372755 3696 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ckhmv" podUID="563c486e-c3bd-4f54-8571-23d2db9006c2" Mar 14 00:15:37.408794 containerd[2127]: time="2026-03-14T00:15:37.408727221Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:15:37.437508 kubelet[3696]: I0314 00:15:37.436224 3696 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 14 00:15:37.544013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bec11d12664ba311cc341978de460bd10ac37582497adb08538569635f3da30-rootfs.mount: Deactivated successfully. 
Mar 14 00:15:37.565392 containerd[2127]: time="2026-03-14T00:15:37.565120282Z" level=info msg="shim disconnected" id=9bec11d12664ba311cc341978de460bd10ac37582497adb08538569635f3da30 namespace=k8s.io Mar 14 00:15:37.565392 containerd[2127]: time="2026-03-14T00:15:37.565305118Z" level=warning msg="cleaning up after shim disconnected" id=9bec11d12664ba311cc341978de460bd10ac37582497adb08538569635f3da30 namespace=k8s.io Mar 14 00:15:37.565392 containerd[2127]: time="2026-03-14T00:15:37.565330606Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:37.619827 containerd[2127]: time="2026-03-14T00:15:37.619738558Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:15:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:15:37.636344 kubelet[3696]: I0314 00:15:37.634169 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75cf24a3-e934-4460-adde-7f2e597be69d-config-volume\") pod \"coredns-674b8bbfcf-j54qb\" (UID: \"75cf24a3-e934-4460-adde-7f2e597be69d\") " pod="kube-system/coredns-674b8bbfcf-j54qb" Mar 14 00:15:37.636344 kubelet[3696]: I0314 00:15:37.634244 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88h9v\" (UniqueName: \"kubernetes.io/projected/75cf24a3-e934-4460-adde-7f2e597be69d-kube-api-access-88h9v\") pod \"coredns-674b8bbfcf-j54qb\" (UID: \"75cf24a3-e934-4460-adde-7f2e597be69d\") " pod="kube-system/coredns-674b8bbfcf-j54qb" Mar 14 00:15:37.636344 kubelet[3696]: I0314 00:15:37.634329 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8e6ece2-771a-4b1a-940c-5db5a7d45aa3-tigera-ca-bundle\") pod 
\"calico-kube-controllers-8444cc7f95-vdkpl\" (UID: \"b8e6ece2-771a-4b1a-940c-5db5a7d45aa3\") " pod="calico-system/calico-kube-controllers-8444cc7f95-vdkpl" Mar 14 00:15:37.636344 kubelet[3696]: I0314 00:15:37.634396 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcrmc\" (UniqueName: \"kubernetes.io/projected/b8e6ece2-771a-4b1a-940c-5db5a7d45aa3-kube-api-access-vcrmc\") pod \"calico-kube-controllers-8444cc7f95-vdkpl\" (UID: \"b8e6ece2-771a-4b1a-940c-5db5a7d45aa3\") " pod="calico-system/calico-kube-controllers-8444cc7f95-vdkpl" Mar 14 00:15:37.736380 kubelet[3696]: I0314 00:15:37.735520 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e101a10e-f453-421d-a606-7f3752c5f727-config-volume\") pod \"coredns-674b8bbfcf-4j2tf\" (UID: \"e101a10e-f453-421d-a606-7f3752c5f727\") " pod="kube-system/coredns-674b8bbfcf-4j2tf" Mar 14 00:15:37.736380 kubelet[3696]: I0314 00:15:37.735587 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv64n\" (UniqueName: \"kubernetes.io/projected/850b6616-1e79-423c-baa4-b84acf3a98ba-kube-api-access-dv64n\") pod \"goldmane-5b85766d88-22gqf\" (UID: \"850b6616-1e79-423c-baa4-b84acf3a98ba\") " pod="calico-system/goldmane-5b85766d88-22gqf" Mar 14 00:15:37.736380 kubelet[3696]: I0314 00:15:37.735638 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knb9t\" (UniqueName: \"kubernetes.io/projected/e101a10e-f453-421d-a606-7f3752c5f727-kube-api-access-knb9t\") pod \"coredns-674b8bbfcf-4j2tf\" (UID: \"e101a10e-f453-421d-a606-7f3752c5f727\") " pod="kube-system/coredns-674b8bbfcf-4j2tf" Mar 14 00:15:37.736380 kubelet[3696]: I0314 00:15:37.735679 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/850b6616-1e79-423c-baa4-b84acf3a98ba-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-22gqf\" (UID: \"850b6616-1e79-423c-baa4-b84acf3a98ba\") " pod="calico-system/goldmane-5b85766d88-22gqf" Mar 14 00:15:37.736380 kubelet[3696]: I0314 00:15:37.735720 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811-nginx-config\") pod \"whisker-75cf64fb64-gfxc8\" (UID: \"6c90c97c-9e7f-46d8-bdd7-1f5e381c5811\") " pod="calico-system/whisker-75cf64fb64-gfxc8" Mar 14 00:15:37.739450 kubelet[3696]: I0314 00:15:37.735755 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811-whisker-backend-key-pair\") pod \"whisker-75cf64fb64-gfxc8\" (UID: \"6c90c97c-9e7f-46d8-bdd7-1f5e381c5811\") " pod="calico-system/whisker-75cf64fb64-gfxc8" Mar 14 00:15:37.739450 kubelet[3696]: I0314 00:15:37.735798 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96dkb\" (UniqueName: \"kubernetes.io/projected/a25a53f1-9260-4c87-baf3-12d319ebd2af-kube-api-access-96dkb\") pod \"calico-apiserver-6579f7776-4xv6h\" (UID: \"a25a53f1-9260-4c87-baf3-12d319ebd2af\") " pod="calico-system/calico-apiserver-6579f7776-4xv6h" Mar 14 00:15:37.739450 kubelet[3696]: I0314 00:15:37.735838 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251-calico-apiserver-certs\") pod \"calico-apiserver-6579f7776-dhvpk\" (UID: \"5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251\") " pod="calico-system/calico-apiserver-6579f7776-dhvpk" Mar 14 00:15:37.739450 kubelet[3696]: I0314 00:15:37.739096 3696 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/850b6616-1e79-423c-baa4-b84acf3a98ba-goldmane-key-pair\") pod \"goldmane-5b85766d88-22gqf\" (UID: \"850b6616-1e79-423c-baa4-b84acf3a98ba\") " pod="calico-system/goldmane-5b85766d88-22gqf" Mar 14 00:15:37.739450 kubelet[3696]: I0314 00:15:37.739189 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9nz2\" (UniqueName: \"kubernetes.io/projected/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811-kube-api-access-r9nz2\") pod \"whisker-75cf64fb64-gfxc8\" (UID: \"6c90c97c-9e7f-46d8-bdd7-1f5e381c5811\") " pod="calico-system/whisker-75cf64fb64-gfxc8" Mar 14 00:15:37.739745 kubelet[3696]: I0314 00:15:37.739255 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shqpd\" (UniqueName: \"kubernetes.io/projected/5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251-kube-api-access-shqpd\") pod \"calico-apiserver-6579f7776-dhvpk\" (UID: \"5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251\") " pod="calico-system/calico-apiserver-6579f7776-dhvpk" Mar 14 00:15:37.742922 kubelet[3696]: I0314 00:15:37.739752 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811-whisker-ca-bundle\") pod \"whisker-75cf64fb64-gfxc8\" (UID: \"6c90c97c-9e7f-46d8-bdd7-1f5e381c5811\") " pod="calico-system/whisker-75cf64fb64-gfxc8" Mar 14 00:15:37.743842 kubelet[3696]: I0314 00:15:37.743350 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a25a53f1-9260-4c87-baf3-12d319ebd2af-calico-apiserver-certs\") pod \"calico-apiserver-6579f7776-4xv6h\" (UID: \"a25a53f1-9260-4c87-baf3-12d319ebd2af\") " 
pod="calico-system/calico-apiserver-6579f7776-4xv6h" Mar 14 00:15:37.751424 kubelet[3696]: I0314 00:15:37.743438 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/850b6616-1e79-423c-baa4-b84acf3a98ba-config\") pod \"goldmane-5b85766d88-22gqf\" (UID: \"850b6616-1e79-423c-baa4-b84acf3a98ba\") " pod="calico-system/goldmane-5b85766d88-22gqf" Mar 14 00:15:37.790562 containerd[2127]: time="2026-03-14T00:15:37.790475555Z" level=info msg="CreateContainer within sandbox \"b76af6111eb007da6ee9b9df1324a9d84dbb29585cc790a75bba856086502a48\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 14 00:15:37.856119 kubelet[3696]: I0314 00:15:37.856050 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwfs7\" (UniqueName: \"kubernetes.io/projected/d4ea2f60-bae6-4228-ba61-99aef0e1061d-kube-api-access-fwfs7\") pod \"calico-apiserver-7bfc5795cd-9tsrz\" (UID: \"d4ea2f60-bae6-4228-ba61-99aef0e1061d\") " pod="calico-system/calico-apiserver-7bfc5795cd-9tsrz" Mar 14 00:15:37.857321 kubelet[3696]: I0314 00:15:37.856366 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d4ea2f60-bae6-4228-ba61-99aef0e1061d-calico-apiserver-certs\") pod \"calico-apiserver-7bfc5795cd-9tsrz\" (UID: \"d4ea2f60-bae6-4228-ba61-99aef0e1061d\") " pod="calico-system/calico-apiserver-7bfc5795cd-9tsrz" Mar 14 00:15:37.905320 containerd[2127]: time="2026-03-14T00:15:37.903517884Z" level=info msg="CreateContainer within sandbox \"b76af6111eb007da6ee9b9df1324a9d84dbb29585cc790a75bba856086502a48\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7974f3e24038949c9914007fc6105fdaf5bdd736f76167536e0d269d58febc78\"" Mar 14 00:15:37.907928 containerd[2127]: time="2026-03-14T00:15:37.907746024Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8444cc7f95-vdkpl,Uid:b8e6ece2-771a-4b1a-940c-5db5a7d45aa3,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:37.908310 containerd[2127]: time="2026-03-14T00:15:37.908230896Z" level=info msg="StartContainer for \"7974f3e24038949c9914007fc6105fdaf5bdd736f76167536e0d269d58febc78\"" Mar 14 00:15:37.967964 containerd[2127]: time="2026-03-14T00:15:37.967887120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6579f7776-dhvpk,Uid:5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:37.973193 containerd[2127]: time="2026-03-14T00:15:37.970151076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j54qb,Uid:75cf24a3-e934-4460-adde-7f2e597be69d,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:37.973193 containerd[2127]: time="2026-03-14T00:15:37.970746000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6579f7776-4xv6h,Uid:a25a53f1-9260-4c87-baf3-12d319ebd2af,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:38.012791 containerd[2127]: time="2026-03-14T00:15:38.012738944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-22gqf,Uid:850b6616-1e79-423c-baa4-b84acf3a98ba,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:38.162724 containerd[2127]: time="2026-03-14T00:15:38.162652305Z" level=info msg="StartContainer for \"7974f3e24038949c9914007fc6105fdaf5bdd736f76167536e0d269d58febc78\" returns successfully" Mar 14 00:15:38.259199 containerd[2127]: time="2026-03-14T00:15:38.258659481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75cf64fb64-gfxc8,Uid:6c90c97c-9e7f-46d8-bdd7-1f5e381c5811,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:38.272976 containerd[2127]: time="2026-03-14T00:15:38.272380210Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-4j2tf,Uid:e101a10e-f453-421d-a606-7f3752c5f727,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:38.307379 containerd[2127]: time="2026-03-14T00:15:38.306720202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bfc5795cd-9tsrz,Uid:d4ea2f60-bae6-4228-ba61-99aef0e1061d,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:38.392118 containerd[2127]: time="2026-03-14T00:15:38.391419322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckhmv,Uid:563c486e-c3bd-4f54-8571-23d2db9006c2,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:38.858909 kubelet[3696]: I0314 00:15:38.858782 3696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d2r6f" podStartSLOduration=4.328705194 podStartE2EDuration="20.85870848s" podCreationTimestamp="2026-03-14 00:15:18 +0000 UTC" firstStartedPulling="2026-03-14 00:15:19.009419066 +0000 UTC m=+27.870687944" lastFinishedPulling="2026-03-14 00:15:35.539422364 +0000 UTC m=+44.400691230" observedRunningTime="2026-03-14 00:15:38.858590316 +0000 UTC m=+47.719859218" watchObservedRunningTime="2026-03-14 00:15:38.85870848 +0000 UTC m=+47.719977358" Mar 14 00:15:39.895657 systemd-networkd[1684]: cali1bf52bbf4df: Link UP Mar 14 00:15:39.904635 systemd-networkd[1684]: cali1bf52bbf4df: Gained carrier Mar 14 00:15:39.906093 systemd[1]: run-containerd-runc-k8s.io-7974f3e24038949c9914007fc6105fdaf5bdd736f76167536e0d269d58febc78-runc.21nGnW.mount: Deactivated successfully. Mar 14 00:15:39.919032 (udev-worker)[4872]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:15:39.926574 systemd-resolved[2025]: Under memory pressure, flushing caches. Mar 14 00:15:39.927636 systemd-journald[1608]: Under memory pressure, flushing caches. Mar 14 00:15:39.926659 systemd-resolved[2025]: Flushed all caches. 
Mar 14 00:15:39.997252 systemd-networkd[1684]: calie2060b10e2a: Link UP Mar 14 00:15:40.003702 systemd-networkd[1684]: calie2060b10e2a: Gained carrier Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:38.848 [ERROR][4634] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.030 [INFO][4634] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--2-k8s-goldmane--5b85766d88--22gqf-eth0 goldmane-5b85766d88- calico-system 850b6616-1e79-423c-baa4-b84acf3a98ba 921 0 2026-03-14 00:15:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-2 goldmane-5b85766d88-22gqf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1bf52bbf4df [] [] }} ContainerID="486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" Namespace="calico-system" Pod="goldmane-5b85766d88-22gqf" WorkloadEndpoint="ip--172--31--28--2-k8s-goldmane--5b85766d88--22gqf-" Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.030 [INFO][4634] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" Namespace="calico-system" Pod="goldmane-5b85766d88-22gqf" WorkloadEndpoint="ip--172--31--28--2-k8s-goldmane--5b85766d88--22gqf-eth0" Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.600 [INFO][4766] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" HandleID="k8s-pod-network.486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" 
Workload="ip--172--31--28--2-k8s-goldmane--5b85766d88--22gqf-eth0" Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.658 [INFO][4766] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" HandleID="k8s-pod-network.486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" Workload="ip--172--31--28--2-k8s-goldmane--5b85766d88--22gqf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400036e520), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-2", "pod":"goldmane-5b85766d88-22gqf", "timestamp":"2026-03-14 00:15:39.60046656 +0000 UTC"}, Hostname:"ip-172-31-28-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000358b00)} Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.658 [INFO][4766] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.658 [INFO][4766] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.658 [INFO][4766] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-2' Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.675 [INFO][4766] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" host="ip-172-31-28-2" Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.717 [INFO][4766] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-2" Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.736 [INFO][4766] ipam/ipam.go 526: Trying affinity for 192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.746 [INFO][4766] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.768 [INFO][4766] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.769 [INFO][4766] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" host="ip-172-31-28-2" Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.781 [INFO][4766] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28 Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.800 [INFO][4766] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" host="ip-172-31-28-2" Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.817 [INFO][4766] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.50.193/26] block=192.168.50.192/26 
handle="k8s-pod-network.486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" host="ip-172-31-28-2" Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.818 [INFO][4766] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.193/26] handle="k8s-pod-network.486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" host="ip-172-31-28-2" Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.819 [INFO][4766] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:40.024650 containerd[2127]: 2026-03-14 00:15:39.820 [INFO][4766] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.193/26] IPv6=[] ContainerID="486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" HandleID="k8s-pod-network.486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" Workload="ip--172--31--28--2-k8s-goldmane--5b85766d88--22gqf-eth0" Mar 14 00:15:40.032052 containerd[2127]: 2026-03-14 00:15:39.856 [INFO][4634] cni-plugin/k8s.go 418: Populated endpoint ContainerID="486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" Namespace="calico-system" Pod="goldmane-5b85766d88-22gqf" WorkloadEndpoint="ip--172--31--28--2-k8s-goldmane--5b85766d88--22gqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-goldmane--5b85766d88--22gqf-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"850b6616-1e79-423c-baa4-b84acf3a98ba", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"", Pod:"goldmane-5b85766d88-22gqf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1bf52bbf4df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.032052 containerd[2127]: 2026-03-14 00:15:39.857 [INFO][4634] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.193/32] ContainerID="486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" Namespace="calico-system" Pod="goldmane-5b85766d88-22gqf" WorkloadEndpoint="ip--172--31--28--2-k8s-goldmane--5b85766d88--22gqf-eth0" Mar 14 00:15:40.032052 containerd[2127]: 2026-03-14 00:15:39.857 [INFO][4634] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1bf52bbf4df ContainerID="486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" Namespace="calico-system" Pod="goldmane-5b85766d88-22gqf" WorkloadEndpoint="ip--172--31--28--2-k8s-goldmane--5b85766d88--22gqf-eth0" Mar 14 00:15:40.032052 containerd[2127]: 2026-03-14 00:15:39.911 [INFO][4634] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" Namespace="calico-system" Pod="goldmane-5b85766d88-22gqf" WorkloadEndpoint="ip--172--31--28--2-k8s-goldmane--5b85766d88--22gqf-eth0" Mar 14 00:15:40.032052 containerd[2127]: 2026-03-14 00:15:39.921 [INFO][4634] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" 
Namespace="calico-system" Pod="goldmane-5b85766d88-22gqf" WorkloadEndpoint="ip--172--31--28--2-k8s-goldmane--5b85766d88--22gqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-goldmane--5b85766d88--22gqf-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"850b6616-1e79-423c-baa4-b84acf3a98ba", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28", Pod:"goldmane-5b85766d88-22gqf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1bf52bbf4df", MAC:"ce:94:d9:e0:18:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.032052 containerd[2127]: 2026-03-14 00:15:39.991 [INFO][4634] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28" Namespace="calico-system" Pod="goldmane-5b85766d88-22gqf" WorkloadEndpoint="ip--172--31--28--2-k8s-goldmane--5b85766d88--22gqf-eth0" Mar 14 00:15:40.075547 
containerd[2127]: 2026-03-14 00:15:38.912 [ERROR][4621] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.085 [INFO][4621] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0 calico-apiserver-6579f7776- calico-system a25a53f1-9260-4c87-baf3-12d319ebd2af 918 0 2026-03-14 00:15:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6579f7776 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-2 calico-apiserver-6579f7776-4xv6h eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calie2060b10e2a [] [] }} ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Namespace="calico-system" Pod="calico-apiserver-6579f7776-4xv6h" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-" Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.088 [INFO][4621] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Namespace="calico-system" Pod="calico-apiserver-6579f7776-4xv6h" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0" Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.664 [INFO][4778] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" HandleID="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0" 
Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.725 [INFO][4778] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" HandleID="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034e980), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-2", "pod":"calico-apiserver-6579f7776-4xv6h", "timestamp":"2026-03-14 00:15:39.664106868 +0000 UTC"}, Hostname:"ip-172-31-28-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400021f600)} Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.725 [INFO][4778] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.819 [INFO][4778] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.821 [INFO][4778] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-2' Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.826 [INFO][4778] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" host="ip-172-31-28-2" Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.841 [INFO][4778] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-2" Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.858 [INFO][4778] ipam/ipam.go 526: Trying affinity for 192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.863 [INFO][4778] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.870 [INFO][4778] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.870 [INFO][4778] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" host="ip-172-31-28-2" Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.875 [INFO][4778] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7 Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.908 [INFO][4778] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" host="ip-172-31-28-2" Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.928 [INFO][4778] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.50.194/26] block=192.168.50.192/26 
handle="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" host="ip-172-31-28-2" Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.930 [INFO][4778] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.194/26] handle="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" host="ip-172-31-28-2" Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.930 [INFO][4778] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:40.075547 containerd[2127]: 2026-03-14 00:15:39.931 [INFO][4778] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.194/26] IPv6=[] ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" HandleID="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0" Mar 14 00:15:40.078222 containerd[2127]: 2026-03-14 00:15:39.965 [INFO][4621] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Namespace="calico-system" Pod="calico-apiserver-6579f7776-4xv6h" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0", GenerateName:"calico-apiserver-6579f7776-", Namespace:"calico-system", SelfLink:"", UID:"a25a53f1-9260-4c87-baf3-12d319ebd2af", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6579f7776", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"", Pod:"calico-apiserver-6579f7776-4xv6h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie2060b10e2a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.078222 containerd[2127]: 2026-03-14 00:15:39.971 [INFO][4621] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.194/32] ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Namespace="calico-system" Pod="calico-apiserver-6579f7776-4xv6h" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0" Mar 14 00:15:40.078222 containerd[2127]: 2026-03-14 00:15:39.977 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2060b10e2a ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Namespace="calico-system" Pod="calico-apiserver-6579f7776-4xv6h" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0" Mar 14 00:15:40.078222 containerd[2127]: 2026-03-14 00:15:40.010 [INFO][4621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Namespace="calico-system" Pod="calico-apiserver-6579f7776-4xv6h" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0" Mar 14 00:15:40.078222 containerd[2127]: 2026-03-14 00:15:40.012 [INFO][4621] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Namespace="calico-system" Pod="calico-apiserver-6579f7776-4xv6h" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0", GenerateName:"calico-apiserver-6579f7776-", Namespace:"calico-system", SelfLink:"", UID:"a25a53f1-9260-4c87-baf3-12d319ebd2af", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6579f7776", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7", Pod:"calico-apiserver-6579f7776-4xv6h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calie2060b10e2a", MAC:"f6:4f:14:6b:f4:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.078222 containerd[2127]: 2026-03-14 00:15:40.063 [INFO][4621] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Namespace="calico-system" Pod="calico-apiserver-6579f7776-4xv6h" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0" Mar 14 00:15:40.143112 systemd-networkd[1684]: cali5cda50cbe9a: Link UP Mar 14 00:15:40.148374 systemd-networkd[1684]: cali5cda50cbe9a: Gained carrier Mar 14 00:15:40.195398 containerd[2127]: time="2026-03-14T00:15:40.194581559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:40.195398 containerd[2127]: time="2026-03-14T00:15:40.194704883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:40.204101 containerd[2127]: time="2026-03-14T00:15:40.199453763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:40.206877 containerd[2127]: time="2026-03-14T00:15:40.206675519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:38.868 [ERROR][4615] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:39.096 [INFO][4615] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--2-k8s-coredns--674b8bbfcf--j54qb-eth0 coredns-674b8bbfcf- kube-system 75cf24a3-e934-4460-adde-7f2e597be69d 913 0 2026-03-14 00:14:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-2 coredns-674b8bbfcf-j54qb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5cda50cbe9a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" Namespace="kube-system" Pod="coredns-674b8bbfcf-j54qb" WorkloadEndpoint="ip--172--31--28--2-k8s-coredns--674b8bbfcf--j54qb-" Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:39.111 [INFO][4615] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" Namespace="kube-system" Pod="coredns-674b8bbfcf-j54qb" WorkloadEndpoint="ip--172--31--28--2-k8s-coredns--674b8bbfcf--j54qb-eth0" Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:39.690 [INFO][4790] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" HandleID="k8s-pod-network.469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" Workload="ip--172--31--28--2-k8s-coredns--674b8bbfcf--j54qb-eth0" Mar 14 00:15:40.212340 
containerd[2127]: 2026-03-14 00:15:39.729 [INFO][4790] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" HandleID="k8s-pod-network.469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" Workload="ip--172--31--28--2-k8s-coredns--674b8bbfcf--j54qb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005ee360), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-2", "pod":"coredns-674b8bbfcf-j54qb", "timestamp":"2026-03-14 00:15:39.690680293 +0000 UTC"}, Hostname:"ip-172-31-28-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002d5600)} Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:39.729 [INFO][4790] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:39.933 [INFO][4790] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:39.933 [INFO][4790] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-2' Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:39.944 [INFO][4790] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" host="ip-172-31-28-2" Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:39.965 [INFO][4790] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-2" Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:40.001 [INFO][4790] ipam/ipam.go 526: Trying affinity for 192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:40.016 [INFO][4790] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:40.038 [INFO][4790] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:40.043 [INFO][4790] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" host="ip-172-31-28-2" Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:40.058 [INFO][4790] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58 Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:40.077 [INFO][4790] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" host="ip-172-31-28-2" Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:40.101 [INFO][4790] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.50.195/26] block=192.168.50.192/26 
handle="k8s-pod-network.469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" host="ip-172-31-28-2" Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:40.101 [INFO][4790] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.195/26] handle="k8s-pod-network.469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" host="ip-172-31-28-2" Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:40.102 [INFO][4790] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:40.212340 containerd[2127]: 2026-03-14 00:15:40.102 [INFO][4790] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.195/26] IPv6=[] ContainerID="469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" HandleID="k8s-pod-network.469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" Workload="ip--172--31--28--2-k8s-coredns--674b8bbfcf--j54qb-eth0" Mar 14 00:15:40.215634 containerd[2127]: 2026-03-14 00:15:40.127 [INFO][4615] cni-plugin/k8s.go 418: Populated endpoint ContainerID="469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" Namespace="kube-system" Pod="coredns-674b8bbfcf-j54qb" WorkloadEndpoint="ip--172--31--28--2-k8s-coredns--674b8bbfcf--j54qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-coredns--674b8bbfcf--j54qb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"75cf24a3-e934-4460-adde-7f2e597be69d", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"", Pod:"coredns-674b8bbfcf-j54qb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cda50cbe9a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.215634 containerd[2127]: 2026-03-14 00:15:40.128 [INFO][4615] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.195/32] ContainerID="469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" Namespace="kube-system" Pod="coredns-674b8bbfcf-j54qb" WorkloadEndpoint="ip--172--31--28--2-k8s-coredns--674b8bbfcf--j54qb-eth0" Mar 14 00:15:40.215634 containerd[2127]: 2026-03-14 00:15:40.128 [INFO][4615] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5cda50cbe9a ContainerID="469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" Namespace="kube-system" Pod="coredns-674b8bbfcf-j54qb" WorkloadEndpoint="ip--172--31--28--2-k8s-coredns--674b8bbfcf--j54qb-eth0" Mar 14 00:15:40.215634 containerd[2127]: 2026-03-14 00:15:40.172 [INFO][4615] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-j54qb" WorkloadEndpoint="ip--172--31--28--2-k8s-coredns--674b8bbfcf--j54qb-eth0" Mar 14 00:15:40.215634 containerd[2127]: 2026-03-14 00:15:40.177 [INFO][4615] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" Namespace="kube-system" Pod="coredns-674b8bbfcf-j54qb" WorkloadEndpoint="ip--172--31--28--2-k8s-coredns--674b8bbfcf--j54qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-coredns--674b8bbfcf--j54qb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"75cf24a3-e934-4460-adde-7f2e597be69d", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58", Pod:"coredns-674b8bbfcf-j54qb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5cda50cbe9a", MAC:"fe:56:39:0c:86:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.215634 containerd[2127]: 2026-03-14 00:15:40.203 [INFO][4615] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58" Namespace="kube-system" Pod="coredns-674b8bbfcf-j54qb" WorkloadEndpoint="ip--172--31--28--2-k8s-coredns--674b8bbfcf--j54qb-eth0" Mar 14 00:15:40.232997 containerd[2127]: time="2026-03-14T00:15:40.231755087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:40.232997 containerd[2127]: time="2026-03-14T00:15:40.231916883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:40.232997 containerd[2127]: time="2026-03-14T00:15:40.231947735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:40.238363 containerd[2127]: time="2026-03-14T00:15:40.233733815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:40.279989 systemd-networkd[1684]: cali74a271ef3fd: Link UP Mar 14 00:15:40.286452 systemd-networkd[1684]: cali74a271ef3fd: Gained carrier Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:38.951 [ERROR][4676] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:39.180 [INFO][4676] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0 whisker-75cf64fb64- calico-system 6c90c97c-9e7f-46d8-bdd7-1f5e381c5811 934 0 2026-03-14 00:15:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:75cf64fb64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-2 whisker-75cf64fb64-gfxc8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali74a271ef3fd [] [] }} ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Namespace="calico-system" Pod="whisker-75cf64fb64-gfxc8" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-" Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:39.180 [INFO][4676] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Namespace="calico-system" Pod="whisker-75cf64fb64-gfxc8" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:39.728 [INFO][4801] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" 
HandleID="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Workload="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:39.765 [INFO][4801] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" HandleID="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Workload="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c8e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-2", "pod":"whisker-75cf64fb64-gfxc8", "timestamp":"2026-03-14 00:15:39.728373841 +0000 UTC"}, Hostname:"ip-172-31-28-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000184420)} Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:39.765 [INFO][4801] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:40.103 [INFO][4801] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:40.103 [INFO][4801] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-2' Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:40.111 [INFO][4801] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" host="ip-172-31-28-2" Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:40.124 [INFO][4801] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-2" Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:40.171 [INFO][4801] ipam/ipam.go 526: Trying affinity for 192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:40.176 [INFO][4801] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:40.192 [INFO][4801] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:40.198 [INFO][4801] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" host="ip-172-31-28-2" Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:40.207 [INFO][4801] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:40.230 [INFO][4801] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" host="ip-172-31-28-2" Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:40.248 [INFO][4801] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.50.196/26] block=192.168.50.192/26 
handle="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" host="ip-172-31-28-2" Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:40.251 [INFO][4801] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.196/26] handle="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" host="ip-172-31-28-2" Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:40.251 [INFO][4801] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:40.357037 containerd[2127]: 2026-03-14 00:15:40.251 [INFO][4801] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.196/26] IPv6=[] ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" HandleID="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Workload="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" Mar 14 00:15:40.358235 containerd[2127]: 2026-03-14 00:15:40.269 [INFO][4676] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Namespace="calico-system" Pod="whisker-75cf64fb64-gfxc8" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0", GenerateName:"whisker-75cf64fb64-", Namespace:"calico-system", SelfLink:"", UID:"6c90c97c-9e7f-46d8-bdd7-1f5e381c5811", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75cf64fb64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"", Pod:"whisker-75cf64fb64-gfxc8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali74a271ef3fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.358235 containerd[2127]: 2026-03-14 00:15:40.269 [INFO][4676] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.196/32] ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Namespace="calico-system" Pod="whisker-75cf64fb64-gfxc8" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" Mar 14 00:15:40.358235 containerd[2127]: 2026-03-14 00:15:40.270 [INFO][4676] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74a271ef3fd ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Namespace="calico-system" Pod="whisker-75cf64fb64-gfxc8" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" Mar 14 00:15:40.358235 containerd[2127]: 2026-03-14 00:15:40.298 [INFO][4676] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Namespace="calico-system" Pod="whisker-75cf64fb64-gfxc8" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" Mar 14 00:15:40.358235 containerd[2127]: 2026-03-14 00:15:40.306 [INFO][4676] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Namespace="calico-system" 
Pod="whisker-75cf64fb64-gfxc8" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0", GenerateName:"whisker-75cf64fb64-", Namespace:"calico-system", SelfLink:"", UID:"6c90c97c-9e7f-46d8-bdd7-1f5e381c5811", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75cf64fb64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c", Pod:"whisker-75cf64fb64-gfxc8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali74a271ef3fd", MAC:"d2:06:b9:31:8d:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.358235 containerd[2127]: 2026-03-14 00:15:40.342 [INFO][4676] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Namespace="calico-system" Pod="whisker-75cf64fb64-gfxc8" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" Mar 14 00:15:40.477597 systemd-networkd[1684]: cali9e993974351: 
Link UP Mar 14 00:15:40.480069 containerd[2127]: time="2026-03-14T00:15:40.478669285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:40.480069 containerd[2127]: time="2026-03-14T00:15:40.478772593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:40.480069 containerd[2127]: time="2026-03-14T00:15:40.478816261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:40.480069 containerd[2127]: time="2026-03-14T00:15:40.479037493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:40.495566 systemd-networkd[1684]: cali9e993974351: Gained carrier Mar 14 00:15:40.522380 containerd[2127]: time="2026-03-14T00:15:40.518570005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:40.522380 containerd[2127]: time="2026-03-14T00:15:40.518672101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:40.522380 containerd[2127]: time="2026-03-14T00:15:40.518735293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:40.522380 containerd[2127]: time="2026-03-14T00:15:40.518917213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:38.867 [ERROR][4687] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:39.058 [INFO][4687] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--2-k8s-calico--apiserver--7bfc5795cd--9tsrz-eth0 calico-apiserver-7bfc5795cd- calico-system d4ea2f60-bae6-4228-ba61-99aef0e1061d 922 0 2026-03-14 00:15:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bfc5795cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-2 calico-apiserver-7bfc5795cd-9tsrz eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9e993974351 [] [] }} ContainerID="9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" Namespace="calico-system" Pod="calico-apiserver-7bfc5795cd-9tsrz" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--7bfc5795cd--9tsrz-" Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:39.063 [INFO][4687] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" Namespace="calico-system" Pod="calico-apiserver-7bfc5795cd-9tsrz" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--7bfc5795cd--9tsrz-eth0" Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:39.750 [INFO][4772] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" 
HandleID="k8s-pod-network.9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" Workload="ip--172--31--28--2-k8s-calico--apiserver--7bfc5795cd--9tsrz-eth0" Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:39.788 [INFO][4772] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" HandleID="k8s-pod-network.9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" Workload="ip--172--31--28--2-k8s-calico--apiserver--7bfc5795cd--9tsrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c540), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-2", "pod":"calico-apiserver-7bfc5795cd-9tsrz", "timestamp":"2026-03-14 00:15:39.750494641 +0000 UTC"}, Hostname:"ip-172-31-28-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001fa6e0)} Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:39.788 [INFO][4772] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:40.254 [INFO][4772] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:40.254 [INFO][4772] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-2' Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:40.261 [INFO][4772] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" host="ip-172-31-28-2" Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:40.276 [INFO][4772] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-2" Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:40.304 [INFO][4772] ipam/ipam.go 526: Trying affinity for 192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:40.325 [INFO][4772] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:40.339 [INFO][4772] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:40.339 [INFO][4772] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" host="ip-172-31-28-2" Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:40.364 [INFO][4772] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:40.380 [INFO][4772] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" host="ip-172-31-28-2" Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:40.404 [INFO][4772] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.50.197/26] block=192.168.50.192/26 
handle="k8s-pod-network.9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" host="ip-172-31-28-2" Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:40.404 [INFO][4772] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.197/26] handle="k8s-pod-network.9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" host="ip-172-31-28-2" Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:40.405 [INFO][4772] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:40.572322 containerd[2127]: 2026-03-14 00:15:40.405 [INFO][4772] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.197/26] IPv6=[] ContainerID="9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" HandleID="k8s-pod-network.9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" Workload="ip--172--31--28--2-k8s-calico--apiserver--7bfc5795cd--9tsrz-eth0" Mar 14 00:15:40.574286 containerd[2127]: 2026-03-14 00:15:40.431 [INFO][4687] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" Namespace="calico-system" Pod="calico-apiserver-7bfc5795cd-9tsrz" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--7bfc5795cd--9tsrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-calico--apiserver--7bfc5795cd--9tsrz-eth0", GenerateName:"calico-apiserver-7bfc5795cd-", Namespace:"calico-system", SelfLink:"", UID:"d4ea2f60-bae6-4228-ba61-99aef0e1061d", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bfc5795cd", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"", Pod:"calico-apiserver-7bfc5795cd-9tsrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9e993974351", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.574286 containerd[2127]: 2026-03-14 00:15:40.431 [INFO][4687] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.197/32] ContainerID="9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" Namespace="calico-system" Pod="calico-apiserver-7bfc5795cd-9tsrz" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--7bfc5795cd--9tsrz-eth0" Mar 14 00:15:40.574286 containerd[2127]: 2026-03-14 00:15:40.431 [INFO][4687] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e993974351 ContainerID="9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" Namespace="calico-system" Pod="calico-apiserver-7bfc5795cd-9tsrz" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--7bfc5795cd--9tsrz-eth0" Mar 14 00:15:40.574286 containerd[2127]: 2026-03-14 00:15:40.500 [INFO][4687] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" Namespace="calico-system" Pod="calico-apiserver-7bfc5795cd-9tsrz" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--7bfc5795cd--9tsrz-eth0" Mar 14 00:15:40.574286 containerd[2127]: 2026-03-14 00:15:40.506 [INFO][4687] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" Namespace="calico-system" Pod="calico-apiserver-7bfc5795cd-9tsrz" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--7bfc5795cd--9tsrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-calico--apiserver--7bfc5795cd--9tsrz-eth0", GenerateName:"calico-apiserver-7bfc5795cd-", Namespace:"calico-system", SelfLink:"", UID:"d4ea2f60-bae6-4228-ba61-99aef0e1061d", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bfc5795cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d", Pod:"calico-apiserver-7bfc5795cd-9tsrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9e993974351", MAC:"72:f4:77:0e:3c:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.574286 containerd[2127]: 2026-03-14 00:15:40.538 [INFO][4687] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d" Namespace="calico-system" Pod="calico-apiserver-7bfc5795cd-9tsrz" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--7bfc5795cd--9tsrz-eth0" Mar 14 00:15:40.593801 systemd-networkd[1684]: caliae6bccba74f: Link UP Mar 14 00:15:40.596553 systemd-networkd[1684]: caliae6bccba74f: Gained carrier Mar 14 00:15:40.645767 containerd[2127]: time="2026-03-14T00:15:40.645165121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-22gqf,Uid:850b6616-1e79-423c-baa4-b84acf3a98ba,Namespace:calico-system,Attempt:0,} returns sandbox id \"486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28\"" Mar 14 00:15:40.662550 containerd[2127]: time="2026-03-14T00:15:40.662041945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:38.782 [ERROR][4655] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:39.053 [INFO][4655] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-eth0 calico-apiserver-6579f7776- calico-system 5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251 920 0 2026-03-14 00:15:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6579f7776 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-2 calico-apiserver-6579f7776-dhvpk eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] caliae6bccba74f [] [] }} 
ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" Namespace="calico-system" Pod="calico-apiserver-6579f7776-dhvpk" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-" Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:39.053 [INFO][4655] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" Namespace="calico-system" Pod="calico-apiserver-6579f7776-dhvpk" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-eth0" Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:39.732 [INFO][4770] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" HandleID="k8s-pod-network.ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-eth0" Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:39.791 [INFO][4770] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" HandleID="k8s-pod-network.ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000284840), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-2", "pod":"calico-apiserver-6579f7776-dhvpk", "timestamp":"2026-03-14 00:15:39.732187417 +0000 UTC"}, Hostname:"ip-172-31-28-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400010adc0)} Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:39.791 [INFO][4770] ipam/ipam_plugin.go 438: About to acquire 
host-wide IPAM lock. Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:40.407 [INFO][4770] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:40.407 [INFO][4770] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-2' Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:40.413 [INFO][4770] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" host="ip-172-31-28-2" Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:40.449 [INFO][4770] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-2" Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:40.471 [INFO][4770] ipam/ipam.go 526: Trying affinity for 192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:40.477 [INFO][4770] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:40.489 [INFO][4770] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:40.491 [INFO][4770] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" host="ip-172-31-28-2" Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:40.498 [INFO][4770] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65 Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:40.513 [INFO][4770] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" host="ip-172-31-28-2" Mar 14 00:15:40.665951 
containerd[2127]: 2026-03-14 00:15:40.539 [INFO][4770] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.50.198/26] block=192.168.50.192/26 handle="k8s-pod-network.ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" host="ip-172-31-28-2" Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:40.539 [INFO][4770] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.198/26] handle="k8s-pod-network.ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" host="ip-172-31-28-2" Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:40.539 [INFO][4770] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:40.665951 containerd[2127]: 2026-03-14 00:15:40.539 [INFO][4770] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.198/26] IPv6=[] ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" HandleID="k8s-pod-network.ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-eth0" Mar 14 00:15:40.667095 containerd[2127]: 2026-03-14 00:15:40.576 [INFO][4655] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" Namespace="calico-system" Pod="calico-apiserver-6579f7776-dhvpk" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-eth0", GenerateName:"calico-apiserver-6579f7776-", Namespace:"calico-system", SelfLink:"", UID:"5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6579f7776", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"", Pod:"calico-apiserver-6579f7776-dhvpk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliae6bccba74f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.667095 containerd[2127]: 2026-03-14 00:15:40.582 [INFO][4655] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.198/32] ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" Namespace="calico-system" Pod="calico-apiserver-6579f7776-dhvpk" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-eth0" Mar 14 00:15:40.667095 containerd[2127]: 2026-03-14 00:15:40.582 [INFO][4655] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae6bccba74f ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" Namespace="calico-system" Pod="calico-apiserver-6579f7776-dhvpk" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-eth0" Mar 14 00:15:40.667095 containerd[2127]: 2026-03-14 00:15:40.592 [INFO][4655] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" Namespace="calico-system" Pod="calico-apiserver-6579f7776-dhvpk" 
WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-eth0" Mar 14 00:15:40.667095 containerd[2127]: 2026-03-14 00:15:40.606 [INFO][4655] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" Namespace="calico-system" Pod="calico-apiserver-6579f7776-dhvpk" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-eth0", GenerateName:"calico-apiserver-6579f7776-", Namespace:"calico-system", SelfLink:"", UID:"5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6579f7776", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65", Pod:"calico-apiserver-6579f7776-dhvpk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliae6bccba74f", MAC:"36:15:2a:8d:87:74", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.667095 containerd[2127]: 2026-03-14 00:15:40.645 [INFO][4655] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" Namespace="calico-system" Pod="calico-apiserver-6579f7776-dhvpk" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-eth0" Mar 14 00:15:40.704483 containerd[2127]: time="2026-03-14T00:15:40.704121482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6579f7776-4xv6h,Uid:a25a53f1-9260-4c87-baf3-12d319ebd2af,Namespace:calico-system,Attempt:0,} returns sandbox id \"58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7\"" Mar 14 00:15:40.755479 systemd-networkd[1684]: calid09b783f110: Link UP Mar 14 00:15:40.770601 systemd-networkd[1684]: calid09b783f110: Gained carrier Mar 14 00:15:40.894076 containerd[2127]: 2026-03-14 00:15:39.520 [INFO][4764] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f" Mar 14 00:15:40.894076 containerd[2127]: 2026-03-14 00:15:39.520 [INFO][4764] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f" iface="eth0" netns="/var/run/netns/cni-80388506-0405-bf2c-1eb3-ea3d129bc5f7" Mar 14 00:15:40.894076 containerd[2127]: 2026-03-14 00:15:39.521 [INFO][4764] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f" iface="eth0" netns="/var/run/netns/cni-80388506-0405-bf2c-1eb3-ea3d129bc5f7" Mar 14 00:15:40.894076 containerd[2127]: 2026-03-14 00:15:39.538 [INFO][4764] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f" iface="eth0" netns="/var/run/netns/cni-80388506-0405-bf2c-1eb3-ea3d129bc5f7" Mar 14 00:15:40.894076 containerd[2127]: 2026-03-14 00:15:39.538 [INFO][4764] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f" Mar 14 00:15:40.894076 containerd[2127]: 2026-03-14 00:15:39.538 [INFO][4764] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f" Mar 14 00:15:40.894076 containerd[2127]: 2026-03-14 00:15:39.849 [INFO][4828] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f" HandleID="k8s-pod-network.580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f" Workload="ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-eth0" Mar 14 00:15:40.894076 containerd[2127]: 2026-03-14 00:15:39.849 [INFO][4828] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:40.894076 containerd[2127]: 2026-03-14 00:15:40.698 [INFO][4828] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:40.894076 containerd[2127]: 2026-03-14 00:15:40.787 [WARNING][4828] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f" HandleID="k8s-pod-network.580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f" Workload="ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-eth0" Mar 14 00:15:40.894076 containerd[2127]: 2026-03-14 00:15:40.787 [INFO][4828] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f" HandleID="k8s-pod-network.580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f" Workload="ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-eth0" Mar 14 00:15:40.894076 containerd[2127]: 2026-03-14 00:15:40.793 [INFO][4828] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:40.894076 containerd[2127]: 2026-03-14 00:15:40.846 [INFO][4764] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f" Mar 14 00:15:40.909887 systemd[1]: run-containerd-runc-k8s.io-486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28-runc.cIjcoT.mount: Deactivated successfully. 
Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:39.163 [ERROR][4696] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:39.370 [INFO][4696] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--2-k8s-coredns--674b8bbfcf--4j2tf-eth0 coredns-674b8bbfcf- kube-system e101a10e-f453-421d-a606-7f3752c5f727 917 0 2026-03-14 00:14:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-2 coredns-674b8bbfcf-4j2tf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid09b783f110 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4j2tf" WorkloadEndpoint="ip--172--31--28--2-k8s-coredns--674b8bbfcf--4j2tf-" Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:39.370 [INFO][4696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4j2tf" WorkloadEndpoint="ip--172--31--28--2-k8s-coredns--674b8bbfcf--4j2tf-eth0" Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:39.771 [INFO][4814] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" HandleID="k8s-pod-network.ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" Workload="ip--172--31--28--2-k8s-coredns--674b8bbfcf--4j2tf-eth0" Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:39.802 [INFO][4814] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" HandleID="k8s-pod-network.ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" Workload="ip--172--31--28--2-k8s-coredns--674b8bbfcf--4j2tf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003d0870), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-2", "pod":"coredns-674b8bbfcf-4j2tf", "timestamp":"2026-03-14 00:15:39.771879565 +0000 UTC"}, Hostname:"ip-172-31-28-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000404580)} Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:39.802 [INFO][4814] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:40.541 [INFO][4814] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:40.541 [INFO][4814] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-2' Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:40.553 [INFO][4814] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" host="ip-172-31-28-2" Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:40.565 [INFO][4814] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-2" Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:40.581 [INFO][4814] ipam/ipam.go 526: Trying affinity for 192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:40.595 [INFO][4814] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:40.602 [INFO][4814] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:40.602 [INFO][4814] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" host="ip-172-31-28-2" Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:40.626 [INFO][4814] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:40.646 [INFO][4814] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" host="ip-172-31-28-2" Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:40.679 [INFO][4814] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.50.199/26] block=192.168.50.192/26 
handle="k8s-pod-network.ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" host="ip-172-31-28-2" Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:40.679 [INFO][4814] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.199/26] handle="k8s-pod-network.ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" host="ip-172-31-28-2" Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:40.679 [INFO][4814] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:40.931297 containerd[2127]: 2026-03-14 00:15:40.679 [INFO][4814] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.199/26] IPv6=[] ContainerID="ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" HandleID="k8s-pod-network.ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" Workload="ip--172--31--28--2-k8s-coredns--674b8bbfcf--4j2tf-eth0" Mar 14 00:15:40.932486 containerd[2127]: 2026-03-14 00:15:40.714 [INFO][4696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4j2tf" WorkloadEndpoint="ip--172--31--28--2-k8s-coredns--674b8bbfcf--4j2tf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-coredns--674b8bbfcf--4j2tf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e101a10e-f453-421d-a606-7f3752c5f727", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"", Pod:"coredns-674b8bbfcf-4j2tf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid09b783f110", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.932486 containerd[2127]: 2026-03-14 00:15:40.714 [INFO][4696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.199/32] ContainerID="ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4j2tf" WorkloadEndpoint="ip--172--31--28--2-k8s-coredns--674b8bbfcf--4j2tf-eth0" Mar 14 00:15:40.932486 containerd[2127]: 2026-03-14 00:15:40.714 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid09b783f110 ContainerID="ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4j2tf" WorkloadEndpoint="ip--172--31--28--2-k8s-coredns--674b8bbfcf--4j2tf-eth0" Mar 14 00:15:40.932486 containerd[2127]: 2026-03-14 00:15:40.766 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-4j2tf" WorkloadEndpoint="ip--172--31--28--2-k8s-coredns--674b8bbfcf--4j2tf-eth0" Mar 14 00:15:40.932486 containerd[2127]: 2026-03-14 00:15:40.788 [INFO][4696] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4j2tf" WorkloadEndpoint="ip--172--31--28--2-k8s-coredns--674b8bbfcf--4j2tf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-coredns--674b8bbfcf--4j2tf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e101a10e-f453-421d-a606-7f3752c5f727", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c", Pod:"coredns-674b8bbfcf-4j2tf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid09b783f110", MAC:"f6:fa:23:f5:9d:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:40.932486 containerd[2127]: 2026-03-14 00:15:40.834 [INFO][4696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4j2tf" WorkloadEndpoint="ip--172--31--28--2-k8s-coredns--674b8bbfcf--4j2tf-eth0" Mar 14 00:15:40.932677 systemd[1]: run-netns-cni\x2d80388506\x2d0405\x2dbf2c\x2d1eb3\x2dea3d129bc5f7.mount: Deactivated successfully. Mar 14 00:15:40.954025 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f-shm.mount: Deactivated successfully. 
Mar 14 00:15:40.977926 containerd[2127]: time="2026-03-14T00:15:40.971876799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8444cc7f95-vdkpl,Uid:b8e6ece2-771a-4b1a-940c-5db5a7d45aa3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:40.978322 kubelet[3696]: E0314 00:15:40.973130 3696 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:40.978322 kubelet[3696]: E0314 00:15:40.973227 3696 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8444cc7f95-vdkpl" Mar 14 00:15:40.978322 kubelet[3696]: E0314 00:15:40.977304 3696 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-8444cc7f95-vdkpl" Mar 14 00:15:40.978985 kubelet[3696]: E0314 00:15:40.977445 3696 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8444cc7f95-vdkpl_calico-system(b8e6ece2-771a-4b1a-940c-5db5a7d45aa3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8444cc7f95-vdkpl_calico-system(b8e6ece2-771a-4b1a-940c-5db5a7d45aa3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"580ef4adff09eff641eaba1d0b27e6caf8df0c556d8af1e9dff9565414e4aa6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8444cc7f95-vdkpl" podUID="b8e6ece2-771a-4b1a-940c-5db5a7d45aa3" Mar 14 00:15:41.127679 systemd-networkd[1684]: cali9476b6a5e47: Link UP Mar 14 00:15:41.135611 systemd-networkd[1684]: cali9476b6a5e47: Gained carrier Mar 14 00:15:41.138842 systemd-networkd[1684]: calie2060b10e2a: Gained IPv6LL Mar 14 00:15:41.194259 containerd[2127]: time="2026-03-14T00:15:41.191319288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:41.194259 containerd[2127]: time="2026-03-14T00:15:41.191452932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:41.194259 containerd[2127]: time="2026-03-14T00:15:41.191491452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:41.194259 containerd[2127]: time="2026-03-14T00:15:41.193042320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:41.213305 containerd[2127]: time="2026-03-14T00:15:41.211634304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:41.213305 containerd[2127]: time="2026-03-14T00:15:41.211766820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:41.213305 containerd[2127]: time="2026-03-14T00:15:41.211803780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:41.213305 containerd[2127]: time="2026-03-14T00:15:41.212001948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:39.215 [ERROR][4710] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:39.350 [INFO][4710] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--2-k8s-csi--node--driver--ckhmv-eth0 csi-node-driver- calico-system 563c486e-c3bd-4f54-8571-23d2db9006c2 781 0 2026-03-14 00:15:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-2 csi-node-driver-ckhmv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9476b6a5e47 [] [] }} 
ContainerID="b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" Namespace="calico-system" Pod="csi-node-driver-ckhmv" WorkloadEndpoint="ip--172--31--28--2-k8s-csi--node--driver--ckhmv-" Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:39.351 [INFO][4710] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" Namespace="calico-system" Pod="csi-node-driver-ckhmv" WorkloadEndpoint="ip--172--31--28--2-k8s-csi--node--driver--ckhmv-eth0" Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:39.829 [INFO][4808] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" HandleID="k8s-pod-network.b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" Workload="ip--172--31--28--2-k8s-csi--node--driver--ckhmv-eth0" Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:39.866 [INFO][4808] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" HandleID="k8s-pod-network.b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" Workload="ip--172--31--28--2-k8s-csi--node--driver--ckhmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000386090), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-2", "pod":"csi-node-driver-ckhmv", "timestamp":"2026-03-14 00:15:39.829622365 +0000 UTC"}, Hostname:"ip-172-31-28-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000461760)} Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:39.867 [INFO][4808] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:40.793 [INFO][4808] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:40.794 [INFO][4808] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-2' Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:40.818 [INFO][4808] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" host="ip-172-31-28-2" Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:40.841 [INFO][4808] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-2" Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:40.897 [INFO][4808] ipam/ipam.go 526: Trying affinity for 192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:40.922 [INFO][4808] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:40.934 [INFO][4808] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:40.939 [INFO][4808] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" host="ip-172-31-28-2" Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:40.950 [INFO][4808] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:40.989 [INFO][4808] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" host="ip-172-31-28-2" Mar 14 00:15:41.267477 containerd[2127]: 
2026-03-14 00:15:41.023 [INFO][4808] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.50.200/26] block=192.168.50.192/26 handle="k8s-pod-network.b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" host="ip-172-31-28-2" Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:41.038 [INFO][4808] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.200/26] handle="k8s-pod-network.b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" host="ip-172-31-28-2" Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:41.039 [INFO][4808] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:41.267477 containerd[2127]: 2026-03-14 00:15:41.039 [INFO][4808] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.200/26] IPv6=[] ContainerID="b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" HandleID="k8s-pod-network.b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" Workload="ip--172--31--28--2-k8s-csi--node--driver--ckhmv-eth0" Mar 14 00:15:41.269674 containerd[2127]: 2026-03-14 00:15:41.069 [INFO][4710] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" Namespace="calico-system" Pod="csi-node-driver-ckhmv" WorkloadEndpoint="ip--172--31--28--2-k8s-csi--node--driver--ckhmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-csi--node--driver--ckhmv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"563c486e-c3bd-4f54-8571-23d2db9006c2", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"", Pod:"csi-node-driver-ckhmv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9476b6a5e47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:41.269674 containerd[2127]: 2026-03-14 00:15:41.074 [INFO][4710] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.200/32] ContainerID="b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" Namespace="calico-system" Pod="csi-node-driver-ckhmv" WorkloadEndpoint="ip--172--31--28--2-k8s-csi--node--driver--ckhmv-eth0" Mar 14 00:15:41.269674 containerd[2127]: 2026-03-14 00:15:41.074 [INFO][4710] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9476b6a5e47 ContainerID="b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" Namespace="calico-system" Pod="csi-node-driver-ckhmv" WorkloadEndpoint="ip--172--31--28--2-k8s-csi--node--driver--ckhmv-eth0" Mar 14 00:15:41.269674 containerd[2127]: 2026-03-14 00:15:41.149 [INFO][4710] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" Namespace="calico-system" Pod="csi-node-driver-ckhmv" WorkloadEndpoint="ip--172--31--28--2-k8s-csi--node--driver--ckhmv-eth0" Mar 14 00:15:41.269674 containerd[2127]: 2026-03-14 00:15:41.158 
[INFO][4710] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" Namespace="calico-system" Pod="csi-node-driver-ckhmv" WorkloadEndpoint="ip--172--31--28--2-k8s-csi--node--driver--ckhmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-csi--node--driver--ckhmv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"563c486e-c3bd-4f54-8571-23d2db9006c2", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c", Pod:"csi-node-driver-ckhmv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9476b6a5e47", MAC:"5a:81:76:81:51:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:41.269674 containerd[2127]: 2026-03-14 00:15:41.204 [INFO][4710] cni-plugin/k8s.go 532: Wrote updated 
endpoint to datastore ContainerID="b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c" Namespace="calico-system" Pod="csi-node-driver-ckhmv" WorkloadEndpoint="ip--172--31--28--2-k8s-csi--node--driver--ckhmv-eth0" Mar 14 00:15:41.331996 systemd-networkd[1684]: cali74a271ef3fd: Gained IPv6LL Mar 14 00:15:41.333371 systemd-networkd[1684]: cali5cda50cbe9a: Gained IPv6LL Mar 14 00:15:41.353606 containerd[2127]: time="2026-03-14T00:15:41.350051617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-j54qb,Uid:75cf24a3-e934-4460-adde-7f2e597be69d,Namespace:kube-system,Attempt:0,} returns sandbox id \"469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58\"" Mar 14 00:15:41.415155 containerd[2127]: time="2026-03-14T00:15:41.412076665Z" level=info msg="CreateContainer within sandbox \"469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:15:41.513898 containerd[2127]: time="2026-03-14T00:15:41.511829174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:41.513898 containerd[2127]: time="2026-03-14T00:15:41.511949642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:41.513898 containerd[2127]: time="2026-03-14T00:15:41.511985810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:41.513898 containerd[2127]: time="2026-03-14T00:15:41.512175158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:41.527213 containerd[2127]: time="2026-03-14T00:15:41.526919726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75cf64fb64-gfxc8,Uid:6c90c97c-9e7f-46d8-bdd7-1f5e381c5811,Namespace:calico-system,Attempt:0,} returns sandbox id \"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c\"" Mar 14 00:15:41.650655 systemd-networkd[1684]: cali1bf52bbf4df: Gained IPv6LL Mar 14 00:15:41.663830 containerd[2127]: time="2026-03-14T00:15:41.655734782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:41.663830 containerd[2127]: time="2026-03-14T00:15:41.655971578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:41.663830 containerd[2127]: time="2026-03-14T00:15:41.657132110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:41.663830 containerd[2127]: time="2026-03-14T00:15:41.659431598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:41.685056 containerd[2127]: time="2026-03-14T00:15:41.684389618Z" level=info msg="CreateContainer within sandbox \"469fa3e58af4f29fd0d2d57cc3aa32b0e34efb87b27918759a0703be1815ed58\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f6beba77cde7f9d05ba4212359a0d5098fa0a72e8157ce33045dffb34b6d9e5e\"" Mar 14 00:15:41.694558 containerd[2127]: time="2026-03-14T00:15:41.693982419Z" level=info msg="StartContainer for \"f6beba77cde7f9d05ba4212359a0d5098fa0a72e8157ce33045dffb34b6d9e5e\"" Mar 14 00:15:41.824400 containerd[2127]: time="2026-03-14T00:15:41.824225463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6579f7776-dhvpk,Uid:5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65\"" Mar 14 00:15:41.950688 containerd[2127]: time="2026-03-14T00:15:41.948541216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8444cc7f95-vdkpl,Uid:b8e6ece2-771a-4b1a-940c-5db5a7d45aa3,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:41.972319 systemd-networkd[1684]: cali9e993974351: Gained IPv6LL Mar 14 00:15:41.986722 containerd[2127]: time="2026-03-14T00:15:41.986640628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4j2tf,Uid:e101a10e-f453-421d-a606-7f3752c5f727,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c\"" Mar 14 00:15:42.012005 containerd[2127]: time="2026-03-14T00:15:42.010319916Z" level=info msg="CreateContainer within sandbox \"ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:15:42.100705 systemd-networkd[1684]: calid09b783f110: Gained IPv6LL Mar 14 00:15:42.165013 containerd[2127]: time="2026-03-14T00:15:42.164695189Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bfc5795cd-9tsrz,Uid:d4ea2f60-bae6-4228-ba61-99aef0e1061d,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d\"" Mar 14 00:15:42.273616 containerd[2127]: time="2026-03-14T00:15:42.273545509Z" level=info msg="CreateContainer within sandbox \"ba1919d1c2cfea27a2bf857c9ef8e6d4d4759715813f009ee083d918861d669c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"03a0db4a1db493755325ac0f3987e03fbef176a5ad54dfa31bf01a28ff9823c3\"" Mar 14 00:15:42.278568 containerd[2127]: time="2026-03-14T00:15:42.276545389Z" level=info msg="StartContainer for \"03a0db4a1db493755325ac0f3987e03fbef176a5ad54dfa31bf01a28ff9823c3\"" Mar 14 00:15:42.339569 containerd[2127]: time="2026-03-14T00:15:42.339299642Z" level=info msg="StartContainer for \"f6beba77cde7f9d05ba4212359a0d5098fa0a72e8157ce33045dffb34b6d9e5e\" returns successfully" Mar 14 00:15:42.457536 containerd[2127]: time="2026-03-14T00:15:42.456939902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ckhmv,Uid:563c486e-c3bd-4f54-8571-23d2db9006c2,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c\"" Mar 14 00:15:42.484405 systemd-networkd[1684]: caliae6bccba74f: Gained IPv6LL Mar 14 00:15:42.598823 kernel: calico-node[5258]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 14 00:15:42.738611 systemd-networkd[1684]: cali9476b6a5e47: Gained IPv6LL Mar 14 00:15:42.809773 containerd[2127]: time="2026-03-14T00:15:42.805381912Z" level=info msg="StartContainer for \"03a0db4a1db493755325ac0f3987e03fbef176a5ad54dfa31bf01a28ff9823c3\" returns successfully" Mar 14 00:15:42.928216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount229688116.mount: Deactivated successfully. 
Mar 14 00:15:43.140237 kubelet[3696]: I0314 00:15:43.139714 3696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-j54qb" podStartSLOduration=46.139685894 podStartE2EDuration="46.139685894s" podCreationTimestamp="2026-03-14 00:14:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:43.138069998 +0000 UTC m=+51.999338900" watchObservedRunningTime="2026-03-14 00:15:43.139685894 +0000 UTC m=+52.000954796" Mar 14 00:15:43.264881 systemd-networkd[1684]: calidc105cc6006: Link UP Mar 14 00:15:43.266919 systemd-networkd[1684]: calidc105cc6006: Gained carrier Mar 14 00:15:43.304519 kubelet[3696]: I0314 00:15:43.303191 3696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4j2tf" podStartSLOduration=46.303167703 podStartE2EDuration="46.303167703s" podCreationTimestamp="2026-03-14 00:14:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:43.190940066 +0000 UTC m=+52.052208956" watchObservedRunningTime="2026-03-14 00:15:43.303167703 +0000 UTC m=+52.164436569" Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:42.591 [INFO][5391] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-eth0 calico-kube-controllers-8444cc7f95- calico-system b8e6ece2-771a-4b1a-940c-5db5a7d45aa3 953 0 2026-03-14 00:15:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8444cc7f95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-2 calico-kube-controllers-8444cc7f95-vdkpl eth0 
calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidc105cc6006 [] [] }} ContainerID="ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" Namespace="calico-system" Pod="calico-kube-controllers-8444cc7f95-vdkpl" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-" Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:42.596 [INFO][5391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" Namespace="calico-system" Pod="calico-kube-controllers-8444cc7f95-vdkpl" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-eth0" Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:42.805 [INFO][5470] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" HandleID="k8s-pod-network.ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" Workload="ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-eth0" Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:43.014 [INFO][5470] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" HandleID="k8s-pod-network.ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" Workload="ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ec4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-2", "pod":"calico-kube-controllers-8444cc7f95-vdkpl", "timestamp":"2026-03-14 00:15:42.805051684 +0000 UTC"}, Hostname:"ip-172-31-28-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400030e580)} Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:43.015 [INFO][5470] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:43.015 [INFO][5470] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:43.016 [INFO][5470] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-2' Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:43.034 [INFO][5470] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" host="ip-172-31-28-2" Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:43.061 [INFO][5470] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-2" Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:43.100 [INFO][5470] ipam/ipam.go 526: Trying affinity for 192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:43.112 [INFO][5470] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:43.128 [INFO][5470] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:43.129 [INFO][5470] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" host="ip-172-31-28-2" Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:43.146 [INFO][5470] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:43.174 [INFO][5470] ipam/ipam.go 1272: Writing block 
in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" host="ip-172-31-28-2" Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:43.206 [INFO][5470] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.50.201/26] block=192.168.50.192/26 handle="k8s-pod-network.ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" host="ip-172-31-28-2" Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:43.207 [INFO][5470] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.201/26] handle="k8s-pod-network.ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" host="ip-172-31-28-2" Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:43.210 [INFO][5470] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:43.364619 containerd[2127]: 2026-03-14 00:15:43.211 [INFO][5470] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.201/26] IPv6=[] ContainerID="ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" HandleID="k8s-pod-network.ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" Workload="ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-eth0" Mar 14 00:15:43.376420 containerd[2127]: 2026-03-14 00:15:43.233 [INFO][5391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" Namespace="calico-system" Pod="calico-kube-controllers-8444cc7f95-vdkpl" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-eth0", GenerateName:"calico-kube-controllers-8444cc7f95-", Namespace:"calico-system", SelfLink:"", UID:"b8e6ece2-771a-4b1a-940c-5db5a7d45aa3", ResourceVersion:"953", 
Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8444cc7f95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"", Pod:"calico-kube-controllers-8444cc7f95-vdkpl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc105cc6006", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:43.376420 containerd[2127]: 2026-03-14 00:15:43.233 [INFO][5391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.201/32] ContainerID="ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" Namespace="calico-system" Pod="calico-kube-controllers-8444cc7f95-vdkpl" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-eth0" Mar 14 00:15:43.376420 containerd[2127]: 2026-03-14 00:15:43.233 [INFO][5391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc105cc6006 ContainerID="ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" Namespace="calico-system" Pod="calico-kube-controllers-8444cc7f95-vdkpl" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-eth0" Mar 14 00:15:43.376420 containerd[2127]: 2026-03-14 
00:15:43.277 [INFO][5391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" Namespace="calico-system" Pod="calico-kube-controllers-8444cc7f95-vdkpl" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-eth0" Mar 14 00:15:43.376420 containerd[2127]: 2026-03-14 00:15:43.281 [INFO][5391] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" Namespace="calico-system" Pod="calico-kube-controllers-8444cc7f95-vdkpl" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-eth0", GenerateName:"calico-kube-controllers-8444cc7f95-", Namespace:"calico-system", SelfLink:"", UID:"b8e6ece2-771a-4b1a-940c-5db5a7d45aa3", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8444cc7f95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb", Pod:"calico-kube-controllers-8444cc7f95-vdkpl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.50.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc105cc6006", MAC:"1a:0e:11:fc:b6:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:43.376420 containerd[2127]: 2026-03-14 00:15:43.304 [INFO][5391] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb" Namespace="calico-system" Pod="calico-kube-controllers-8444cc7f95-vdkpl" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--kube--controllers--8444cc7f95--vdkpl-eth0" Mar 14 00:15:43.645707 containerd[2127]: time="2026-03-14T00:15:43.643571608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:43.646929 containerd[2127]: time="2026-03-14T00:15:43.646662580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:43.646929 containerd[2127]: time="2026-03-14T00:15:43.646718584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:43.649418 containerd[2127]: time="2026-03-14T00:15:43.648616228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:43.959790 systemd-journald[1608]: Under memory pressure, flushing caches. Mar 14 00:15:43.954523 systemd-resolved[2025]: Under memory pressure, flushing caches. Mar 14 00:15:43.954574 systemd-resolved[2025]: Flushed all caches. 
Mar 14 00:15:43.999402 containerd[2127]: time="2026-03-14T00:15:43.998554938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8444cc7f95-vdkpl,Uid:b8e6ece2-771a-4b1a-940c-5db5a7d45aa3,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb\"" Mar 14 00:15:44.244868 systemd-networkd[1684]: vxlan.calico: Link UP Mar 14 00:15:44.244889 systemd-networkd[1684]: vxlan.calico: Gained carrier Mar 14 00:15:44.406840 (udev-worker)[4871]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:15:44.779214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount510681097.mount: Deactivated successfully. Mar 14 00:15:45.234529 systemd-networkd[1684]: calidc105cc6006: Gained IPv6LL Mar 14 00:15:45.555243 containerd[2127]: time="2026-03-14T00:15:45.555078342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:45.557814 containerd[2127]: time="2026-03-14T00:15:45.557444730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 14 00:15:45.558802 containerd[2127]: time="2026-03-14T00:15:45.558757446Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:45.565291 containerd[2127]: time="2026-03-14T00:15:45.565156266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:45.567401 containerd[2127]: time="2026-03-14T00:15:45.566914662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id 
\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 4.904806069s" Mar 14 00:15:45.567401 containerd[2127]: time="2026-03-14T00:15:45.566984670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 14 00:15:45.571350 containerd[2127]: time="2026-03-14T00:15:45.571239954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:15:45.577534 containerd[2127]: time="2026-03-14T00:15:45.577477674Z" level=info msg="CreateContainer within sandbox \"486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 14 00:15:45.606505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1236194952.mount: Deactivated successfully. Mar 14 00:15:45.611112 containerd[2127]: time="2026-03-14T00:15:45.610956570Z" level=info msg="CreateContainer within sandbox \"486bdc245670476d4e74de98f6eb415542fabc4769ca147ecdc6be5ad6ba6d28\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f79f0c755ea0397e4acde80be3708f0e9622564b8874f1978ec5a3391699028d\"" Mar 14 00:15:45.616543 containerd[2127]: time="2026-03-14T00:15:45.616466262Z" level=info msg="StartContainer for \"f79f0c755ea0397e4acde80be3708f0e9622564b8874f1978ec5a3391699028d\"" Mar 14 00:15:45.767588 containerd[2127]: time="2026-03-14T00:15:45.767404915Z" level=info msg="StartContainer for \"f79f0c755ea0397e4acde80be3708f0e9622564b8874f1978ec5a3391699028d\" returns successfully" Mar 14 00:15:45.875648 systemd-networkd[1684]: vxlan.calico: Gained IPv6LL Mar 14 00:15:46.002549 systemd-resolved[2025]: Under memory pressure, flushing caches. 
Mar 14 00:15:46.005531 systemd-journald[1608]: Under memory pressure, flushing caches. Mar 14 00:15:46.002578 systemd-resolved[2025]: Flushed all caches. Mar 14 00:15:46.145038 kubelet[3696]: I0314 00:15:46.144824 3696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-22gqf" podStartSLOduration=26.232651896 podStartE2EDuration="31.144799805s" podCreationTimestamp="2026-03-14 00:15:15 +0000 UTC" firstStartedPulling="2026-03-14 00:15:40.657896269 +0000 UTC m=+49.519165147" lastFinishedPulling="2026-03-14 00:15:45.570044178 +0000 UTC m=+54.431313056" observedRunningTime="2026-03-14 00:15:46.142955189 +0000 UTC m=+55.004224079" watchObservedRunningTime="2026-03-14 00:15:46.144799805 +0000 UTC m=+55.006068683" Mar 14 00:15:47.793769 systemd[1]: Started sshd@7-172.31.28.2:22-68.220.241.50:36612.service - OpenSSH per-connection server daemon (68.220.241.50:36612). Mar 14 00:15:48.332282 sshd[5734]: Accepted publickey for core from 68.220.241.50 port 36612 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:15:48.338856 sshd[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:48.351460 systemd-logind[2106]: New session 8 of user core. Mar 14 00:15:48.357945 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 14 00:15:48.666337 ntpd[2086]: Listen normally on 6 vxlan.calico 192.168.50.192:123 Mar 14 00:15:48.671330 ntpd[2086]: 14 Mar 00:15:48 ntpd[2086]: Listen normally on 6 vxlan.calico 192.168.50.192:123 Mar 14 00:15:48.671330 ntpd[2086]: 14 Mar 00:15:48 ntpd[2086]: Listen normally on 7 cali1bf52bbf4df [fe80::ecee:eeff:feee:eeee%4]:123 Mar 14 00:15:48.671330 ntpd[2086]: 14 Mar 00:15:48 ntpd[2086]: Listen normally on 8 calie2060b10e2a [fe80::ecee:eeff:feee:eeee%5]:123 Mar 14 00:15:48.671330 ntpd[2086]: 14 Mar 00:15:48 ntpd[2086]: Listen normally on 9 cali5cda50cbe9a [fe80::ecee:eeff:feee:eeee%6]:123 Mar 14 00:15:48.671330 ntpd[2086]: 14 Mar 00:15:48 ntpd[2086]: Listen normally on 10 cali74a271ef3fd [fe80::ecee:eeff:feee:eeee%7]:123 Mar 14 00:15:48.671330 ntpd[2086]: 14 Mar 00:15:48 ntpd[2086]: Listen normally on 11 cali9e993974351 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 14 00:15:48.671330 ntpd[2086]: 14 Mar 00:15:48 ntpd[2086]: Listen normally on 12 caliae6bccba74f [fe80::ecee:eeff:feee:eeee%9]:123 Mar 14 00:15:48.671330 ntpd[2086]: 14 Mar 00:15:48 ntpd[2086]: Listen normally on 13 calid09b783f110 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 14 00:15:48.671330 ntpd[2086]: 14 Mar 00:15:48 ntpd[2086]: Listen normally on 14 cali9476b6a5e47 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 14 00:15:48.671330 ntpd[2086]: 14 Mar 00:15:48 ntpd[2086]: Listen normally on 15 calidc105cc6006 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 14 00:15:48.671330 ntpd[2086]: 14 Mar 00:15:48 ntpd[2086]: Listen normally on 16 vxlan.calico [fe80::6487:aaff:fe41:1560%13]:123 Mar 14 00:15:48.668482 ntpd[2086]: Listen normally on 7 cali1bf52bbf4df [fe80::ecee:eeff:feee:eeee%4]:123 Mar 14 00:15:48.668568 ntpd[2086]: Listen normally on 8 calie2060b10e2a [fe80::ecee:eeff:feee:eeee%5]:123 Mar 14 00:15:48.668639 ntpd[2086]: Listen normally on 9 cali5cda50cbe9a [fe80::ecee:eeff:feee:eeee%6]:123 Mar 14 00:15:48.668707 ntpd[2086]: Listen normally on 10 cali74a271ef3fd [fe80::ecee:eeff:feee:eeee%7]:123 Mar 14 00:15:48.668776 
ntpd[2086]: Listen normally on 11 cali9e993974351 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 14 00:15:48.668844 ntpd[2086]: Listen normally on 12 caliae6bccba74f [fe80::ecee:eeff:feee:eeee%9]:123 Mar 14 00:15:48.668912 ntpd[2086]: Listen normally on 13 calid09b783f110 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 14 00:15:48.668988 ntpd[2086]: Listen normally on 14 cali9476b6a5e47 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 14 00:15:48.669063 ntpd[2086]: Listen normally on 15 calidc105cc6006 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 14 00:15:48.669135 ntpd[2086]: Listen normally on 16 vxlan.calico [fe80::6487:aaff:fe41:1560%13]:123 Mar 14 00:15:48.887822 sshd[5734]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:48.901331 systemd[1]: sshd@7-172.31.28.2:22-68.220.241.50:36612.service: Deactivated successfully. Mar 14 00:15:48.911248 systemd[1]: session-8.scope: Deactivated successfully. Mar 14 00:15:48.914106 systemd-logind[2106]: Session 8 logged out. Waiting for processes to exit. Mar 14 00:15:48.918433 systemd-logind[2106]: Removed session 8. 
Mar 14 00:15:49.274525 containerd[2127]: time="2026-03-14T00:15:49.274471700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:49.276862 containerd[2127]: time="2026-03-14T00:15:49.276814244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 14 00:15:49.277863 containerd[2127]: time="2026-03-14T00:15:49.277805864Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:49.283989 containerd[2127]: time="2026-03-14T00:15:49.283937144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:49.285779 containerd[2127]: time="2026-03-14T00:15:49.285607412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 3.714255582s" Mar 14 00:15:49.285779 containerd[2127]: time="2026-03-14T00:15:49.285663644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 14 00:15:49.288508 containerd[2127]: time="2026-03-14T00:15:49.288446156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 14 00:15:49.295680 containerd[2127]: time="2026-03-14T00:15:49.295625108Z" level=info msg="CreateContainer within sandbox 
\"58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:15:49.328468 containerd[2127]: time="2026-03-14T00:15:49.328345340Z" level=info msg="CreateContainer within sandbox \"58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703\"" Mar 14 00:15:49.329283 containerd[2127]: time="2026-03-14T00:15:49.329048492Z" level=info msg="StartContainer for \"3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703\"" Mar 14 00:15:49.508405 containerd[2127]: time="2026-03-14T00:15:49.507296073Z" level=info msg="StartContainer for \"3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703\" returns successfully" Mar 14 00:15:50.172229 kubelet[3696]: I0314 00:15:50.171862 3696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6579f7776-4xv6h" podStartSLOduration=27.592884939 podStartE2EDuration="36.171716793s" podCreationTimestamp="2026-03-14 00:15:14 +0000 UTC" firstStartedPulling="2026-03-14 00:15:40.708559526 +0000 UTC m=+49.569828404" lastFinishedPulling="2026-03-14 00:15:49.287391392 +0000 UTC m=+58.148660258" observedRunningTime="2026-03-14 00:15:50.169983393 +0000 UTC m=+59.031252283" watchObservedRunningTime="2026-03-14 00:15:50.171716793 +0000 UTC m=+59.032985731" Mar 14 00:15:50.757479 containerd[2127]: time="2026-03-14T00:15:50.757222116Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:50.761503 containerd[2127]: time="2026-03-14T00:15:50.761437824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 14 00:15:50.764191 containerd[2127]: time="2026-03-14T00:15:50.764004252Z" level=info 
msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:50.775755 containerd[2127]: time="2026-03-14T00:15:50.775549992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:50.778103 containerd[2127]: time="2026-03-14T00:15:50.777911880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.489399988s" Mar 14 00:15:50.778103 containerd[2127]: time="2026-03-14T00:15:50.777975804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 14 00:15:50.795557 containerd[2127]: time="2026-03-14T00:15:50.795456348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:15:50.810810 containerd[2127]: time="2026-03-14T00:15:50.810732228Z" level=info msg="CreateContainer within sandbox \"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 14 00:15:50.845534 containerd[2127]: time="2026-03-14T00:15:50.844770084Z" level=info msg="CreateContainer within sandbox \"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\"" Mar 14 00:15:50.851292 containerd[2127]: time="2026-03-14T00:15:50.851212680Z" level=info msg="StartContainer for 
\"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\"" Mar 14 00:15:51.068947 containerd[2127]: time="2026-03-14T00:15:51.068774397Z" level=info msg="StartContainer for \"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\" returns successfully" Mar 14 00:15:51.159856 containerd[2127]: time="2026-03-14T00:15:51.157074430Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:51.159856 containerd[2127]: time="2026-03-14T00:15:51.157190578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 14 00:15:51.161618 kubelet[3696]: I0314 00:15:51.161537 3696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:15:51.178955 containerd[2127]: time="2026-03-14T00:15:51.178614238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 383.053862ms" Mar 14 00:15:51.178955 containerd[2127]: time="2026-03-14T00:15:51.178682386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 14 00:15:51.184248 containerd[2127]: time="2026-03-14T00:15:51.184173922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:15:51.194933 containerd[2127]: time="2026-03-14T00:15:51.194848678Z" level=info msg="CreateContainer within sandbox \"ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:15:51.231015 containerd[2127]: 
time="2026-03-14T00:15:51.230758306Z" level=info msg="CreateContainer within sandbox \"ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4cc1a2d05595428150a2de006790f509de7e6d9017deb2ca713a151eb5ab50dc\"" Mar 14 00:15:51.236255 containerd[2127]: time="2026-03-14T00:15:51.236160418Z" level=info msg="StartContainer for \"4cc1a2d05595428150a2de006790f509de7e6d9017deb2ca713a151eb5ab50dc\"" Mar 14 00:15:51.479733 containerd[2127]: time="2026-03-14T00:15:51.478862111Z" level=info msg="StartContainer for \"4cc1a2d05595428150a2de006790f509de7e6d9017deb2ca713a151eb5ab50dc\" returns successfully" Mar 14 00:15:51.558059 containerd[2127]: time="2026-03-14T00:15:51.552393504Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:51.558059 containerd[2127]: time="2026-03-14T00:15:51.552908364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 14 00:15:51.578931 containerd[2127]: time="2026-03-14T00:15:51.577060092Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 392.806178ms" Mar 14 00:15:51.578931 containerd[2127]: time="2026-03-14T00:15:51.577130556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 14 00:15:51.580956 containerd[2127]: time="2026-03-14T00:15:51.579444540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 14 00:15:51.596901 containerd[2127]: 
time="2026-03-14T00:15:51.596723220Z" level=info msg="CreateContainer within sandbox \"9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:15:51.628204 containerd[2127]: time="2026-03-14T00:15:51.627977892Z" level=info msg="CreateContainer within sandbox \"9a0fc71079573dd7792576f9ca16aea2c5e930d2129b49faf1928589c7c4bd7d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a095594764895d7036f2354ac7b0c7298a0b862af4914e2459919af315bb387a\"" Mar 14 00:15:51.633954 containerd[2127]: time="2026-03-14T00:15:51.631551336Z" level=info msg="StartContainer for \"a095594764895d7036f2354ac7b0c7298a0b862af4914e2459919af315bb387a\"" Mar 14 00:15:51.866355 containerd[2127]: time="2026-03-14T00:15:51.866280061Z" level=info msg="StartContainer for \"a095594764895d7036f2354ac7b0c7298a0b862af4914e2459919af315bb387a\" returns successfully" Mar 14 00:15:52.266786 kubelet[3696]: I0314 00:15:52.266657 3696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6579f7776-dhvpk" podStartSLOduration=28.916196512 podStartE2EDuration="38.266608619s" podCreationTimestamp="2026-03-14 00:15:14 +0000 UTC" firstStartedPulling="2026-03-14 00:15:41.830955075 +0000 UTC m=+50.692223953" lastFinishedPulling="2026-03-14 00:15:51.181367194 +0000 UTC m=+60.042636060" observedRunningTime="2026-03-14 00:15:52.259295711 +0000 UTC m=+61.120564613" watchObservedRunningTime="2026-03-14 00:15:52.266608619 +0000 UTC m=+61.127877497" Mar 14 00:15:52.267580 kubelet[3696]: I0314 00:15:52.266908 3696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7bfc5795cd-9tsrz" podStartSLOduration=26.868945144 podStartE2EDuration="36.266875307s" podCreationTimestamp="2026-03-14 00:15:16 +0000 UTC" firstStartedPulling="2026-03-14 00:15:42.180365893 +0000 UTC m=+51.041634771" lastFinishedPulling="2026-03-14 
00:15:51.578296068 +0000 UTC m=+60.439564934" observedRunningTime="2026-03-14 00:15:52.221074991 +0000 UTC m=+61.082343881" watchObservedRunningTime="2026-03-14 00:15:52.266875307 +0000 UTC m=+61.128144185" Mar 14 00:15:53.193939 kubelet[3696]: I0314 00:15:53.191342 3696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:15:53.193939 kubelet[3696]: I0314 00:15:53.192186 3696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:15:53.267781 containerd[2127]: time="2026-03-14T00:15:53.267709512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:53.271925 containerd[2127]: time="2026-03-14T00:15:53.271867632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 14 00:15:53.273853 containerd[2127]: time="2026-03-14T00:15:53.272804292Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:53.281550 containerd[2127]: time="2026-03-14T00:15:53.281485800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:53.287075 containerd[2127]: time="2026-03-14T00:15:53.286988124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.707160868s" Mar 14 00:15:53.288295 containerd[2127]: time="2026-03-14T00:15:53.287073384Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 14 00:15:53.321053 containerd[2127]: time="2026-03-14T00:15:53.319811304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 14 00:15:53.335777 containerd[2127]: time="2026-03-14T00:15:53.335712300Z" level=info msg="CreateContainer within sandbox \"b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 14 00:15:53.399353 containerd[2127]: time="2026-03-14T00:15:53.398133685Z" level=info msg="CreateContainer within sandbox \"b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"586ba14451fcbae2e1befd4fdb9f0fd42e1cecf8ef17182f66ba552d63596e4b\"" Mar 14 00:15:53.402040 containerd[2127]: time="2026-03-14T00:15:53.401549089Z" level=info msg="StartContainer for \"586ba14451fcbae2e1befd4fdb9f0fd42e1cecf8ef17182f66ba552d63596e4b\"" Mar 14 00:15:53.677001 containerd[2127]: time="2026-03-14T00:15:53.676518002Z" level=info msg="StartContainer for \"586ba14451fcbae2e1befd4fdb9f0fd42e1cecf8ef17182f66ba552d63596e4b\" returns successfully" Mar 14 00:15:54.012829 systemd[1]: Started sshd@8-172.31.28.2:22-68.220.241.50:46720.service - OpenSSH per-connection server daemon (68.220.241.50:46720). Mar 14 00:15:54.605315 sshd[5982]: Accepted publickey for core from 68.220.241.50 port 46720 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:15:54.615817 kubelet[3696]: I0314 00:15:54.613572 3696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:15:54.621910 sshd[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:54.667372 systemd-logind[2106]: New session 9 of user core. Mar 14 00:15:54.676324 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 14 00:15:55.357571 sshd[5982]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:55.382515 systemd[1]: sshd@8-172.31.28.2:22-68.220.241.50:46720.service: Deactivated successfully. Mar 14 00:15:55.400191 systemd[1]: session-9.scope: Deactivated successfully. Mar 14 00:15:55.403692 systemd-logind[2106]: Session 9 logged out. Waiting for processes to exit. Mar 14 00:15:55.418639 systemd-logind[2106]: Removed session 9. Mar 14 00:15:55.924069 systemd-resolved[2025]: Under memory pressure, flushing caches. Mar 14 00:15:55.924121 systemd-resolved[2025]: Flushed all caches. Mar 14 00:15:55.927361 systemd-journald[1608]: Under memory pressure, flushing caches. Mar 14 00:15:57.484229 containerd[2127]: time="2026-03-14T00:15:57.483685421Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:57.487863 containerd[2127]: time="2026-03-14T00:15:57.487177193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 14 00:15:57.490726 containerd[2127]: time="2026-03-14T00:15:57.490558529Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:57.506395 containerd[2127]: time="2026-03-14T00:15:57.504018941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:57.512373 containerd[2127]: time="2026-03-14T00:15:57.511581785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 4.191695085s" Mar 14 00:15:57.512373 containerd[2127]: time="2026-03-14T00:15:57.511659281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 14 00:15:57.516745 containerd[2127]: time="2026-03-14T00:15:57.516690785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 14 00:15:57.587862 containerd[2127]: time="2026-03-14T00:15:57.587576369Z" level=info msg="CreateContainer within sandbox \"ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 14 00:15:57.624663 containerd[2127]: time="2026-03-14T00:15:57.624492798Z" level=info msg="CreateContainer within sandbox \"ef7014f2c50d187700c043d6b27bc3d70c17367ecee36ddc002d4322f03f1bfb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a2225435cd603db2e615841cb79eb87d4c43b075237904c98aaaba19f1f40a8a\"" Mar 14 00:15:57.635980 containerd[2127]: time="2026-03-14T00:15:57.628616106Z" level=info msg="StartContainer for \"a2225435cd603db2e615841cb79eb87d4c43b075237904c98aaaba19f1f40a8a\"" Mar 14 00:15:57.632689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2637830101.mount: Deactivated successfully. Mar 14 00:15:57.975377 systemd-journald[1608]: Under memory pressure, flushing caches. Mar 14 00:15:57.971733 systemd-resolved[2025]: Under memory pressure, flushing caches. Mar 14 00:15:57.971759 systemd-resolved[2025]: Flushed all caches. 
Mar 14 00:15:58.014168 containerd[2127]: time="2026-03-14T00:15:58.013189084Z" level=info msg="StartContainer for \"a2225435cd603db2e615841cb79eb87d4c43b075237904c98aaaba19f1f40a8a\" returns successfully" Mar 14 00:15:58.576986 kubelet[3696]: I0314 00:15:58.576764 3696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8444cc7f95-vdkpl" podStartSLOduration=27.070007703 podStartE2EDuration="40.576738426s" podCreationTimestamp="2026-03-14 00:15:18 +0000 UTC" firstStartedPulling="2026-03-14 00:15:44.008047874 +0000 UTC m=+52.869316752" lastFinishedPulling="2026-03-14 00:15:57.514778585 +0000 UTC m=+66.376047475" observedRunningTime="2026-03-14 00:15:58.298960133 +0000 UTC m=+67.160229035" watchObservedRunningTime="2026-03-14 00:15:58.576738426 +0000 UTC m=+67.438007304" Mar 14 00:15:59.597056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341177740.mount: Deactivated successfully. Mar 14 00:15:59.640328 containerd[2127]: time="2026-03-14T00:15:59.640249400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:59.647069 containerd[2127]: time="2026-03-14T00:15:59.644223260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 14 00:15:59.648681 containerd[2127]: time="2026-03-14T00:15:59.648622148Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:59.661851 containerd[2127]: time="2026-03-14T00:15:59.661733288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:59.663475 containerd[2127]: 
time="2026-03-14T00:15:59.663323108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.146359779s" Mar 14 00:15:59.663661 containerd[2127]: time="2026-03-14T00:15:59.663629552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 14 00:15:59.668732 containerd[2127]: time="2026-03-14T00:15:59.668446496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 14 00:15:59.676099 containerd[2127]: time="2026-03-14T00:15:59.676025996Z" level=info msg="CreateContainer within sandbox \"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 14 00:15:59.703033 containerd[2127]: time="2026-03-14T00:15:59.701455376Z" level=info msg="CreateContainer within sandbox \"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\"" Mar 14 00:15:59.704624 containerd[2127]: time="2026-03-14T00:15:59.704520464Z" level=info msg="StartContainer for \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\"" Mar 14 00:15:59.904645 containerd[2127]: time="2026-03-14T00:15:59.904507365Z" level=info msg="StartContainer for \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\" returns successfully" Mar 14 00:16:00.254587 containerd[2127]: time="2026-03-14T00:16:00.254502475Z" level=info msg="StopContainer for 
\"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\" with timeout 30 (s)" Mar 14 00:16:00.255996 containerd[2127]: time="2026-03-14T00:16:00.255938071Z" level=info msg="StopContainer for \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\" with timeout 30 (s)" Mar 14 00:16:00.256733 containerd[2127]: time="2026-03-14T00:16:00.256564279Z" level=info msg="Stop container \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\" with signal terminated" Mar 14 00:16:00.259892 containerd[2127]: time="2026-03-14T00:16:00.258351295Z" level=info msg="Stop container \"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\" with signal terminated" Mar 14 00:16:00.289899 kubelet[3696]: I0314 00:16:00.289646 3696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-75cf64fb64-gfxc8" podStartSLOduration=21.287787265 podStartE2EDuration="39.289622743s" podCreationTimestamp="2026-03-14 00:15:21 +0000 UTC" firstStartedPulling="2026-03-14 00:15:41.66624221 +0000 UTC m=+50.527511100" lastFinishedPulling="2026-03-14 00:15:59.66807764 +0000 UTC m=+68.529346578" observedRunningTime="2026-03-14 00:16:00.289210771 +0000 UTC m=+69.150479661" watchObservedRunningTime="2026-03-14 00:16:00.289622743 +0000 UTC m=+69.150891633" Mar 14 00:16:00.390687 containerd[2127]: time="2026-03-14T00:16:00.388829935Z" level=info msg="shim disconnected" id=43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa namespace=k8s.io Mar 14 00:16:00.390687 containerd[2127]: time="2026-03-14T00:16:00.388915375Z" level=warning msg="cleaning up after shim disconnected" id=43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa namespace=k8s.io Mar 14 00:16:00.390687 containerd[2127]: time="2026-03-14T00:16:00.388937179Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:16:00.423544 containerd[2127]: time="2026-03-14T00:16:00.422309912Z" level=warning msg="cleanup warnings 
time=\"2026-03-14T00:16:00Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:16:00.443804 systemd[1]: Started sshd@9-172.31.28.2:22-68.220.241.50:46722.service - OpenSSH per-connection server daemon (68.220.241.50:46722). Mar 14 00:16:00.447450 containerd[2127]: time="2026-03-14T00:16:00.447341420Z" level=info msg="shim disconnected" id=43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2 namespace=k8s.io Mar 14 00:16:00.448113 containerd[2127]: time="2026-03-14T00:16:00.447714344Z" level=warning msg="cleaning up after shim disconnected" id=43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2 namespace=k8s.io Mar 14 00:16:00.448113 containerd[2127]: time="2026-03-14T00:16:00.447741740Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:16:00.453638 containerd[2127]: time="2026-03-14T00:16:00.453581012Z" level=info msg="StopContainer for \"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\" returns successfully" Mar 14 00:16:00.488542 containerd[2127]: time="2026-03-14T00:16:00.488459972Z" level=info msg="StopContainer for \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\" returns successfully" Mar 14 00:16:00.491059 containerd[2127]: time="2026-03-14T00:16:00.490810424Z" level=info msg="StopPodSandbox for \"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c\"" Mar 14 00:16:00.491059 containerd[2127]: time="2026-03-14T00:16:00.491022536Z" level=info msg="Container to stop \"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:16:00.491356 containerd[2127]: time="2026-03-14T00:16:00.491055488Z" level=info msg="Container to stop \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Mar 14 00:16:00.544396 containerd[2127]: time="2026-03-14T00:16:00.542375036Z" level=info msg="shim disconnected" id=e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c namespace=k8s.io Mar 14 00:16:00.544396 containerd[2127]: time="2026-03-14T00:16:00.542481512Z" level=warning msg="cleaning up after shim disconnected" id=e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c namespace=k8s.io Mar 14 00:16:00.544396 containerd[2127]: time="2026-03-14T00:16:00.542505320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:16:00.599655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2-rootfs.mount: Deactivated successfully. Mar 14 00:16:00.600735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa-rootfs.mount: Deactivated successfully. Mar 14 00:16:00.600969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c-rootfs.mount: Deactivated successfully. Mar 14 00:16:00.601199 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c-shm.mount: Deactivated successfully. Mar 14 00:16:00.697744 systemd-networkd[1684]: cali74a271ef3fd: Link DOWN Mar 14 00:16:00.697765 systemd-networkd[1684]: cali74a271ef3fd: Lost carrier Mar 14 00:16:00.925845 containerd[2127]: 2026-03-14 00:16:00.693 [INFO][6242] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Mar 14 00:16:00.925845 containerd[2127]: 2026-03-14 00:16:00.694 [INFO][6242] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" iface="eth0" netns="/var/run/netns/cni-78cc6455-1a7e-0b0c-3bc3-9d7c9fb711bd" Mar 14 00:16:00.925845 containerd[2127]: 2026-03-14 00:16:00.696 [INFO][6242] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" iface="eth0" netns="/var/run/netns/cni-78cc6455-1a7e-0b0c-3bc3-9d7c9fb711bd" Mar 14 00:16:00.925845 containerd[2127]: 2026-03-14 00:16:00.711 [INFO][6242] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" after=16.52208ms iface="eth0" netns="/var/run/netns/cni-78cc6455-1a7e-0b0c-3bc3-9d7c9fb711bd" Mar 14 00:16:00.925845 containerd[2127]: 2026-03-14 00:16:00.711 [INFO][6242] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Mar 14 00:16:00.925845 containerd[2127]: 2026-03-14 00:16:00.711 [INFO][6242] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Mar 14 00:16:00.925845 containerd[2127]: 2026-03-14 00:16:00.797 [INFO][6253] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" HandleID="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Workload="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" Mar 14 00:16:00.925845 containerd[2127]: 2026-03-14 00:16:00.797 [INFO][6253] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:16:00.925845 containerd[2127]: 2026-03-14 00:16:00.798 [INFO][6253] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:16:00.925845 containerd[2127]: 2026-03-14 00:16:00.902 [INFO][6253] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" HandleID="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Workload="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" Mar 14 00:16:00.925845 containerd[2127]: 2026-03-14 00:16:00.904 [INFO][6253] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" HandleID="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Workload="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" Mar 14 00:16:00.925845 containerd[2127]: 2026-03-14 00:16:00.908 [INFO][6253] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:16:00.925845 containerd[2127]: 2026-03-14 00:16:00.916 [INFO][6242] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Mar 14 00:16:00.930042 containerd[2127]: time="2026-03-14T00:16:00.929447782Z" level=info msg="TearDown network for sandbox \"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c\" successfully" Mar 14 00:16:00.930042 containerd[2127]: time="2026-03-14T00:16:00.929496106Z" level=info msg="StopPodSandbox for \"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c\" returns successfully" Mar 14 00:16:00.937925 systemd[1]: run-netns-cni\x2d78cc6455\x2d1a7e\x2d0b0c\x2d3bc3\x2d9d7c9fb711bd.mount: Deactivated successfully. Mar 14 00:16:00.987983 sshd[6186]: Accepted publickey for core from 68.220.241.50 port 46722 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:16:00.993656 sshd[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:01.026217 systemd-logind[2106]: New session 10 of user core. 
Mar 14 00:16:01.031979 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 14 00:16:01.097542 kubelet[3696]: I0314 00:16:01.097338 3696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811-whisker-backend-key-pair\") pod \"6c90c97c-9e7f-46d8-bdd7-1f5e381c5811\" (UID: \"6c90c97c-9e7f-46d8-bdd7-1f5e381c5811\") " Mar 14 00:16:01.097542 kubelet[3696]: I0314 00:16:01.097452 3696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811-whisker-ca-bundle\") pod \"6c90c97c-9e7f-46d8-bdd7-1f5e381c5811\" (UID: \"6c90c97c-9e7f-46d8-bdd7-1f5e381c5811\") " Mar 14 00:16:01.097542 kubelet[3696]: I0314 00:16:01.097499 3696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811-nginx-config\") pod \"6c90c97c-9e7f-46d8-bdd7-1f5e381c5811\" (UID: \"6c90c97c-9e7f-46d8-bdd7-1f5e381c5811\") " Mar 14 00:16:01.097542 kubelet[3696]: I0314 00:16:01.097541 3696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9nz2\" (UniqueName: \"kubernetes.io/projected/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811-kube-api-access-r9nz2\") pod \"6c90c97c-9e7f-46d8-bdd7-1f5e381c5811\" (UID: \"6c90c97c-9e7f-46d8-bdd7-1f5e381c5811\") " Mar 14 00:16:01.101110 kubelet[3696]: I0314 00:16:01.101054 3696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "6c90c97c-9e7f-46d8-bdd7-1f5e381c5811" (UID: "6c90c97c-9e7f-46d8-bdd7-1f5e381c5811"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:16:01.102074 kubelet[3696]: I0314 00:16:01.101949 3696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6c90c97c-9e7f-46d8-bdd7-1f5e381c5811" (UID: "6c90c97c-9e7f-46d8-bdd7-1f5e381c5811"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:16:01.117392 kubelet[3696]: I0314 00:16:01.117214 3696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811-kube-api-access-r9nz2" (OuterVolumeSpecName: "kube-api-access-r9nz2") pod "6c90c97c-9e7f-46d8-bdd7-1f5e381c5811" (UID: "6c90c97c-9e7f-46d8-bdd7-1f5e381c5811"). InnerVolumeSpecName "kube-api-access-r9nz2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:16:01.120663 systemd[1]: var-lib-kubelet-pods-6c90c97c\x2d9e7f\x2d46d8\x2dbdd7\x2d1f5e381c5811-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr9nz2.mount: Deactivated successfully. Mar 14 00:16:01.124115 kubelet[3696]: I0314 00:16:01.123749 3696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6c90c97c-9e7f-46d8-bdd7-1f5e381c5811" (UID: "6c90c97c-9e7f-46d8-bdd7-1f5e381c5811"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:16:01.129683 systemd[1]: var-lib-kubelet-pods-6c90c97c\x2d9e7f\x2d46d8\x2dbdd7\x2d1f5e381c5811-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 14 00:16:01.199230 kubelet[3696]: I0314 00:16:01.198875 3696 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811-whisker-backend-key-pair\") on node \"ip-172-31-28-2\" DevicePath \"\"" Mar 14 00:16:01.199230 kubelet[3696]: I0314 00:16:01.198924 3696 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811-whisker-ca-bundle\") on node \"ip-172-31-28-2\" DevicePath \"\"" Mar 14 00:16:01.199230 kubelet[3696]: I0314 00:16:01.198947 3696 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811-nginx-config\") on node \"ip-172-31-28-2\" DevicePath \"\"" Mar 14 00:16:01.199230 kubelet[3696]: I0314 00:16:01.198972 3696 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r9nz2\" (UniqueName: \"kubernetes.io/projected/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811-kube-api-access-r9nz2\") on node \"ip-172-31-28-2\" DevicePath \"\"" Mar 14 00:16:01.265344 kubelet[3696]: I0314 00:16:01.264227 3696 scope.go:117] "RemoveContainer" containerID="43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2" Mar 14 00:16:01.283894 containerd[2127]: time="2026-03-14T00:16:01.283832144Z" level=info msg="RemoveContainer for \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\"" Mar 14 00:16:01.291216 containerd[2127]: time="2026-03-14T00:16:01.291138176Z" level=info msg="RemoveContainer for \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\" returns successfully" Mar 14 00:16:01.292097 kubelet[3696]: I0314 00:16:01.291574 3696 scope.go:117] "RemoveContainer" containerID="43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa" Mar 14 00:16:01.299730 containerd[2127]: time="2026-03-14T00:16:01.299107580Z" level=info msg="RemoveContainer for 
\"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\"" Mar 14 00:16:01.339889 containerd[2127]: time="2026-03-14T00:16:01.339792872Z" level=info msg="RemoveContainer for \"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\" returns successfully" Mar 14 00:16:01.342568 kubelet[3696]: I0314 00:16:01.342222 3696 scope.go:117] "RemoveContainer" containerID="43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2" Mar 14 00:16:01.345546 containerd[2127]: time="2026-03-14T00:16:01.345197732Z" level=error msg="ContainerStatus for \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\": not found" Mar 14 00:16:01.346827 kubelet[3696]: E0314 00:16:01.345994 3696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\": not found" containerID="43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2" Mar 14 00:16:01.346827 kubelet[3696]: I0314 00:16:01.346051 3696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2"} err="failed to get container status \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\": rpc error: code = NotFound desc = an error occurred when try to find container \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\": not found" Mar 14 00:16:01.346827 kubelet[3696]: I0314 00:16:01.346285 3696 scope.go:117] "RemoveContainer" containerID="43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa" Mar 14 00:16:01.347039 containerd[2127]: time="2026-03-14T00:16:01.346905692Z" level=error msg="ContainerStatus for 
\"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\": not found" Mar 14 00:16:01.348746 kubelet[3696]: E0314 00:16:01.347553 3696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\": not found" containerID="43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa" Mar 14 00:16:01.348746 kubelet[3696]: I0314 00:16:01.347615 3696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa"} err="failed to get container status \"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\": not found" Mar 14 00:16:01.348746 kubelet[3696]: I0314 00:16:01.347655 3696 scope.go:117] "RemoveContainer" containerID="43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2" Mar 14 00:16:01.350839 containerd[2127]: time="2026-03-14T00:16:01.349602212Z" level=error msg="ContainerStatus for \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\": not found" Mar 14 00:16:01.355599 containerd[2127]: time="2026-03-14T00:16:01.352549256Z" level=error msg="ContainerStatus for \"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\": not 
found" Mar 14 00:16:01.355701 kubelet[3696]: I0314 00:16:01.351026 3696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2"} err="failed to get container status \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\": rpc error: code = NotFound desc = an error occurred when try to find container \"43e562a7a6e7c30a5b3fb3ac5e82864cedaafb7dd77460d7a2a5e010d2226ee2\": not found" Mar 14 00:16:01.355701 kubelet[3696]: I0314 00:16:01.351083 3696 scope.go:117] "RemoveContainer" containerID="43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa" Mar 14 00:16:01.355701 kubelet[3696]: I0314 00:16:01.354521 3696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa"} err="failed to get container status \"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"43dccacd5b507a75ae6fc5f2dfcbfd8aa9be27d99d098b70dae346cb2d0ba4aa\": not found" Mar 14 00:16:01.418469 kubelet[3696]: I0314 00:16:01.416537 3696 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c90c97c-9e7f-46d8-bdd7-1f5e381c5811" path="/var/lib/kubelet/pods/6c90c97c-9e7f-46d8-bdd7-1f5e381c5811/volumes" Mar 14 00:16:01.514086 kubelet[3696]: I0314 00:16:01.513937 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0ae62c07-d35d-4fcc-8c21-5263c107b7ac-nginx-config\") pod \"whisker-65bfb8c945-vbpnf\" (UID: \"0ae62c07-d35d-4fcc-8c21-5263c107b7ac\") " pod="calico-system/whisker-65bfb8c945-vbpnf" Mar 14 00:16:01.515875 kubelet[3696]: I0314 00:16:01.514391 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nh7x5\" (UniqueName: \"kubernetes.io/projected/0ae62c07-d35d-4fcc-8c21-5263c107b7ac-kube-api-access-nh7x5\") pod \"whisker-65bfb8c945-vbpnf\" (UID: \"0ae62c07-d35d-4fcc-8c21-5263c107b7ac\") " pod="calico-system/whisker-65bfb8c945-vbpnf" Mar 14 00:16:01.518403 kubelet[3696]: I0314 00:16:01.515372 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0ae62c07-d35d-4fcc-8c21-5263c107b7ac-whisker-backend-key-pair\") pod \"whisker-65bfb8c945-vbpnf\" (UID: \"0ae62c07-d35d-4fcc-8c21-5263c107b7ac\") " pod="calico-system/whisker-65bfb8c945-vbpnf" Mar 14 00:16:01.518403 kubelet[3696]: I0314 00:16:01.517969 3696 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ae62c07-d35d-4fcc-8c21-5263c107b7ac-whisker-ca-bundle\") pod \"whisker-65bfb8c945-vbpnf\" (UID: \"0ae62c07-d35d-4fcc-8c21-5263c107b7ac\") " pod="calico-system/whisker-65bfb8c945-vbpnf" Mar 14 00:16:01.730731 sshd[6186]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:01.741120 systemd[1]: sshd@9-172.31.28.2:22-68.220.241.50:46722.service: Deactivated successfully. Mar 14 00:16:01.748857 systemd[1]: session-10.scope: Deactivated successfully. Mar 14 00:16:01.749380 systemd-logind[2106]: Session 10 logged out. Waiting for processes to exit. Mar 14 00:16:01.753750 systemd-logind[2106]: Removed session 10. 
Mar 14 00:16:01.774369 containerd[2127]: time="2026-03-14T00:16:01.774191134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65bfb8c945-vbpnf,Uid:0ae62c07-d35d-4fcc-8c21-5263c107b7ac,Namespace:calico-system,Attempt:0,}" Mar 14 00:16:01.779363 containerd[2127]: time="2026-03-14T00:16:01.778312054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:01.782461 containerd[2127]: time="2026-03-14T00:16:01.782376646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 14 00:16:01.785914 containerd[2127]: time="2026-03-14T00:16:01.785716654Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:01.796634 containerd[2127]: time="2026-03-14T00:16:01.796573498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:01.805833 containerd[2127]: time="2026-03-14T00:16:01.801340582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.13283063s" Mar 14 00:16:01.805833 containerd[2127]: time="2026-03-14T00:16:01.801404458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 14 
00:16:01.819191 containerd[2127]: time="2026-03-14T00:16:01.819121367Z" level=info msg="CreateContainer within sandbox \"b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 14 00:16:01.864498 containerd[2127]: time="2026-03-14T00:16:01.864420119Z" level=info msg="CreateContainer within sandbox \"b7a20e9cb07e0e338fb3cb06b3977f017b0bca3f54098be23ae7f85f2268588c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9b341d388b7968d4feb627941bafc05a78a657973c0bab564ec47af080ec9306\"" Mar 14 00:16:01.866227 containerd[2127]: time="2026-03-14T00:16:01.865814255Z" level=info msg="StartContainer for \"9b341d388b7968d4feb627941bafc05a78a657973c0bab564ec47af080ec9306\"" Mar 14 00:16:02.049742 containerd[2127]: time="2026-03-14T00:16:02.049012808Z" level=info msg="StartContainer for \"9b341d388b7968d4feb627941bafc05a78a657973c0bab564ec47af080ec9306\" returns successfully" Mar 14 00:16:02.131992 (udev-worker)[6250]: Network interface NamePolicy= disabled on kernel command line. 
Mar 14 00:16:02.133658 systemd-networkd[1684]: calic60271d4cae: Link UP Mar 14 00:16:02.134120 systemd-networkd[1684]: calic60271d4cae: Gained carrier Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:01.921 [INFO][6302] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--2-k8s-whisker--65bfb8c945--vbpnf-eth0 whisker-65bfb8c945- calico-system 0ae62c07-d35d-4fcc-8c21-5263c107b7ac 1202 0 2026-03-14 00:16:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65bfb8c945 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-2 whisker-65bfb8c945-vbpnf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic60271d4cae [] [] }} ContainerID="95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" Namespace="calico-system" Pod="whisker-65bfb8c945-vbpnf" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--65bfb8c945--vbpnf-" Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:01.922 [INFO][6302] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" Namespace="calico-system" Pod="whisker-65bfb8c945-vbpnf" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--65bfb8c945--vbpnf-eth0" Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.018 [INFO][6328] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" HandleID="k8s-pod-network.95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" Workload="ip--172--31--28--2-k8s-whisker--65bfb8c945--vbpnf-eth0" Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.047 [INFO][6328] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" 
HandleID="k8s-pod-network.95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" Workload="ip--172--31--28--2-k8s-whisker--65bfb8c945--vbpnf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-2", "pod":"whisker-65bfb8c945-vbpnf", "timestamp":"2026-03-14 00:16:02.018870007 +0000 UTC"}, Hostname:"ip-172-31-28-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000184580)} Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.048 [INFO][6328] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.051 [INFO][6328] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.052 [INFO][6328] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-2' Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.057 [INFO][6328] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" host="ip-172-31-28-2" Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.067 [INFO][6328] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-2" Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.080 [INFO][6328] ipam/ipam.go 526: Trying affinity for 192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.084 [INFO][6328] ipam/ipam.go 160: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-28-2" Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.091 [INFO][6328] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 
host="ip-172-31-28-2" Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.091 [INFO][6328] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" host="ip-172-31-28-2" Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.095 [INFO][6328] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17 Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.105 [INFO][6328] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" host="ip-172-31-28-2" Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.119 [INFO][6328] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.50.202/26] block=192.168.50.192/26 handle="k8s-pod-network.95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" host="ip-172-31-28-2" Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.119 [INFO][6328] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.50.202/26] handle="k8s-pod-network.95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" host="ip-172-31-28-2" Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.119 [INFO][6328] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:16:02.170006 containerd[2127]: 2026-03-14 00:16:02.119 [INFO][6328] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.50.202/26] IPv6=[] ContainerID="95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" HandleID="k8s-pod-network.95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" Workload="ip--172--31--28--2-k8s-whisker--65bfb8c945--vbpnf-eth0" Mar 14 00:16:02.175239 containerd[2127]: 2026-03-14 00:16:02.125 [INFO][6302] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" Namespace="calico-system" Pod="whisker-65bfb8c945-vbpnf" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--65bfb8c945--vbpnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-whisker--65bfb8c945--vbpnf-eth0", GenerateName:"whisker-65bfb8c945-", Namespace:"calico-system", SelfLink:"", UID:"0ae62c07-d35d-4fcc-8c21-5263c107b7ac", ResourceVersion:"1202", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 16, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65bfb8c945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"", Pod:"whisker-65bfb8c945-vbpnf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"calic60271d4cae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:16:02.175239 containerd[2127]: 2026-03-14 00:16:02.125 [INFO][6302] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.202/32] ContainerID="95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" Namespace="calico-system" Pod="whisker-65bfb8c945-vbpnf" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--65bfb8c945--vbpnf-eth0" Mar 14 00:16:02.175239 containerd[2127]: 2026-03-14 00:16:02.125 [INFO][6302] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic60271d4cae ContainerID="95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" Namespace="calico-system" Pod="whisker-65bfb8c945-vbpnf" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--65bfb8c945--vbpnf-eth0" Mar 14 00:16:02.175239 containerd[2127]: 2026-03-14 00:16:02.131 [INFO][6302] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" Namespace="calico-system" Pod="whisker-65bfb8c945-vbpnf" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--65bfb8c945--vbpnf-eth0" Mar 14 00:16:02.175239 containerd[2127]: 2026-03-14 00:16:02.131 [INFO][6302] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" Namespace="calico-system" Pod="whisker-65bfb8c945-vbpnf" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--65bfb8c945--vbpnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--2-k8s-whisker--65bfb8c945--vbpnf-eth0", GenerateName:"whisker-65bfb8c945-", Namespace:"calico-system", SelfLink:"", UID:"0ae62c07-d35d-4fcc-8c21-5263c107b7ac", ResourceVersion:"1202", Generation:0, CreationTimestamp:time.Date(2026, 
time.March, 14, 0, 16, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65bfb8c945", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-2", ContainerID:"95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17", Pod:"whisker-65bfb8c945-vbpnf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.202/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic60271d4cae", MAC:"8e:61:22:80:b7:c1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:16:02.175239 containerd[2127]: 2026-03-14 00:16:02.149 [INFO][6302] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17" Namespace="calico-system" Pod="whisker-65bfb8c945-vbpnf" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--65bfb8c945--vbpnf-eth0" Mar 14 00:16:02.301400 containerd[2127]: time="2026-03-14T00:16:02.296069037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:16:02.301400 containerd[2127]: time="2026-03-14T00:16:02.296176353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:16:02.301400 containerd[2127]: time="2026-03-14T00:16:02.296213613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:02.301400 containerd[2127]: time="2026-03-14T00:16:02.296414481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:02.363895 kubelet[3696]: I0314 00:16:02.362437 3696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ckhmv" podStartSLOduration=25.031245877 podStartE2EDuration="44.362410209s" podCreationTimestamp="2026-03-14 00:15:18 +0000 UTC" firstStartedPulling="2026-03-14 00:15:42.47668757 +0000 UTC m=+51.337956448" lastFinishedPulling="2026-03-14 00:16:01.807851914 +0000 UTC m=+70.669120780" observedRunningTime="2026-03-14 00:16:02.359809833 +0000 UTC m=+71.221078735" watchObservedRunningTime="2026-03-14 00:16:02.362410209 +0000 UTC m=+71.223679123" Mar 14 00:16:02.451421 containerd[2127]: time="2026-03-14T00:16:02.451340746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65bfb8c945-vbpnf,Uid:0ae62c07-d35d-4fcc-8c21-5263c107b7ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17\"" Mar 14 00:16:02.461555 containerd[2127]: time="2026-03-14T00:16:02.461213314Z" level=info msg="CreateContainer within sandbox \"95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 14 00:16:02.480341 containerd[2127]: time="2026-03-14T00:16:02.480186934Z" level=info msg="CreateContainer within sandbox \"95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3a57b69486f253aa79272da81f2731641192202c67ba91703bb13e6b0ce14fa5\"" 
Mar 14 00:16:02.482657 containerd[2127]: time="2026-03-14T00:16:02.482462554Z" level=info msg="StartContainer for \"3a57b69486f253aa79272da81f2731641192202c67ba91703bb13e6b0ce14fa5\"" Mar 14 00:16:02.562664 kubelet[3696]: I0314 00:16:02.559971 3696 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 14 00:16:02.562664 kubelet[3696]: I0314 00:16:02.560079 3696 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 14 00:16:02.626063 containerd[2127]: time="2026-03-14T00:16:02.625936979Z" level=info msg="StartContainer for \"3a57b69486f253aa79272da81f2731641192202c67ba91703bb13e6b0ce14fa5\" returns successfully" Mar 14 00:16:02.642388 containerd[2127]: time="2026-03-14T00:16:02.639162335Z" level=info msg="CreateContainer within sandbox \"95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 14 00:16:02.675068 containerd[2127]: time="2026-03-14T00:16:02.674901995Z" level=info msg="CreateContainer within sandbox \"95854a8391cc23084a783dfd5dc0e59a5568d00dd3d534044dfd613803065e17\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c8847734d67784fe50aaab1130f3b6f4d237eef187463426b410b07fe28f7037\"" Mar 14 00:16:02.680559 containerd[2127]: time="2026-03-14T00:16:02.680044403Z" level=info msg="StartContainer for \"c8847734d67784fe50aaab1130f3b6f4d237eef187463426b410b07fe28f7037\"" Mar 14 00:16:02.867628 containerd[2127]: time="2026-03-14T00:16:02.867108372Z" level=info msg="StartContainer for \"c8847734d67784fe50aaab1130f3b6f4d237eef187463426b410b07fe28f7037\" returns successfully" Mar 14 00:16:03.350437 kubelet[3696]: I0314 00:16:03.350207 3696 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/whisker-65bfb8c945-vbpnf" podStartSLOduration=2.350185222 podStartE2EDuration="2.350185222s" podCreationTimestamp="2026-03-14 00:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:16:03.34394599 +0000 UTC m=+72.205214868" watchObservedRunningTime="2026-03-14 00:16:03.350185222 +0000 UTC m=+72.211454100" Mar 14 00:16:03.987495 systemd-networkd[1684]: calic60271d4cae: Gained IPv6LL Mar 14 00:16:06.666500 ntpd[2086]: Listen normally on 17 calic60271d4cae [fe80::ecee:eeff:feee:eeee%16]:123 Mar 14 00:16:06.666581 ntpd[2086]: Deleting interface #10 cali74a271ef3fd, fe80::ecee:eeff:feee:eeee%7#123, interface stats: received=0, sent=0, dropped=0, active_time=18 secs Mar 14 00:16:06.667150 ntpd[2086]: 14 Mar 00:16:06 ntpd[2086]: Listen normally on 17 calic60271d4cae [fe80::ecee:eeff:feee:eeee%16]:123 Mar 14 00:16:06.667150 ntpd[2086]: 14 Mar 00:16:06 ntpd[2086]: Deleting interface #10 cali74a271ef3fd, fe80::ecee:eeff:feee:eeee%7#123, interface stats: received=0, sent=0, dropped=0, active_time=18 secs Mar 14 00:16:06.829727 systemd[1]: Started sshd@10-172.31.28.2:22-68.220.241.50:43440.service - OpenSSH per-connection server daemon (68.220.241.50:43440). Mar 14 00:16:07.405711 sshd[6517]: Accepted publickey for core from 68.220.241.50 port 43440 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:16:07.409478 sshd[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:07.419169 systemd-logind[2106]: New session 11 of user core. Mar 14 00:16:07.424755 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 14 00:16:07.998630 sshd[6517]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:08.006114 systemd[1]: sshd@10-172.31.28.2:22-68.220.241.50:43440.service: Deactivated successfully. Mar 14 00:16:08.015927 systemd[1]: session-11.scope: Deactivated successfully. 
Mar 14 00:16:08.017935 systemd-logind[2106]: Session 11 logged out. Waiting for processes to exit. Mar 14 00:16:08.021160 systemd-logind[2106]: Removed session 11. Mar 14 00:16:08.080995 systemd[1]: Started sshd@11-172.31.28.2:22-68.220.241.50:43450.service - OpenSSH per-connection server daemon (68.220.241.50:43450). Mar 14 00:16:08.591850 sshd[6554]: Accepted publickey for core from 68.220.241.50 port 43450 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:16:08.596002 sshd[6554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:08.607378 systemd-logind[2106]: New session 12 of user core. Mar 14 00:16:08.616561 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 14 00:16:09.146251 sshd[6554]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:09.153180 systemd[1]: sshd@11-172.31.28.2:22-68.220.241.50:43450.service: Deactivated successfully. Mar 14 00:16:09.158991 systemd[1]: session-12.scope: Deactivated successfully. Mar 14 00:16:09.160390 systemd-logind[2106]: Session 12 logged out. Waiting for processes to exit. Mar 14 00:16:09.163358 systemd-logind[2106]: Removed session 12. Mar 14 00:16:09.244940 systemd[1]: Started sshd@12-172.31.28.2:22-68.220.241.50:43460.service - OpenSSH per-connection server daemon (68.220.241.50:43460). Mar 14 00:16:09.810007 sshd[6566]: Accepted publickey for core from 68.220.241.50 port 43460 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:16:09.815035 sshd[6566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:09.832354 systemd-logind[2106]: New session 13 of user core. Mar 14 00:16:09.839693 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 14 00:16:10.349008 sshd[6566]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:10.357512 systemd[1]: sshd@12-172.31.28.2:22-68.220.241.50:43460.service: Deactivated successfully. 
Mar 14 00:16:10.363386 systemd-logind[2106]: Session 13 logged out. Waiting for processes to exit. Mar 14 00:16:10.364115 systemd[1]: session-13.scope: Deactivated successfully. Mar 14 00:16:10.368096 systemd-logind[2106]: Removed session 13. Mar 14 00:16:15.427774 systemd[1]: Started sshd@13-172.31.28.2:22-68.220.241.50:46210.service - OpenSSH per-connection server daemon (68.220.241.50:46210). Mar 14 00:16:15.950787 sshd[6606]: Accepted publickey for core from 68.220.241.50 port 46210 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:16:15.953557 sshd[6606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:15.964208 systemd-logind[2106]: New session 14 of user core. Mar 14 00:16:15.967809 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 14 00:16:16.462702 sshd[6606]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:16.470791 systemd[1]: sshd@13-172.31.28.2:22-68.220.241.50:46210.service: Deactivated successfully. Mar 14 00:16:16.482588 systemd-logind[2106]: Session 14 logged out. Waiting for processes to exit. Mar 14 00:16:16.484722 systemd[1]: session-14.scope: Deactivated successfully. Mar 14 00:16:16.490149 systemd-logind[2106]: Removed session 14. Mar 14 00:16:16.549794 systemd[1]: Started sshd@14-172.31.28.2:22-68.220.241.50:46224.service - OpenSSH per-connection server daemon (68.220.241.50:46224). Mar 14 00:16:17.064206 sshd[6620]: Accepted publickey for core from 68.220.241.50 port 46224 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:16:17.067201 sshd[6620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:17.080393 systemd-logind[2106]: New session 15 of user core. Mar 14 00:16:17.086817 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 14 00:16:19.267350 kubelet[3696]: I0314 00:16:19.267112 3696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:16:19.412676 kubelet[3696]: I0314 00:16:19.412611 3696 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:16:19.415101 containerd[2127]: time="2026-03-14T00:16:19.415045994Z" level=info msg="StopContainer for \"3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703\" with timeout 30 (s)" Mar 14 00:16:19.419104 containerd[2127]: time="2026-03-14T00:16:19.418961618Z" level=info msg="Stop container \"3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703\" with signal terminated" Mar 14 00:16:19.712551 containerd[2127]: time="2026-03-14T00:16:19.712192995Z" level=info msg="shim disconnected" id=3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703 namespace=k8s.io Mar 14 00:16:19.714167 containerd[2127]: time="2026-03-14T00:16:19.713927139Z" level=warning msg="cleaning up after shim disconnected" id=3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703 namespace=k8s.io Mar 14 00:16:19.714167 containerd[2127]: time="2026-03-14T00:16:19.713980791Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:16:19.728673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703-rootfs.mount: Deactivated successfully. 
Mar 14 00:16:19.770878 containerd[2127]: time="2026-03-14T00:16:19.770811940Z" level=info msg="StopContainer for \"3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703\" returns successfully" Mar 14 00:16:19.771749 containerd[2127]: time="2026-03-14T00:16:19.771651976Z" level=info msg="StopPodSandbox for \"58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7\"" Mar 14 00:16:19.771847 containerd[2127]: time="2026-03-14T00:16:19.771756484Z" level=info msg="Container to stop \"3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 14 00:16:19.779723 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7-shm.mount: Deactivated successfully. Mar 14 00:16:19.839036 containerd[2127]: time="2026-03-14T00:16:19.838956676Z" level=info msg="shim disconnected" id=58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7 namespace=k8s.io Mar 14 00:16:19.842350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7-rootfs.mount: Deactivated successfully. 
Mar 14 00:16:19.842590 containerd[2127]: time="2026-03-14T00:16:19.842331616Z" level=warning msg="cleaning up after shim disconnected" id=58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7 namespace=k8s.io
Mar 14 00:16:19.842722 containerd[2127]: time="2026-03-14T00:16:19.842691052Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:16:19.965247 systemd-networkd[1684]: calie2060b10e2a: Link DOWN
Mar 14 00:16:19.965287 systemd-networkd[1684]: calie2060b10e2a: Lost carrier
Mar 14 00:16:20.264022 containerd[2127]: 2026-03-14 00:16:19.959 [INFO][6730] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7"
Mar 14 00:16:20.264022 containerd[2127]: 2026-03-14 00:16:19.959 [INFO][6730] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" iface="eth0" netns="/var/run/netns/cni-54b9727c-8a26-7f0d-1b52-7751baeade38"
Mar 14 00:16:20.264022 containerd[2127]: 2026-03-14 00:16:19.960 [INFO][6730] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" iface="eth0" netns="/var/run/netns/cni-54b9727c-8a26-7f0d-1b52-7751baeade38"
Mar 14 00:16:20.264022 containerd[2127]: 2026-03-14 00:16:19.986 [INFO][6730] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" after=26.110668ms iface="eth0" netns="/var/run/netns/cni-54b9727c-8a26-7f0d-1b52-7751baeade38"
Mar 14 00:16:20.264022 containerd[2127]: 2026-03-14 00:16:19.987 [INFO][6730] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7"
Mar 14 00:16:20.264022 containerd[2127]: 2026-03-14 00:16:19.987 [INFO][6730] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7"
Mar 14 00:16:20.264022 containerd[2127]: 2026-03-14 00:16:20.076 [INFO][6742] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" HandleID="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0"
Mar 14 00:16:20.264022 containerd[2127]: 2026-03-14 00:16:20.077 [INFO][6742] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:16:20.264022 containerd[2127]: 2026-03-14 00:16:20.077 [INFO][6742] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:16:20.264022 containerd[2127]: 2026-03-14 00:16:20.251 [INFO][6742] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" HandleID="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0"
Mar 14 00:16:20.264022 containerd[2127]: 2026-03-14 00:16:20.251 [INFO][6742] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" HandleID="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0"
Mar 14 00:16:20.264022 containerd[2127]: 2026-03-14 00:16:20.255 [INFO][6742] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:16:20.264022 containerd[2127]: 2026-03-14 00:16:20.259 [INFO][6730] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7"
Mar 14 00:16:20.268990 containerd[2127]: time="2026-03-14T00:16:20.268409630Z" level=info msg="TearDown network for sandbox \"58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7\" successfully"
Mar 14 00:16:20.268990 containerd[2127]: time="2026-03-14T00:16:20.268461770Z" level=info msg="StopPodSandbox for \"58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7\" returns successfully"
Mar 14 00:16:20.274404 systemd[1]: run-netns-cni\x2d54b9727c\x2d8a26\x2d7f0d\x2d1b52\x2d7751baeade38.mount: Deactivated successfully.
Mar 14 00:16:20.374293 kubelet[3696]: I0314 00:16:20.374204 3696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a25a53f1-9260-4c87-baf3-12d319ebd2af-calico-apiserver-certs\") pod \"a25a53f1-9260-4c87-baf3-12d319ebd2af\" (UID: \"a25a53f1-9260-4c87-baf3-12d319ebd2af\") "
Mar 14 00:16:20.374937 kubelet[3696]: I0314 00:16:20.374351 3696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96dkb\" (UniqueName: \"kubernetes.io/projected/a25a53f1-9260-4c87-baf3-12d319ebd2af-kube-api-access-96dkb\") pod \"a25a53f1-9260-4c87-baf3-12d319ebd2af\" (UID: \"a25a53f1-9260-4c87-baf3-12d319ebd2af\") "
Mar 14 00:16:20.384725 kubelet[3696]: I0314 00:16:20.382199 3696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25a53f1-9260-4c87-baf3-12d319ebd2af-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "a25a53f1-9260-4c87-baf3-12d319ebd2af" (UID: "a25a53f1-9260-4c87-baf3-12d319ebd2af"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 14 00:16:20.389715 kubelet[3696]: I0314 00:16:20.389632 3696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25a53f1-9260-4c87-baf3-12d319ebd2af-kube-api-access-96dkb" (OuterVolumeSpecName: "kube-api-access-96dkb") pod "a25a53f1-9260-4c87-baf3-12d319ebd2af" (UID: "a25a53f1-9260-4c87-baf3-12d319ebd2af"). InnerVolumeSpecName "kube-api-access-96dkb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:16:20.395187 systemd[1]: var-lib-kubelet-pods-a25a53f1\x2d9260\x2d4c87\x2dbaf3\x2d12d319ebd2af-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Mar 14 00:16:20.396513 kubelet[3696]: I0314 00:16:20.396335 3696 scope.go:117] "RemoveContainer" containerID="3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703"
Mar 14 00:16:20.402295 containerd[2127]: time="2026-03-14T00:16:20.401399511Z" level=info msg="RemoveContainer for \"3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703\""
Mar 14 00:16:20.419058 containerd[2127]: time="2026-03-14T00:16:20.419005239Z" level=info msg="RemoveContainer for \"3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703\" returns successfully"
Mar 14 00:16:20.420539 kubelet[3696]: I0314 00:16:20.420024 3696 scope.go:117] "RemoveContainer" containerID="3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703"
Mar 14 00:16:20.420711 containerd[2127]: time="2026-03-14T00:16:20.420441363Z" level=error msg="ContainerStatus for \"3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703\": not found"
Mar 14 00:16:20.421300 kubelet[3696]: E0314 00:16:20.421155 3696 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703\": not found" containerID="3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703"
Mar 14 00:16:20.421300 kubelet[3696]: I0314 00:16:20.421210 3696 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703"} err="failed to get container status \"3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703\": rpc error: code = NotFound desc = an error occurred when try to find container \"3feb6a2b93d7f4571999319bce10957dd1e266bd83302395593df1c905960703\": not found"
Mar 14 00:16:20.476451 kubelet[3696]: I0314 00:16:20.476372 3696 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a25a53f1-9260-4c87-baf3-12d319ebd2af-calico-apiserver-certs\") on node \"ip-172-31-28-2\" DevicePath \"\""
Mar 14 00:16:20.476451 kubelet[3696]: I0314 00:16:20.476417 3696 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-96dkb\" (UniqueName: \"kubernetes.io/projected/a25a53f1-9260-4c87-baf3-12d319ebd2af-kube-api-access-96dkb\") on node \"ip-172-31-28-2\" DevicePath \"\""
Mar 14 00:16:20.530082 sshd[6620]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:20.538933 systemd[1]: sshd@14-172.31.28.2:22-68.220.241.50:46224.service: Deactivated successfully.
Mar 14 00:16:20.546167 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:16:20.548066 systemd-logind[2106]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:16:20.550874 systemd-logind[2106]: Removed session 15.
Mar 14 00:16:20.612733 systemd[1]: Started sshd@15-172.31.28.2:22-68.220.241.50:46230.service - OpenSSH per-connection server daemon (68.220.241.50:46230).
Mar 14 00:16:20.716361 systemd[1]: var-lib-kubelet-pods-a25a53f1\x2d9260\x2d4c87\x2dbaf3\x2d12d319ebd2af-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d96dkb.mount: Deactivated successfully.
Mar 14 00:16:21.145329 sshd[6766]: Accepted publickey for core from 68.220.241.50 port 46230 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:21.148867 sshd[6766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:21.158616 systemd-logind[2106]: New session 16 of user core.
Mar 14 00:16:21.166885 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:16:21.377297 kubelet[3696]: I0314 00:16:21.376762 3696 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25a53f1-9260-4c87-baf3-12d319ebd2af" path="/var/lib/kubelet/pods/a25a53f1-9260-4c87-baf3-12d319ebd2af/volumes"
Mar 14 00:16:22.666419 ntpd[2086]: Deleting interface #8 calie2060b10e2a, fe80::ecee:eeff:feee:eeee%5#123, interface stats: received=0, sent=0, dropped=0, active_time=34 secs
Mar 14 00:16:22.667087 ntpd[2086]: 14 Mar 00:16:22 ntpd[2086]: Deleting interface #8 calie2060b10e2a, fe80::ecee:eeff:feee:eeee%5#123, interface stats: received=0, sent=0, dropped=0, active_time=34 secs
Mar 14 00:16:22.798576 sshd[6766]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:22.815083 systemd[1]: sshd@15-172.31.28.2:22-68.220.241.50:46230.service: Deactivated successfully.
Mar 14 00:16:22.826104 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:16:22.828918 systemd-logind[2106]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:16:22.832170 systemd-logind[2106]: Removed session 16.
Mar 14 00:16:22.897936 systemd[1]: Started sshd@16-172.31.28.2:22-68.220.241.50:50158.service - OpenSSH per-connection server daemon (68.220.241.50:50158).
Mar 14 00:16:23.452003 sshd[6795]: Accepted publickey for core from 68.220.241.50 port 50158 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:23.454809 sshd[6795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:23.463393 systemd-logind[2106]: New session 17 of user core.
Mar 14 00:16:23.471859 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:16:24.205001 sshd[6795]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:24.212889 systemd-logind[2106]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:16:24.214760 systemd[1]: sshd@16-172.31.28.2:22-68.220.241.50:50158.service: Deactivated successfully.
Mar 14 00:16:24.221698 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:16:24.224867 systemd-logind[2106]: Removed session 17.
Mar 14 00:16:24.285254 systemd[1]: Started sshd@17-172.31.28.2:22-68.220.241.50:50172.service - OpenSSH per-connection server daemon (68.220.241.50:50172).
Mar 14 00:16:24.797042 sshd[6809]: Accepted publickey for core from 68.220.241.50 port 50172 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:24.799778 sshd[6809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:24.808691 systemd-logind[2106]: New session 18 of user core.
Mar 14 00:16:24.814890 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:16:25.299255 sshd[6809]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:25.310568 systemd-logind[2106]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:16:25.314467 systemd[1]: sshd@17-172.31.28.2:22-68.220.241.50:50172.service: Deactivated successfully.
Mar 14 00:16:25.321212 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:16:25.326360 systemd-logind[2106]: Removed session 18.
Mar 14 00:16:30.404602 systemd[1]: Started sshd@18-172.31.28.2:22-68.220.241.50:50184.service - OpenSSH per-connection server daemon (68.220.241.50:50184).
Mar 14 00:16:30.978699 sshd[6904]: Accepted publickey for core from 68.220.241.50 port 50184 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:30.981490 sshd[6904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:30.990609 systemd-logind[2106]: New session 19 of user core.
Mar 14 00:16:30.994004 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:16:31.494405 sshd[6904]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:31.504337 systemd[1]: sshd@18-172.31.28.2:22-68.220.241.50:50184.service: Deactivated successfully.
Mar 14 00:16:31.511441 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:16:31.511529 systemd-logind[2106]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:16:31.518682 systemd-logind[2106]: Removed session 19.
Mar 14 00:16:36.575783 systemd[1]: Started sshd@19-172.31.28.2:22-68.220.241.50:41734.service - OpenSSH per-connection server daemon (68.220.241.50:41734).
Mar 14 00:16:37.092875 sshd[6920]: Accepted publickey for core from 68.220.241.50 port 41734 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:37.095945 sshd[6920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:37.105210 systemd-logind[2106]: New session 20 of user core.
Mar 14 00:16:37.113850 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:16:37.569666 sshd[6920]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:37.577172 systemd[1]: sshd@19-172.31.28.2:22-68.220.241.50:41734.service: Deactivated successfully.
Mar 14 00:16:37.585395 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:16:37.590003 systemd-logind[2106]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:16:37.592199 systemd-logind[2106]: Removed session 20.
Mar 14 00:16:42.657944 systemd[1]: Started sshd@20-172.31.28.2:22-68.220.241.50:60154.service - OpenSSH per-connection server daemon (68.220.241.50:60154).
Mar 14 00:16:43.153934 sshd[6960]: Accepted publickey for core from 68.220.241.50 port 60154 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:43.156837 sshd[6960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:43.166136 systemd-logind[2106]: New session 21 of user core.
Mar 14 00:16:43.174401 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 00:16:43.624590 sshd[6960]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:43.633918 systemd-logind[2106]: Session 21 logged out. Waiting for processes to exit.
Mar 14 00:16:43.635191 systemd[1]: sshd@20-172.31.28.2:22-68.220.241.50:60154.service: Deactivated successfully.
Mar 14 00:16:43.643726 systemd[1]: session-21.scope: Deactivated successfully.
Mar 14 00:16:43.647575 systemd-logind[2106]: Removed session 21.
Mar 14 00:16:48.715915 systemd[1]: Started sshd@21-172.31.28.2:22-68.220.241.50:60160.service - OpenSSH per-connection server daemon (68.220.241.50:60160).
Mar 14 00:16:49.232322 sshd[6996]: Accepted publickey for core from 68.220.241.50 port 60160 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:49.234137 sshd[6996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:49.243742 systemd-logind[2106]: New session 22 of user core.
Mar 14 00:16:49.249934 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 00:16:49.710574 sshd[6996]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:49.718771 systemd[1]: sshd@21-172.31.28.2:22-68.220.241.50:60160.service: Deactivated successfully.
Mar 14 00:16:49.724648 systemd-logind[2106]: Session 22 logged out. Waiting for processes to exit.
Mar 14 00:16:49.725803 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 00:16:49.728714 systemd-logind[2106]: Removed session 22.
Mar 14 00:16:50.974846 containerd[2127]: time="2026-03-14T00:16:50.974454131Z" level=info msg="StopContainer for \"4cc1a2d05595428150a2de006790f509de7e6d9017deb2ca713a151eb5ab50dc\" with timeout 30 (s)"
Mar 14 00:16:50.978038 containerd[2127]: time="2026-03-14T00:16:50.977961647Z" level=info msg="Stop container \"4cc1a2d05595428150a2de006790f509de7e6d9017deb2ca713a151eb5ab50dc\" with signal terminated"
Mar 14 00:16:51.119583 containerd[2127]: time="2026-03-14T00:16:51.119247739Z" level=info msg="shim disconnected" id=4cc1a2d05595428150a2de006790f509de7e6d9017deb2ca713a151eb5ab50dc namespace=k8s.io
Mar 14 00:16:51.119583 containerd[2127]: time="2026-03-14T00:16:51.119439355Z" level=warning msg="cleaning up after shim disconnected" id=4cc1a2d05595428150a2de006790f509de7e6d9017deb2ca713a151eb5ab50dc namespace=k8s.io
Mar 14 00:16:51.119583 containerd[2127]: time="2026-03-14T00:16:51.119465047Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:16:51.132780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cc1a2d05595428150a2de006790f509de7e6d9017deb2ca713a151eb5ab50dc-rootfs.mount: Deactivated successfully.
Mar 14 00:16:51.149281 containerd[2127]: time="2026-03-14T00:16:51.149134664Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:16:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:16:51.172634 containerd[2127]: time="2026-03-14T00:16:51.172570748Z" level=info msg="StopContainer for \"4cc1a2d05595428150a2de006790f509de7e6d9017deb2ca713a151eb5ab50dc\" returns successfully"
Mar 14 00:16:51.173779 containerd[2127]: time="2026-03-14T00:16:51.173254808Z" level=info msg="StopPodSandbox for \"ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65\""
Mar 14 00:16:51.173779 containerd[2127]: time="2026-03-14T00:16:51.173577800Z" level=info msg="Container to stop \"4cc1a2d05595428150a2de006790f509de7e6d9017deb2ca713a151eb5ab50dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:16:51.183672 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65-shm.mount: Deactivated successfully.
Mar 14 00:16:51.234185 containerd[2127]: time="2026-03-14T00:16:51.233457896Z" level=info msg="shim disconnected" id=ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65 namespace=k8s.io
Mar 14 00:16:51.234185 containerd[2127]: time="2026-03-14T00:16:51.233561084Z" level=warning msg="cleaning up after shim disconnected" id=ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65 namespace=k8s.io
Mar 14 00:16:51.234954 containerd[2127]: time="2026-03-14T00:16:51.233583308Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:16:51.245684 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65-rootfs.mount: Deactivated successfully.
Mar 14 00:16:51.378904 systemd-networkd[1684]: caliae6bccba74f: Link DOWN
Mar 14 00:16:51.378917 systemd-networkd[1684]: caliae6bccba74f: Lost carrier
Mar 14 00:16:51.458114 containerd[2127]: time="2026-03-14T00:16:51.457596621Z" level=info msg="StopPodSandbox for \"58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7\""
Mar 14 00:16:51.542675 kubelet[3696]: I0314 00:16:51.540373 3696 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65"
Mar 14 00:16:51.690493 containerd[2127]: 2026-03-14 00:16:51.372 [INFO][7090] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65"
Mar 14 00:16:51.690493 containerd[2127]: 2026-03-14 00:16:51.372 [INFO][7090] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" iface="eth0" netns="/var/run/netns/cni-552f1a62-5242-5ee1-14b8-359456dfd03e"
Mar 14 00:16:51.690493 containerd[2127]: 2026-03-14 00:16:51.373 [INFO][7090] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" iface="eth0" netns="/var/run/netns/cni-552f1a62-5242-5ee1-14b8-359456dfd03e"
Mar 14 00:16:51.690493 containerd[2127]: 2026-03-14 00:16:51.396 [INFO][7090] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" after=23.70918ms iface="eth0" netns="/var/run/netns/cni-552f1a62-5242-5ee1-14b8-359456dfd03e"
Mar 14 00:16:51.690493 containerd[2127]: 2026-03-14 00:16:51.396 [INFO][7090] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65"
Mar 14 00:16:51.690493 containerd[2127]: 2026-03-14 00:16:51.396 [INFO][7090] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65"
Mar 14 00:16:51.690493 containerd[2127]: 2026-03-14 00:16:51.485 [INFO][7104] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" HandleID="k8s-pod-network.ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-eth0"
Mar 14 00:16:51.690493 containerd[2127]: 2026-03-14 00:16:51.485 [INFO][7104] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:16:51.690493 containerd[2127]: 2026-03-14 00:16:51.485 [INFO][7104] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:16:51.690493 containerd[2127]: 2026-03-14 00:16:51.650 [INFO][7104] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" HandleID="k8s-pod-network.ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-eth0"
Mar 14 00:16:51.690493 containerd[2127]: 2026-03-14 00:16:51.650 [INFO][7104] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" HandleID="k8s-pod-network.ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--dhvpk-eth0"
Mar 14 00:16:51.690493 containerd[2127]: 2026-03-14 00:16:51.657 [INFO][7104] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:16:51.690493 containerd[2127]: 2026-03-14 00:16:51.672 [INFO][7090] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65"
Mar 14 00:16:51.697547 containerd[2127]: time="2026-03-14T00:16:51.697453978Z" level=info msg="TearDown network for sandbox \"ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65\" successfully"
Mar 14 00:16:51.697844 containerd[2127]: time="2026-03-14T00:16:51.697719406Z" level=info msg="StopPodSandbox for \"ee6fcaa8efd0bfcd094eab1034e2d67021af5a2db4f1d6510b7971be15f43f65\" returns successfully"
Mar 14 00:16:51.718277 systemd[1]: run-netns-cni\x2d552f1a62\x2d5242\x2d5ee1\x2d14b8\x2d359456dfd03e.mount: Deactivated successfully.
Mar 14 00:16:51.817519 containerd[2127]: 2026-03-14 00:16:51.631 [WARNING][7124] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0"
Mar 14 00:16:51.817519 containerd[2127]: 2026-03-14 00:16:51.634 [INFO][7124] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7"
Mar 14 00:16:51.817519 containerd[2127]: 2026-03-14 00:16:51.634 [INFO][7124] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" iface="eth0" netns=""
Mar 14 00:16:51.817519 containerd[2127]: 2026-03-14 00:16:51.634 [INFO][7124] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7"
Mar 14 00:16:51.817519 containerd[2127]: 2026-03-14 00:16:51.634 [INFO][7124] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7"
Mar 14 00:16:51.817519 containerd[2127]: 2026-03-14 00:16:51.784 [INFO][7135] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" HandleID="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0"
Mar 14 00:16:51.817519 containerd[2127]: 2026-03-14 00:16:51.785 [INFO][7135] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:16:51.817519 containerd[2127]: 2026-03-14 00:16:51.786 [INFO][7135] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:16:51.817519 containerd[2127]: 2026-03-14 00:16:51.802 [WARNING][7135] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" HandleID="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0"
Mar 14 00:16:51.817519 containerd[2127]: 2026-03-14 00:16:51.802 [INFO][7135] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" HandleID="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0"
Mar 14 00:16:51.817519 containerd[2127]: 2026-03-14 00:16:51.806 [INFO][7135] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:16:51.817519 containerd[2127]: 2026-03-14 00:16:51.812 [INFO][7124] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7"
Mar 14 00:16:51.818357 containerd[2127]: time="2026-03-14T00:16:51.817566395Z" level=info msg="TearDown network for sandbox \"58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7\" successfully"
Mar 14 00:16:51.818357 containerd[2127]: time="2026-03-14T00:16:51.817626443Z" level=info msg="StopPodSandbox for \"58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7\" returns successfully"
Mar 14 00:16:51.819514 containerd[2127]: time="2026-03-14T00:16:51.819230159Z" level=info msg="RemovePodSandbox for \"58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7\""
Mar 14 00:16:51.819514 containerd[2127]: time="2026-03-14T00:16:51.819357455Z" level=info msg="Forcibly stopping sandbox \"58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7\""
Mar 14 00:16:51.838096 kubelet[3696]: I0314 00:16:51.838028 3696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shqpd\" (UniqueName: \"kubernetes.io/projected/5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251-kube-api-access-shqpd\") pod \"5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251\" (UID: \"5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251\") "
Mar 14 00:16:51.838686 kubelet[3696]: I0314 00:16:51.838420 3696 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251-calico-apiserver-certs\") pod \"5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251\" (UID: \"5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251\") "
Mar 14 00:16:51.852962 kubelet[3696]: I0314 00:16:51.852882 3696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251-kube-api-access-shqpd" (OuterVolumeSpecName: "kube-api-access-shqpd") pod "5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251" (UID: "5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251"). InnerVolumeSpecName "kube-api-access-shqpd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:16:51.857957 kubelet[3696]: I0314 00:16:51.857876 3696 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251" (UID: "5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 14 00:16:51.872062 systemd[1]: var-lib-kubelet-pods-5ad4e2d1\x2d2c97\x2d47c9\x2db1a8\x2dc77ac0d2d251-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dshqpd.mount: Deactivated successfully.
Mar 14 00:16:51.945541 kubelet[3696]: I0314 00:16:51.944651 3696 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251-calico-apiserver-certs\") on node \"ip-172-31-28-2\" DevicePath \"\""
Mar 14 00:16:51.945541 kubelet[3696]: I0314 00:16:51.944696 3696 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-shqpd\" (UniqueName: \"kubernetes.io/projected/5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251-kube-api-access-shqpd\") on node \"ip-172-31-28-2\" DevicePath \"\""
Mar 14 00:16:52.085628 containerd[2127]: 2026-03-14 00:16:51.945 [WARNING][7150] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" WorkloadEndpoint="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0"
Mar 14 00:16:52.085628 containerd[2127]: 2026-03-14 00:16:51.945 [INFO][7150] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7"
Mar 14 00:16:52.085628 containerd[2127]: 2026-03-14 00:16:51.945 [INFO][7150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" iface="eth0" netns=""
Mar 14 00:16:52.085628 containerd[2127]: 2026-03-14 00:16:51.945 [INFO][7150] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7"
Mar 14 00:16:52.085628 containerd[2127]: 2026-03-14 00:16:51.945 [INFO][7150] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7"
Mar 14 00:16:52.085628 containerd[2127]: 2026-03-14 00:16:52.034 [INFO][7160] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" HandleID="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0"
Mar 14 00:16:52.085628 containerd[2127]: 2026-03-14 00:16:52.034 [INFO][7160] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:16:52.085628 containerd[2127]: 2026-03-14 00:16:52.034 [INFO][7160] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:16:52.085628 containerd[2127]: 2026-03-14 00:16:52.058 [WARNING][7160] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" HandleID="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0"
Mar 14 00:16:52.085628 containerd[2127]: 2026-03-14 00:16:52.060 [INFO][7160] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" HandleID="k8s-pod-network.58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7" Workload="ip--172--31--28--2-k8s-calico--apiserver--6579f7776--4xv6h-eth0"
Mar 14 00:16:52.085628 containerd[2127]: 2026-03-14 00:16:52.073 [INFO][7160] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:16:52.085628 containerd[2127]: 2026-03-14 00:16:52.079 [INFO][7150] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7"
Mar 14 00:16:52.085628 containerd[2127]: time="2026-03-14T00:16:52.084460892Z" level=info msg="TearDown network for sandbox \"58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7\" successfully"
Mar 14 00:16:52.097377 containerd[2127]: time="2026-03-14T00:16:52.095995808Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:16:52.097377 containerd[2127]: time="2026-03-14T00:16:52.096119528Z" level=info msg="RemovePodSandbox \"58e8fa76f094ced5761b585b4ed3c9ba5fccd760b1eb71f4b01855d2cb21d3c7\" returns successfully"
Mar 14 00:16:52.098994 containerd[2127]: time="2026-03-14T00:16:52.098152904Z" level=info msg="StopPodSandbox for \"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c\""
Mar 14 00:16:52.137436 systemd[1]: var-lib-kubelet-pods-5ad4e2d1\x2d2c97\x2d47c9\x2db1a8\x2dc77ac0d2d251-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Mar 14 00:16:52.422500 containerd[2127]: 2026-03-14 00:16:52.240 [WARNING][7176] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0"
Mar 14 00:16:52.422500 containerd[2127]: 2026-03-14 00:16:52.242 [INFO][7176] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c"
Mar 14 00:16:52.422500 containerd[2127]: 2026-03-14 00:16:52.243 [INFO][7176] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" iface="eth0" netns=""
Mar 14 00:16:52.422500 containerd[2127]: 2026-03-14 00:16:52.243 [INFO][7176] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c"
Mar 14 00:16:52.422500 containerd[2127]: 2026-03-14 00:16:52.244 [INFO][7176] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c"
Mar 14 00:16:52.422500 containerd[2127]: 2026-03-14 00:16:52.365 [INFO][7184] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" HandleID="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Workload="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0"
Mar 14 00:16:52.422500 containerd[2127]: 2026-03-14 00:16:52.365 [INFO][7184] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:16:52.422500 containerd[2127]: 2026-03-14 00:16:52.365 [INFO][7184] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:16:52.422500 containerd[2127]: 2026-03-14 00:16:52.399 [WARNING][7184] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" HandleID="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Workload="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0"
Mar 14 00:16:52.422500 containerd[2127]: 2026-03-14 00:16:52.401 [INFO][7184] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" HandleID="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Workload="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0"
Mar 14 00:16:52.422500 containerd[2127]: 2026-03-14 00:16:52.406 [INFO][7184] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:16:52.422500 containerd[2127]: 2026-03-14 00:16:52.417 [INFO][7176] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c"
Mar 14 00:16:52.423162 containerd[2127]: time="2026-03-14T00:16:52.422501290Z" level=info msg="TearDown network for sandbox \"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c\" successfully"
Mar 14 00:16:52.423162 containerd[2127]: time="2026-03-14T00:16:52.422557162Z" level=info msg="StopPodSandbox for \"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c\" returns successfully"
Mar 14 00:16:52.426857 containerd[2127]: time="2026-03-14T00:16:52.423352198Z" level=info msg="RemovePodSandbox for \"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c\""
Mar 14 00:16:52.426857 containerd[2127]: time="2026-03-14T00:16:52.423414202Z" level=info msg="Forcibly stopping sandbox \"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c\""
Mar 14 00:16:52.727182 containerd[2127]: 2026-03-14 00:16:52.640 [WARNING][7199] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up
ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" WorkloadEndpoint="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" Mar 14 00:16:52.727182 containerd[2127]: 2026-03-14 00:16:52.641 [INFO][7199] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Mar 14 00:16:52.727182 containerd[2127]: 2026-03-14 00:16:52.641 [INFO][7199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" iface="eth0" netns="" Mar 14 00:16:52.727182 containerd[2127]: 2026-03-14 00:16:52.641 [INFO][7199] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Mar 14 00:16:52.727182 containerd[2127]: 2026-03-14 00:16:52.641 [INFO][7199] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Mar 14 00:16:52.727182 containerd[2127]: 2026-03-14 00:16:52.705 [INFO][7206] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" HandleID="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Workload="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" Mar 14 00:16:52.727182 containerd[2127]: 2026-03-14 00:16:52.705 [INFO][7206] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:16:52.727182 containerd[2127]: 2026-03-14 00:16:52.705 [INFO][7206] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:16:52.727182 containerd[2127]: 2026-03-14 00:16:52.718 [WARNING][7206] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" HandleID="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Workload="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" Mar 14 00:16:52.727182 containerd[2127]: 2026-03-14 00:16:52.718 [INFO][7206] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" HandleID="k8s-pod-network.e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Workload="ip--172--31--28--2-k8s-whisker--75cf64fb64--gfxc8-eth0" Mar 14 00:16:52.727182 containerd[2127]: 2026-03-14 00:16:52.721 [INFO][7206] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:16:52.727182 containerd[2127]: 2026-03-14 00:16:52.723 [INFO][7199] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c" Mar 14 00:16:52.727182 containerd[2127]: time="2026-03-14T00:16:52.727141379Z" level=info msg="TearDown network for sandbox \"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c\" successfully" Mar 14 00:16:52.736204 containerd[2127]: time="2026-03-14T00:16:52.736099019Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:16:52.736406 containerd[2127]: time="2026-03-14T00:16:52.736345463Z" level=info msg="RemovePodSandbox \"e9040da6c397fb77f892848f71576e3976a97966327fe0d3612057443f090f8c\" returns successfully" Mar 14 00:16:53.377313 kubelet[3696]: I0314 00:16:53.376951 3696 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251" path="/var/lib/kubelet/pods/5ad4e2d1-2c97-47c9-b1a8-c77ac0d2d251/volumes" Mar 14 00:16:53.666386 ntpd[2086]: Deleting interface #12 caliae6bccba74f, fe80::ecee:eeff:feee:eeee%9#123, interface stats: received=0, sent=0, dropped=0, active_time=65 secs Mar 14 00:16:53.667213 ntpd[2086]: 14 Mar 00:16:53 ntpd[2086]: Deleting interface #12 caliae6bccba74f, fe80::ecee:eeff:feee:eeee%9#123, interface stats: received=0, sent=0, dropped=0, active_time=65 secs Mar 14 00:16:54.794925 systemd[1]: Started sshd@22-172.31.28.2:22-68.220.241.50:43822.service - OpenSSH per-connection server daemon (68.220.241.50:43822). Mar 14 00:16:55.324314 sshd[7213]: Accepted publickey for core from 68.220.241.50 port 43822 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:16:55.328836 sshd[7213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:55.338617 systemd-logind[2106]: New session 23 of user core. Mar 14 00:16:55.345917 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 14 00:16:55.793916 sshd[7213]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:55.803156 systemd[1]: sshd@22-172.31.28.2:22-68.220.241.50:43822.service: Deactivated successfully. Mar 14 00:16:55.809930 systemd[1]: session-23.scope: Deactivated successfully. Mar 14 00:16:55.810158 systemd-logind[2106]: Session 23 logged out. Waiting for processes to exit. Mar 14 00:16:55.814117 systemd-logind[2106]: Removed session 23. 
Mar 14 00:17:10.318798 containerd[2127]: time="2026-03-14T00:17:10.318708087Z" level=info msg="shim disconnected" id=6d948275a4fd7826842e06370b588cf5d5536693f3ba0cd6c49a9df02e26475e namespace=k8s.io Mar 14 00:17:10.318798 containerd[2127]: time="2026-03-14T00:17:10.318793155Z" level=warning msg="cleaning up after shim disconnected" id=6d948275a4fd7826842e06370b588cf5d5536693f3ba0cd6c49a9df02e26475e namespace=k8s.io Mar 14 00:17:10.319907 containerd[2127]: time="2026-03-14T00:17:10.318815307Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:17:10.326458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d948275a4fd7826842e06370b588cf5d5536693f3ba0cd6c49a9df02e26475e-rootfs.mount: Deactivated successfully. Mar 14 00:17:10.601523 kubelet[3696]: I0314 00:17:10.600942 3696 scope.go:117] "RemoveContainer" containerID="6d948275a4fd7826842e06370b588cf5d5536693f3ba0cd6c49a9df02e26475e" Mar 14 00:17:10.609625 containerd[2127]: time="2026-03-14T00:17:10.609566692Z" level=info msg="CreateContainer within sandbox \"8b51c87a689472786622327fd77741f5e4d5c32880262bebb7e1e995ce8fd375\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 14 00:17:10.637335 containerd[2127]: time="2026-03-14T00:17:10.637249060Z" level=info msg="CreateContainer within sandbox \"8b51c87a689472786622327fd77741f5e4d5c32880262bebb7e1e995ce8fd375\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1ef69aeea8d9e2d845b0df7500b55a8a440682484669c67ea668be8fb6fc4b27\"" Mar 14 00:17:10.638460 containerd[2127]: time="2026-03-14T00:17:10.638405812Z" level=info msg="StartContainer for \"1ef69aeea8d9e2d845b0df7500b55a8a440682484669c67ea668be8fb6fc4b27\"" Mar 14 00:17:10.759494 containerd[2127]: time="2026-03-14T00:17:10.759413897Z" level=info msg="StartContainer for \"1ef69aeea8d9e2d845b0df7500b55a8a440682484669c67ea668be8fb6fc4b27\" returns successfully" Mar 14 00:17:11.563516 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-1f139217fcf041c32d7adb4ab6cf8bcbdbba4b13c5f2070bebed7c258525c2b6-rootfs.mount: Deactivated successfully. Mar 14 00:17:11.570441 containerd[2127]: time="2026-03-14T00:17:11.567857921Z" level=info msg="shim disconnected" id=1f139217fcf041c32d7adb4ab6cf8bcbdbba4b13c5f2070bebed7c258525c2b6 namespace=k8s.io Mar 14 00:17:11.570441 containerd[2127]: time="2026-03-14T00:17:11.567951749Z" level=warning msg="cleaning up after shim disconnected" id=1f139217fcf041c32d7adb4ab6cf8bcbdbba4b13c5f2070bebed7c258525c2b6 namespace=k8s.io Mar 14 00:17:11.570441 containerd[2127]: time="2026-03-14T00:17:11.567974405Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:17:12.621092 kubelet[3696]: I0314 00:17:12.620803 3696 scope.go:117] "RemoveContainer" containerID="1f139217fcf041c32d7adb4ab6cf8bcbdbba4b13c5f2070bebed7c258525c2b6" Mar 14 00:17:12.625384 containerd[2127]: time="2026-03-14T00:17:12.625059018Z" level=info msg="CreateContainer within sandbox \"cc4b681c6f7601ca08e20ee883e22698fa63c6065f109928fc583c9e8881d938\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Mar 14 00:17:12.649857 containerd[2127]: time="2026-03-14T00:17:12.647730234Z" level=info msg="CreateContainer within sandbox \"cc4b681c6f7601ca08e20ee883e22698fa63c6065f109928fc583c9e8881d938\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d9e8288a0cd72f4aca2881d3893cc79e57d0ae79ca53a1ee888f71c1567adfae\"" Mar 14 00:17:12.649857 containerd[2127]: time="2026-03-14T00:17:12.649552902Z" level=info msg="StartContainer for \"d9e8288a0cd72f4aca2881d3893cc79e57d0ae79ca53a1ee888f71c1567adfae\"" Mar 14 00:17:12.755451 containerd[2127]: time="2026-03-14T00:17:12.755377267Z" level=info msg="StartContainer for \"d9e8288a0cd72f4aca2881d3893cc79e57d0ae79ca53a1ee888f71c1567adfae\" returns successfully" Mar 14 00:17:13.992016 kubelet[3696]: E0314 00:17:13.991157 3696 controller.go:195] "Failed to update lease" err="Put 
\"https://172.31.28.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-2?timeout=10s\": context deadline exceeded" Mar 14 00:17:15.380293 containerd[2127]: time="2026-03-14T00:17:15.378736976Z" level=info msg="shim disconnected" id=67af58498d76279e59560b38006b923b94e91bc9d093a1549ddd593608e376d2 namespace=k8s.io Mar 14 00:17:15.380293 containerd[2127]: time="2026-03-14T00:17:15.378922628Z" level=warning msg="cleaning up after shim disconnected" id=67af58498d76279e59560b38006b923b94e91bc9d093a1549ddd593608e376d2 namespace=k8s.io Mar 14 00:17:15.380293 containerd[2127]: time="2026-03-14T00:17:15.378948008Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:17:15.385623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67af58498d76279e59560b38006b923b94e91bc9d093a1549ddd593608e376d2-rootfs.mount: Deactivated successfully. Mar 14 00:17:15.635167 kubelet[3696]: I0314 00:17:15.635015 3696 scope.go:117] "RemoveContainer" containerID="67af58498d76279e59560b38006b923b94e91bc9d093a1549ddd593608e376d2" Mar 14 00:17:15.639889 containerd[2127]: time="2026-03-14T00:17:15.639832605Z" level=info msg="CreateContainer within sandbox \"4d4db641fa7b19693aae0e7ff18122d87831240effb7d0458e171f56d051bd03\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 14 00:17:15.673111 containerd[2127]: time="2026-03-14T00:17:15.672896229Z" level=info msg="CreateContainer within sandbox \"4d4db641fa7b19693aae0e7ff18122d87831240effb7d0458e171f56d051bd03\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"cb28947f00354950ca4c4d074c15af6b266aa325c9a756aa0714a52e27da3057\"" Mar 14 00:17:15.674984 containerd[2127]: time="2026-03-14T00:17:15.673792269Z" level=info msg="StartContainer for \"cb28947f00354950ca4c4d074c15af6b266aa325c9a756aa0714a52e27da3057\"" Mar 14 00:17:15.679958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1608030674.mount: Deactivated successfully. 
Mar 14 00:17:15.797589 containerd[2127]: time="2026-03-14T00:17:15.797523946Z" level=info msg="StartContainer for \"cb28947f00354950ca4c4d074c15af6b266aa325c9a756aa0714a52e27da3057\" returns successfully" Mar 14 00:17:23.993299 kubelet[3696]: E0314 00:17:23.992290 3696 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-2?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 14 00:17:24.268122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9e8288a0cd72f4aca2881d3893cc79e57d0ae79ca53a1ee888f71c1567adfae-rootfs.mount: Deactivated successfully. Mar 14 00:17:24.274419 containerd[2127]: time="2026-03-14T00:17:24.274217344Z" level=info msg="shim disconnected" id=d9e8288a0cd72f4aca2881d3893cc79e57d0ae79ca53a1ee888f71c1567adfae namespace=k8s.io Mar 14 00:17:24.275059 containerd[2127]: time="2026-03-14T00:17:24.274401880Z" level=warning msg="cleaning up after shim disconnected" id=d9e8288a0cd72f4aca2881d3893cc79e57d0ae79ca53a1ee888f71c1567adfae namespace=k8s.io Mar 14 00:17:24.275059 containerd[2127]: time="2026-03-14T00:17:24.274466224Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:17:24.298013 containerd[2127]: time="2026-03-14T00:17:24.297926848Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:17:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:17:24.670807 kubelet[3696]: I0314 00:17:24.670389 3696 scope.go:117] "RemoveContainer" containerID="1f139217fcf041c32d7adb4ab6cf8bcbdbba4b13c5f2070bebed7c258525c2b6" Mar 14 00:17:24.671006 kubelet[3696]: I0314 00:17:24.670962 3696 scope.go:117] "RemoveContainer" containerID="d9e8288a0cd72f4aca2881d3893cc79e57d0ae79ca53a1ee888f71c1567adfae" Mar 14 00:17:24.671376 kubelet[3696]: E0314 00:17:24.671195 3696 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6bf85f8dd-4nntk_tigera-operator(05d19b67-a101-4ffe-b8ba-a4bd387e1baa)\"" pod="tigera-operator/tigera-operator-6bf85f8dd-4nntk" podUID="05d19b67-a101-4ffe-b8ba-a4bd387e1baa" Mar 14 00:17:24.673657 containerd[2127]: time="2026-03-14T00:17:24.673538622Z" level=info msg="RemoveContainer for \"1f139217fcf041c32d7adb4ab6cf8bcbdbba4b13c5f2070bebed7c258525c2b6\"" Mar 14 00:17:24.680565 containerd[2127]: time="2026-03-14T00:17:24.680494458Z" level=info msg="RemoveContainer for \"1f139217fcf041c32d7adb4ab6cf8bcbdbba4b13c5f2070bebed7c258525c2b6\" returns successfully" Mar 14 00:17:28.277770 systemd[1]: run-containerd-runc-k8s.io-a2225435cd603db2e615841cb79eb87d4c43b075237904c98aaaba19f1f40a8a-runc.38xKZi.mount: Deactivated successfully.