Mar 7 00:54:09.258065 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 7 00:54:09.258113 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 6 22:59:59 -00 2026
Mar 7 00:54:09.258139 kernel: KASLR disabled due to lack of seed
Mar 7 00:54:09.258156 kernel: efi: EFI v2.7 by EDK II
Mar 7 00:54:09.258174 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Mar 7 00:54:09.258190 kernel: ACPI: Early table checksum verification disabled
Mar 7 00:54:09.258208 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 7 00:54:09.258224 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 7 00:54:09.258241 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 7 00:54:09.258257 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 7 00:54:09.258277 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 7 00:54:09.258294 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 7 00:54:09.258309 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 7 00:54:09.258326 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 7 00:54:09.258344 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 7 00:54:09.258365 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 7 00:54:09.258383 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 7 00:54:09.258400 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 7 00:54:09.258417 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 7 00:54:09.258434 kernel: printk: bootconsole [uart0] enabled
Mar 7 00:54:09.258450 kernel: NUMA: Failed to initialise from firmware
Mar 7 00:54:09.258468 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 7 00:54:09.258485 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Mar 7 00:54:09.258502 kernel: Zone ranges:
Mar 7 00:54:09.258519 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 7 00:54:09.258535 kernel: DMA32 empty
Mar 7 00:54:09.258556 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 7 00:54:09.258573 kernel: Movable zone start for each node
Mar 7 00:54:09.258589 kernel: Early memory node ranges
Mar 7 00:54:09.258607 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 7 00:54:09.258623 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 7 00:54:09.258640 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 7 00:54:09.258658 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 7 00:54:09.258674 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 7 00:54:09.258691 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 7 00:54:09.258708 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 7 00:54:09.258724 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 7 00:54:09.258741 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 7 00:54:09.258762 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 7 00:54:09.258780 kernel: psci: probing for conduit method from ACPI.
Mar 7 00:54:09.258804 kernel: psci: PSCIv1.0 detected in firmware.
Mar 7 00:54:09.258822 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 7 00:54:09.258839 kernel: psci: Trusted OS migration not required
Mar 7 00:54:09.258861 kernel: psci: SMC Calling Convention v1.1
Mar 7 00:54:09.258880 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Mar 7 00:54:09.258897 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 7 00:54:09.258915 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 7 00:54:09.259855 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 7 00:54:09.259889 kernel: Detected PIPT I-cache on CPU0
Mar 7 00:54:09.259908 kernel: CPU features: detected: GIC system register CPU interface
Mar 7 00:54:09.259926 kernel: CPU features: detected: Spectre-v2
Mar 7 00:54:09.259987 kernel: CPU features: detected: Spectre-v3a
Mar 7 00:54:09.260007 kernel: CPU features: detected: Spectre-BHB
Mar 7 00:54:09.260025 kernel: CPU features: detected: ARM erratum 1742098
Mar 7 00:54:09.260050 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 7 00:54:09.260069 kernel: alternatives: applying boot alternatives
Mar 7 00:54:09.260089 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6
Mar 7 00:54:09.260111 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 7 00:54:09.260132 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 7 00:54:09.260151 kernel: Fallback order for Node 0: 0
Mar 7 00:54:09.260169 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Mar 7 00:54:09.260187 kernel: Policy zone: Normal
Mar 7 00:54:09.260205 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 7 00:54:09.260223 kernel: software IO TLB: area num 2.
Mar 7 00:54:09.260243 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 7 00:54:09.260269 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Mar 7 00:54:09.260287 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 7 00:54:09.260305 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 7 00:54:09.260324 kernel: rcu: RCU event tracing is enabled.
Mar 7 00:54:09.260342 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 7 00:54:09.260361 kernel: Trampoline variant of Tasks RCU enabled.
Mar 7 00:54:09.260379 kernel: Tracing variant of Tasks RCU enabled.
Mar 7 00:54:09.260397 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 7 00:54:09.260415 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 7 00:54:09.260433 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 7 00:54:09.260451 kernel: GICv3: 96 SPIs implemented
Mar 7 00:54:09.260473 kernel: GICv3: 0 Extended SPIs implemented
Mar 7 00:54:09.260491 kernel: Root IRQ handler: gic_handle_irq
Mar 7 00:54:09.260508 kernel: GICv3: GICv3 features: 16 PPIs
Mar 7 00:54:09.260526 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 7 00:54:09.260544 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 7 00:54:09.260563 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 7 00:54:09.260582 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Mar 7 00:54:09.260600 kernel: GICv3: using LPI property table @0x00000004000d0000
Mar 7 00:54:09.260618 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 7 00:54:09.260636 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Mar 7 00:54:09.260654 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 7 00:54:09.260672 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 7 00:54:09.260695 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 7 00:54:09.260713 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 7 00:54:09.260733 kernel: Console: colour dummy device 80x25
Mar 7 00:54:09.260751 kernel: printk: console [tty1] enabled
Mar 7 00:54:09.260769 kernel: ACPI: Core revision 20230628
Mar 7 00:54:09.260788 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 7 00:54:09.260806 kernel: pid_max: default: 32768 minimum: 301
Mar 7 00:54:09.260824 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 7 00:54:09.260843 kernel: landlock: Up and running.
Mar 7 00:54:09.260887 kernel: SELinux: Initializing.
Mar 7 00:54:09.260909 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 00:54:09.260928 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 7 00:54:09.261094 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 00:54:09.261116 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 7 00:54:09.261135 kernel: rcu: Hierarchical SRCU implementation.
Mar 7 00:54:09.261154 kernel: rcu: Max phase no-delay instances is 400.
Mar 7 00:54:09.261172 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 7 00:54:09.261191 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 7 00:54:09.261214 kernel: Remapping and enabling EFI services.
Mar 7 00:54:09.261233 kernel: smp: Bringing up secondary CPUs ...
Mar 7 00:54:09.261250 kernel: Detected PIPT I-cache on CPU1
Mar 7 00:54:09.261268 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 7 00:54:09.261288 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Mar 7 00:54:09.261306 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 7 00:54:09.261324 kernel: smp: Brought up 1 node, 2 CPUs
Mar 7 00:54:09.261341 kernel: SMP: Total of 2 processors activated.
Mar 7 00:54:09.261359 kernel: CPU features: detected: 32-bit EL0 Support
Mar 7 00:54:09.261381 kernel: CPU features: detected: 32-bit EL1 Support
Mar 7 00:54:09.261400 kernel: CPU features: detected: CRC32 instructions
Mar 7 00:54:09.261418 kernel: CPU: All CPU(s) started at EL1
Mar 7 00:54:09.261447 kernel: alternatives: applying system-wide alternatives
Mar 7 00:54:09.261470 kernel: devtmpfs: initialized
Mar 7 00:54:09.261489 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 7 00:54:09.261508 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 7 00:54:09.261527 kernel: pinctrl core: initialized pinctrl subsystem
Mar 7 00:54:09.261546 kernel: SMBIOS 3.0.0 present.
Mar 7 00:54:09.261568 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 7 00:54:09.261587 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 7 00:54:09.261606 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 7 00:54:09.261625 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 7 00:54:09.261644 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 7 00:54:09.261664 kernel: audit: initializing netlink subsys (disabled)
Mar 7 00:54:09.261683 kernel: audit: type=2000 audit(0.297:1): state=initialized audit_enabled=0 res=1
Mar 7 00:54:09.261702 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 7 00:54:09.261725 kernel: cpuidle: using governor menu
Mar 7 00:54:09.261744 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 7 00:54:09.261763 kernel: ASID allocator initialised with 65536 entries
Mar 7 00:54:09.261782 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 7 00:54:09.261800 kernel: Serial: AMBA PL011 UART driver
Mar 7 00:54:09.261819 kernel: Modules: 17488 pages in range for non-PLT usage
Mar 7 00:54:09.261838 kernel: Modules: 509008 pages in range for PLT usage
Mar 7 00:54:09.261857 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 7 00:54:09.261876 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 7 00:54:09.261900 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 7 00:54:09.261919 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 7 00:54:09.261962 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 7 00:54:09.261985 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 7 00:54:09.262004 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 7 00:54:09.262023 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 7 00:54:09.262041 kernel: ACPI: Added _OSI(Module Device)
Mar 7 00:54:09.262060 kernel: ACPI: Added _OSI(Processor Device)
Mar 7 00:54:09.262079 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 7 00:54:09.262105 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 7 00:54:09.262123 kernel: ACPI: Interpreter enabled
Mar 7 00:54:09.262142 kernel: ACPI: Using GIC for interrupt routing
Mar 7 00:54:09.262161 kernel: ACPI: MCFG table detected, 1 entries
Mar 7 00:54:09.262179 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Mar 7 00:54:09.262508 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 7 00:54:09.262747 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 7 00:54:09.263290 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 7 00:54:09.263573 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Mar 7 00:54:09.263807 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Mar 7 00:54:09.263837 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 7 00:54:09.263858 kernel: acpiphp: Slot [1] registered
Mar 7 00:54:09.263877 kernel: acpiphp: Slot [2] registered
Mar 7 00:54:09.263896 kernel: acpiphp: Slot [3] registered
Mar 7 00:54:09.263914 kernel: acpiphp: Slot [4] registered
Mar 7 00:54:09.263970 kernel: acpiphp: Slot [5] registered
Mar 7 00:54:09.264011 kernel: acpiphp: Slot [6] registered
Mar 7 00:54:09.264060 kernel: acpiphp: Slot [7] registered
Mar 7 00:54:09.264082 kernel: acpiphp: Slot [8] registered
Mar 7 00:54:09.264101 kernel: acpiphp: Slot [9] registered
Mar 7 00:54:09.264120 kernel: acpiphp: Slot [10] registered
Mar 7 00:54:09.264138 kernel: acpiphp: Slot [11] registered
Mar 7 00:54:09.264157 kernel: acpiphp: Slot [12] registered
Mar 7 00:54:09.264176 kernel: acpiphp: Slot [13] registered
Mar 7 00:54:09.264212 kernel: acpiphp: Slot [14] registered
Mar 7 00:54:09.264233 kernel: acpiphp: Slot [15] registered
Mar 7 00:54:09.264260 kernel: acpiphp: Slot [16] registered
Mar 7 00:54:09.264279 kernel: acpiphp: Slot [17] registered
Mar 7 00:54:09.264299 kernel: acpiphp: Slot [18] registered
Mar 7 00:54:09.264318 kernel: acpiphp: Slot [19] registered
Mar 7 00:54:09.264337 kernel: acpiphp: Slot [20] registered
Mar 7 00:54:09.264356 kernel: acpiphp: Slot [21] registered
Mar 7 00:54:09.264375 kernel: acpiphp: Slot [22] registered
Mar 7 00:54:09.264394 kernel: acpiphp: Slot [23] registered
Mar 7 00:54:09.264413 kernel: acpiphp: Slot [24] registered
Mar 7 00:54:09.264436 kernel: acpiphp: Slot [25] registered
Mar 7 00:54:09.264456 kernel: acpiphp: Slot [26] registered
Mar 7 00:54:09.264474 kernel: acpiphp: Slot [27] registered
Mar 7 00:54:09.264494 kernel: acpiphp: Slot [28] registered
Mar 7 00:54:09.264513 kernel: acpiphp: Slot [29] registered
Mar 7 00:54:09.264533 kernel: acpiphp: Slot [30] registered
Mar 7 00:54:09.264552 kernel: acpiphp: Slot [31] registered
Mar 7 00:54:09.264572 kernel: PCI host bridge to bus 0000:00
Mar 7 00:54:09.264911 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 7 00:54:09.267829 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 7 00:54:09.268134 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 7 00:54:09.268354 kernel: pci_bus 0000:00: root bus resource [bus 00]
Mar 7 00:54:09.268626 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 7 00:54:09.268919 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 7 00:54:09.269254 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 7 00:54:09.269538 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 7 00:54:09.269805 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 7 00:54:09.274186 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 7 00:54:09.274470 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 7 00:54:09.274703 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 7 00:54:09.277030 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 7 00:54:09.277331 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 7 00:54:09.277569 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 7 00:54:09.277775 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 7 00:54:09.278029 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 7 00:54:09.278238 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 7 00:54:09.278268 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 7 00:54:09.278289 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 7 00:54:09.278309 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 7 00:54:09.278330 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 7 00:54:09.278358 kernel: iommu: Default domain type: Translated
Mar 7 00:54:09.278378 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 7 00:54:09.278397 kernel: efivars: Registered efivars operations
Mar 7 00:54:09.278417 kernel: vgaarb: loaded
Mar 7 00:54:09.278437 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 7 00:54:09.278456 kernel: VFS: Disk quotas dquot_6.6.0
Mar 7 00:54:09.278477 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 7 00:54:09.278497 kernel: pnp: PnP ACPI init
Mar 7 00:54:09.278779 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 7 00:54:09.278831 kernel: pnp: PnP ACPI: found 1 devices
Mar 7 00:54:09.278851 kernel: NET: Registered PF_INET protocol family
Mar 7 00:54:09.278872 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 7 00:54:09.278892 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 7 00:54:09.278912 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 7 00:54:09.278931 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 7 00:54:09.278997 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 7 00:54:09.279018 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 7 00:54:09.279046 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 00:54:09.279067 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 7 00:54:09.279087 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 7 00:54:09.279107 kernel: PCI: CLS 0 bytes, default 64
Mar 7 00:54:09.279126 kernel: kvm [1]: HYP mode not available
Mar 7 00:54:09.279146 kernel: Initialise system trusted keyrings
Mar 7 00:54:09.279165 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 7 00:54:09.279185 kernel: Key type asymmetric registered
Mar 7 00:54:09.279204 kernel: Asymmetric key parser 'x509' registered
Mar 7 00:54:09.279230 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 7 00:54:09.279251 kernel: io scheduler mq-deadline registered
Mar 7 00:54:09.279271 kernel: io scheduler kyber registered
Mar 7 00:54:09.279290 kernel: io scheduler bfq registered
Mar 7 00:54:09.279576 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 7 00:54:09.279611 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 7 00:54:09.279631 kernel: ACPI: button: Power Button [PWRB]
Mar 7 00:54:09.279651 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 7 00:54:09.279678 kernel: ACPI: button: Sleep Button [SLPB]
Mar 7 00:54:09.279700 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 7 00:54:09.279720 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 7 00:54:09.281171 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 7 00:54:09.281223 kernel: printk: console [ttyS0] disabled
Mar 7 00:54:09.281244 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 7 00:54:09.281266 kernel: printk: console [ttyS0] enabled
Mar 7 00:54:09.281286 kernel: printk: bootconsole [uart0] disabled
Mar 7 00:54:09.281305 kernel: thunder_xcv, ver 1.0
Mar 7 00:54:09.281324 kernel: thunder_bgx, ver 1.0
Mar 7 00:54:09.281353 kernel: nicpf, ver 1.0
Mar 7 00:54:09.281373 kernel: nicvf, ver 1.0
Mar 7 00:54:09.281652 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 7 00:54:09.281879 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-07T00:54:08 UTC (1772844848)
Mar 7 00:54:09.281915 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 7 00:54:09.283961 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 7 00:54:09.284010 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 7 00:54:09.284043 kernel: watchdog: Hard watchdog permanently disabled
Mar 7 00:54:09.284063 kernel: NET: Registered PF_INET6 protocol family
Mar 7 00:54:09.284083 kernel: Segment Routing with IPv6
Mar 7 00:54:09.284103 kernel: In-situ OAM (IOAM) with IPv6
Mar 7 00:54:09.284123 kernel: NET: Registered PF_PACKET protocol family
Mar 7 00:54:09.284142 kernel: Key type dns_resolver registered
Mar 7 00:54:09.284163 kernel: registered taskstats version 1
Mar 7 00:54:09.284183 kernel: Loading compiled-in X.509 certificates
Mar 7 00:54:09.284203 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: e62b4e4ebcb406beff1271ecc7444548c4ab67e9'
Mar 7 00:54:09.284222 kernel: Key type .fscrypt registered
Mar 7 00:54:09.284249 kernel: Key type fscrypt-provisioning registered
Mar 7 00:54:09.284268 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 7 00:54:09.284287 kernel: ima: Allocated hash algorithm: sha1
Mar 7 00:54:09.284307 kernel: ima: No architecture policies found
Mar 7 00:54:09.284326 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 7 00:54:09.284346 kernel: clk: Disabling unused clocks
Mar 7 00:54:09.284365 kernel: Freeing unused kernel memory: 39424K
Mar 7 00:54:09.284384 kernel: Run /init as init process
Mar 7 00:54:09.284404 kernel: with arguments:
Mar 7 00:54:09.284429 kernel: /init
Mar 7 00:54:09.284448 kernel: with environment:
Mar 7 00:54:09.284466 kernel: HOME=/
Mar 7 00:54:09.284486 kernel: TERM=linux
Mar 7 00:54:09.284512 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 00:54:09.284538 systemd[1]: Detected virtualization amazon.
Mar 7 00:54:09.284560 systemd[1]: Detected architecture arm64.
Mar 7 00:54:09.284585 systemd[1]: Running in initrd.
Mar 7 00:54:09.284606 systemd[1]: No hostname configured, using default hostname.
Mar 7 00:54:09.284626 systemd[1]: Hostname set to .
Mar 7 00:54:09.284647 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 00:54:09.284669 systemd[1]: Queued start job for default target initrd.target.
Mar 7 00:54:09.284690 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 00:54:09.284711 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 00:54:09.284734 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 7 00:54:09.284760 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 00:54:09.284782 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 7 00:54:09.284804 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 7 00:54:09.284828 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 7 00:54:09.284850 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 7 00:54:09.284900 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 00:54:09.284923 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 00:54:09.286111 systemd[1]: Reached target paths.target - Path Units.
Mar 7 00:54:09.286146 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 00:54:09.286168 systemd[1]: Reached target swap.target - Swaps.
Mar 7 00:54:09.286189 systemd[1]: Reached target timers.target - Timer Units.
Mar 7 00:54:09.286210 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 00:54:09.286232 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 00:54:09.286254 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 00:54:09.286275 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 00:54:09.286297 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 00:54:09.286327 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 00:54:09.286349 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 00:54:09.286371 systemd[1]: Reached target sockets.target - Socket Units.
Mar 7 00:54:09.286393 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 7 00:54:09.286415 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 00:54:09.286436 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 7 00:54:09.286457 systemd[1]: Starting systemd-fsck-usr.service...
Mar 7 00:54:09.286479 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 00:54:09.286505 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 00:54:09.286526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:54:09.286548 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 7 00:54:09.286569 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 00:54:09.286594 systemd[1]: Finished systemd-fsck-usr.service.
Mar 7 00:54:09.286619 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 00:54:09.286706 systemd-journald[251]: Collecting audit messages is disabled.
Mar 7 00:54:09.286753 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 7 00:54:09.286774 kernel: Bridge firewalling registered
Mar 7 00:54:09.286801 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 00:54:09.286823 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 00:54:09.286844 systemd-journald[251]: Journal started
Mar 7 00:54:09.286882 systemd-journald[251]: Runtime Journal (/run/log/journal/ec218296cfdcb8be6186d67457ca7bb9) is 8.0M, max 75.3M, 67.3M free.
Mar 7 00:54:09.229491 systemd-modules-load[252]: Inserted module 'overlay'
Mar 7 00:54:09.260186 systemd-modules-load[252]: Inserted module 'br_netfilter'
Mar 7 00:54:09.302430 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 00:54:09.303824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:54:09.315040 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 00:54:09.322308 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:54:09.339333 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 00:54:09.345098 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 00:54:09.364893 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 00:54:09.397171 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 00:54:09.411183 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 00:54:09.414604 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 00:54:09.432377 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 7 00:54:09.448759 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 00:54:09.476329 dracut-cmdline[287]: dracut-dracut-053
Mar 7 00:54:09.488016 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9d22c40559a0d209dc0fcc2dfdd5ddf9671e6da0cc59463f610ba522f01325a6
Mar 7 00:54:09.540431 systemd-resolved[289]: Positive Trust Anchors:
Mar 7 00:54:09.542761 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 00:54:09.542828 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 00:54:09.679994 kernel: SCSI subsystem initialized
Mar 7 00:54:09.688133 kernel: Loading iSCSI transport class v2.0-870.
Mar 7 00:54:09.702073 kernel: iscsi: registered transport (tcp)
Mar 7 00:54:09.725639 kernel: iscsi: registered transport (qla4xxx)
Mar 7 00:54:09.725719 kernel: QLogic iSCSI HBA Driver
Mar 7 00:54:09.799970 kernel: random: crng init done
Mar 7 00:54:09.798290 systemd-resolved[289]: Defaulting to hostname 'linux'.
Mar 7 00:54:09.800796 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 00:54:09.807861 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 00:54:09.832003 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 7 00:54:09.846336 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 7 00:54:09.883621 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 7 00:54:09.883694 kernel: device-mapper: uevent: version 1.0.3
Mar 7 00:54:09.883733 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 7 00:54:09.968022 kernel: raid6: neonx8 gen() 6603 MB/s
Mar 7 00:54:09.970994 kernel: raid6: neonx4 gen() 6465 MB/s
Mar 7 00:54:09.988002 kernel: raid6: neonx2 gen() 5381 MB/s
Mar 7 00:54:10.004997 kernel: raid6: neonx1 gen() 3918 MB/s
Mar 7 00:54:10.022008 kernel: raid6: int64x8 gen() 3783 MB/s
Mar 7 00:54:10.039995 kernel: raid6: int64x4 gen() 3663 MB/s
Mar 7 00:54:10.056995 kernel: raid6: int64x2 gen() 3563 MB/s
Mar 7 00:54:10.075091 kernel: raid6: int64x1 gen() 2740 MB/s
Mar 7 00:54:10.075185 kernel: raid6: using algorithm neonx8 gen() 6603 MB/s
Mar 7 00:54:10.094096 kernel: raid6: .... xor() 4873 MB/s, rmw enabled
Mar 7 00:54:10.094172 kernel: raid6: using neon recovery algorithm
Mar 7 00:54:10.103364 kernel: xor: measuring software checksum speed
Mar 7 00:54:10.103436 kernel: 8regs : 11025 MB/sec
Mar 7 00:54:10.104610 kernel: 32regs : 11961 MB/sec
Mar 7 00:54:10.107045 kernel: arm64_neon : 8759 MB/sec
Mar 7 00:54:10.107081 kernel: xor: using function: 32regs (11961 MB/sec)
Mar 7 00:54:10.194000 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 7 00:54:10.214729 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 00:54:10.227280 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 00:54:10.273465 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Mar 7 00:54:10.282564 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 00:54:10.296731 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 7 00:54:10.334866 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Mar 7 00:54:10.392677 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 00:54:10.406374 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 00:54:10.528864 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 00:54:10.553264 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 7 00:54:10.599316 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 7 00:54:10.604784 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 00:54:10.614208 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 00:54:10.619711 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 00:54:10.637270 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 7 00:54:10.686062 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 00:54:10.758991 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 7 00:54:10.762574 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 7 00:54:10.768408 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 7 00:54:10.768778 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 7 00:54:10.768017 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 00:54:10.768268 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 00:54:10.780276 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 00:54:10.783643 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 00:54:10.788002 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:54:10.801320 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:c8:17:68:48:1f
Mar 7 00:54:10.795002 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:54:10.810598 (udev-worker)[542]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 00:54:10.811366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:54:10.845989 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 7 00:54:10.848078 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 7 00:54:10.855102 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:54:10.867971 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 7 00:54:10.870247 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 7 00:54:10.885599 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 7 00:54:10.885665 kernel: GPT:9289727 != 33554431
Mar 7 00:54:10.888920 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 7 00:54:10.888993 kernel: GPT:9289727 != 33554431
Mar 7 00:54:10.889019 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 7 00:54:10.890156 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 00:54:10.917306 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 00:54:10.980478 kernel: BTRFS: device fsid 237c8587-8110-47ef-99f9-37e4ed4d3b31 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (526)
Mar 7 00:54:11.024982 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (534)
Mar 7 00:54:11.048914 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 7 00:54:11.103231 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 7 00:54:11.109482 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 7 00:54:11.138125 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 7 00:54:11.156670 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 7 00:54:11.173246 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 7 00:54:11.197971 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 00:54:11.198407 disk-uuid[662]: Primary Header is updated.
Mar 7 00:54:11.198407 disk-uuid[662]: Secondary Entries is updated.
Mar 7 00:54:11.198407 disk-uuid[662]: Secondary Header is updated.
Mar 7 00:54:12.232990 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 7 00:54:12.234562 disk-uuid[663]: The operation has completed successfully.
Mar 7 00:54:12.428800 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 7 00:54:12.429512 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 7 00:54:12.481237 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 7 00:54:12.504103 sh[1008]: Success
Mar 7 00:54:12.533984 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 7 00:54:12.725829 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 7 00:54:12.729115 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 7 00:54:12.746665 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 7 00:54:12.770015 kernel: BTRFS info (device dm-0): first mount of filesystem 237c8587-8110-47ef-99f9-37e4ed4d3b31
Mar 7 00:54:12.770082 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:54:12.770110 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 7 00:54:12.771970 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 7 00:54:12.774512 kernel: BTRFS info (device dm-0): using free space tree
Mar 7 00:54:12.870983 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 7 00:54:12.887776 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 7 00:54:12.894783 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 7 00:54:12.909195 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 7 00:54:12.921016 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 7 00:54:12.946215 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:54:12.946276 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:54:12.947559 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 00:54:13.008976 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 00:54:13.028646 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 7 00:54:13.033780 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:54:13.045904 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 7 00:54:13.058360 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 7 00:54:13.114213 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 00:54:13.129392 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 00:54:13.210180 systemd-networkd[1202]: lo: Link UP
Mar 7 00:54:13.210202 systemd-networkd[1202]: lo: Gained carrier
Mar 7 00:54:13.218540 systemd-networkd[1202]: Enumeration completed
Mar 7 00:54:13.218842 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 00:54:13.221856 systemd-networkd[1202]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:54:13.221863 systemd-networkd[1202]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 00:54:13.237891 systemd[1]: Reached target network.target - Network.
Mar 7 00:54:13.248312 systemd-networkd[1202]: eth0: Link UP
Mar 7 00:54:13.248326 systemd-networkd[1202]: eth0: Gained carrier
Mar 7 00:54:13.248345 systemd-networkd[1202]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:54:13.263091 systemd-networkd[1202]: eth0: DHCPv4 address 172.31.21.232/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 7 00:54:13.289108 ignition[1161]: Ignition 2.19.0
Mar 7 00:54:13.289646 ignition[1161]: Stage: fetch-offline
Mar 7 00:54:13.291516 ignition[1161]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:54:13.291541 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 00:54:13.293169 ignition[1161]: Ignition finished successfully
Mar 7 00:54:13.300622 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 00:54:13.313362 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 7 00:54:13.347284 ignition[1211]: Ignition 2.19.0
Mar 7 00:54:13.347315 ignition[1211]: Stage: fetch
Mar 7 00:54:13.349195 ignition[1211]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:54:13.349254 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 00:54:13.350519 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 00:54:13.368336 ignition[1211]: PUT result: OK
Mar 7 00:54:13.371468 ignition[1211]: parsed url from cmdline: ""
Mar 7 00:54:13.371492 ignition[1211]: no config URL provided
Mar 7 00:54:13.371507 ignition[1211]: reading system config file "/usr/lib/ignition/user.ign"
Mar 7 00:54:13.371560 ignition[1211]: no config at "/usr/lib/ignition/user.ign"
Mar 7 00:54:13.371595 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 00:54:13.381820 ignition[1211]: PUT result: OK
Mar 7 00:54:13.381899 ignition[1211]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 7 00:54:13.384778 ignition[1211]: GET result: OK
Mar 7 00:54:13.384961 ignition[1211]: parsing config with SHA512: e0ff6cab040bcb2a6a6cd68bcc636debb763c98640e73cede228f49dfd121c6e7af5cd749b8e7984c5937c8de832ac3b8f8863aeaad43cc867331d3529f4c057
Mar 7 00:54:13.398214 unknown[1211]: fetched base config from "system"
Mar 7 00:54:13.398244 unknown[1211]: fetched base config from "system"
Mar 7 00:54:13.398283 unknown[1211]: fetched user config from "aws"
Mar 7 00:54:13.402377 ignition[1211]: fetch: fetch complete
Mar 7 00:54:13.402427 ignition[1211]: fetch: fetch passed
Mar 7 00:54:13.403858 ignition[1211]: Ignition finished successfully
Mar 7 00:54:13.412024 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 7 00:54:13.422243 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 7 00:54:13.451203 ignition[1218]: Ignition 2.19.0
Mar 7 00:54:13.451702 ignition[1218]: Stage: kargs
Mar 7 00:54:13.452427 ignition[1218]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:54:13.452453 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 00:54:13.452611 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 00:54:13.462117 ignition[1218]: PUT result: OK
Mar 7 00:54:13.466899 ignition[1218]: kargs: kargs passed
Mar 7 00:54:13.467105 ignition[1218]: Ignition finished successfully
Mar 7 00:54:13.472582 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 7 00:54:13.485916 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 7 00:54:13.512385 ignition[1225]: Ignition 2.19.0
Mar 7 00:54:13.512413 ignition[1225]: Stage: disks
Mar 7 00:54:13.513399 ignition[1225]: no configs at "/usr/lib/ignition/base.d"
Mar 7 00:54:13.513425 ignition[1225]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 00:54:13.513604 ignition[1225]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 00:54:13.520582 ignition[1225]: PUT result: OK
Mar 7 00:54:13.527284 ignition[1225]: disks: disks passed
Mar 7 00:54:13.527436 ignition[1225]: Ignition finished successfully
Mar 7 00:54:13.530813 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 7 00:54:13.533790 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 7 00:54:13.536795 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 00:54:13.539586 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 00:54:13.542304 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 00:54:13.544615 systemd[1]: Reached target basic.target - Basic System.
Mar 7 00:54:13.569769 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 7 00:54:13.628146 systemd-fsck[1233]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 7 00:54:13.636222 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 7 00:54:13.651188 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 7 00:54:13.734969 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 596a8ea8-9d3d-4d06-a56e-9d3ebd3cb76d r/w with ordered data mode. Quota mode: none.
Mar 7 00:54:13.736560 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 7 00:54:13.741023 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 7 00:54:13.757174 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 00:54:13.763592 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 7 00:54:13.768220 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 7 00:54:13.768304 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 7 00:54:13.768354 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 00:54:13.801021 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 7 00:54:13.806202 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 7 00:54:13.833987 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1252)
Mar 7 00:54:13.837952 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:54:13.837995 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:54:13.839300 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 00:54:13.873975 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 00:54:13.876112 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 00:54:13.964111 initrd-setup-root[1276]: cut: /sysroot/etc/passwd: No such file or directory
Mar 7 00:54:13.975188 initrd-setup-root[1283]: cut: /sysroot/etc/group: No such file or directory
Mar 7 00:54:13.984723 initrd-setup-root[1290]: cut: /sysroot/etc/shadow: No such file or directory
Mar 7 00:54:13.993704 initrd-setup-root[1297]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 7 00:54:14.164913 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 7 00:54:14.176153 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 7 00:54:14.191228 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 7 00:54:14.206704 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 7 00:54:14.209408 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:54:14.265202 ignition[1365]: INFO : Ignition 2.19.0
Mar 7 00:54:14.268896 ignition[1365]: INFO : Stage: mount
Mar 7 00:54:14.268896 ignition[1365]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 00:54:14.268896 ignition[1365]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 00:54:14.268896 ignition[1365]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 00:54:14.270638 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 7 00:54:14.279980 ignition[1365]: INFO : PUT result: OK
Mar 7 00:54:14.287518 ignition[1365]: INFO : mount: mount passed
Mar 7 00:54:14.289394 ignition[1365]: INFO : Ignition finished successfully
Mar 7 00:54:14.293328 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 7 00:54:14.309962 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 7 00:54:14.331321 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 7 00:54:14.356971 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1377)
Mar 7 00:54:14.359494 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 6e876a94-9f11-430e-8016-2af72863cd2e
Mar 7 00:54:14.361270 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 7 00:54:14.362610 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 7 00:54:14.368977 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 7 00:54:14.373056 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 7 00:54:14.413737 ignition[1395]: INFO : Ignition 2.19.0
Mar 7 00:54:14.413737 ignition[1395]: INFO : Stage: files
Mar 7 00:54:14.421426 ignition[1395]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 00:54:14.421426 ignition[1395]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 00:54:14.421426 ignition[1395]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 00:54:14.431873 ignition[1395]: INFO : PUT result: OK
Mar 7 00:54:14.435920 ignition[1395]: DEBUG : files: compiled without relabeling support, skipping
Mar 7 00:54:14.438573 ignition[1395]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 7 00:54:14.438573 ignition[1395]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 7 00:54:14.450968 ignition[1395]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 7 00:54:14.457130 ignition[1395]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 7 00:54:14.461453 unknown[1395]: wrote ssh authorized keys file for user: core
Mar 7 00:54:14.464017 ignition[1395]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 7 00:54:14.469992 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 00:54:14.469992 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 7 00:54:14.469992 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 7 00:54:14.469992 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 7 00:54:14.557645 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 7 00:54:14.692294 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 7 00:54:14.699050 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 7 00:54:14.699050 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 7 00:54:14.699050 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 00:54:14.699050 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 7 00:54:14.699050 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 00:54:14.699050 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 7 00:54:14.699050 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 00:54:14.699050 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 7 00:54:14.699050 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 00:54:14.699050 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 7 00:54:14.699050 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 00:54:14.699050 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 00:54:14.699050 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 00:54:14.699050 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 7 00:54:15.164196 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 7 00:54:15.236138 systemd-networkd[1202]: eth0: Gained IPv6LL
Mar 7 00:54:15.567536 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 7 00:54:15.567536 ignition[1395]: INFO : files: op(c): [started] processing unit "containerd.service"
Mar 7 00:54:15.575657 ignition[1395]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 00:54:15.575657 ignition[1395]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 7 00:54:15.575657 ignition[1395]: INFO : files: op(c): [finished] processing unit "containerd.service"
Mar 7 00:54:15.575657 ignition[1395]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Mar 7 00:54:15.575657 ignition[1395]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 00:54:15.575657 ignition[1395]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 7 00:54:15.575657 ignition[1395]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 7 00:54:15.575657 ignition[1395]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 7 00:54:15.575657 ignition[1395]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 7 00:54:15.575657 ignition[1395]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 00:54:15.575657 ignition[1395]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 7 00:54:15.575657 ignition[1395]: INFO : files: files passed
Mar 7 00:54:15.575657 ignition[1395]: INFO : Ignition finished successfully
Mar 7 00:54:15.592285 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 7 00:54:15.628718 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 7 00:54:15.642249 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 7 00:54:15.646775 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 7 00:54:15.647036 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 7 00:54:15.685215 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 00:54:15.685215 initrd-setup-root-after-ignition[1423]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 00:54:15.692732 initrd-setup-root-after-ignition[1427]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 7 00:54:15.699822 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 00:54:15.700224 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 7 00:54:15.718748 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 7 00:54:15.767828 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 7 00:54:15.768272 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 7 00:54:15.777155 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 7 00:54:15.779552 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 7 00:54:15.782400 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 7 00:54:15.800182 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 7 00:54:15.832029 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 00:54:15.847276 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 7 00:54:15.874725 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 7 00:54:15.880382 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 00:54:15.883824 systemd[1]: Stopped target timers.target - Timer Units.
Mar 7 00:54:15.886687 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 7 00:54:15.887007 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 7 00:54:15.899499 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 7 00:54:15.903138 systemd[1]: Stopped target basic.target - Basic System.
Mar 7 00:54:15.910587 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 7 00:54:15.913333 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 7 00:54:15.916792 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 7 00:54:15.924452 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 7 00:54:15.931531 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 7 00:54:15.935081 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 7 00:54:15.940767 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 7 00:54:15.945450 systemd[1]: Stopped target swap.target - Swaps.
Mar 7 00:54:15.949300 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 7 00:54:15.949569 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 7 00:54:15.954245 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 7 00:54:15.964692 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 00:54:15.968429 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 7 00:54:15.970883 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 00:54:15.974456 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 7 00:54:15.974761 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 7 00:54:15.987481 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 7 00:54:15.988064 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 7 00:54:15.996241 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 7 00:54:15.997312 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 7 00:54:16.013723 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 7 00:54:16.015863 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 7 00:54:16.016850 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 00:54:16.029443 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 7 00:54:16.031590 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 7 00:54:16.037819 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 00:54:16.048994 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 7 00:54:16.050805 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 7 00:54:16.069751 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 7 00:54:16.071481 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 7 00:54:16.081927 ignition[1447]: INFO : Ignition 2.19.0
Mar 7 00:54:16.081927 ignition[1447]: INFO : Stage: umount
Mar 7 00:54:16.087228 ignition[1447]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 7 00:54:16.087228 ignition[1447]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 7 00:54:16.087228 ignition[1447]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 7 00:54:16.096037 ignition[1447]: INFO : PUT result: OK
Mar 7 00:54:16.102014 ignition[1447]: INFO : umount: umount passed
Mar 7 00:54:16.105154 ignition[1447]: INFO : Ignition finished successfully
Mar 7 00:54:16.104757 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 7 00:54:16.105366 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 7 00:54:16.115351 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 7 00:54:16.115556 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 7 00:54:16.118400 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 7 00:54:16.118514 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 7 00:54:16.121791 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 7 00:54:16.121900 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 7 00:54:16.129242 systemd[1]: Stopped target network.target - Network.
Mar 7 00:54:16.135098 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 7 00:54:16.135228 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 7 00:54:16.138358 systemd[1]: Stopped target paths.target - Path Units.
Mar 7 00:54:16.140438 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 7 00:54:16.145181 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 00:54:16.148269 systemd[1]: Stopped target slices.target - Slice Units.
Mar 7 00:54:16.150783 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 7 00:54:16.171411 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 7 00:54:16.171511 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 7 00:54:16.177124 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 7 00:54:16.177220 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 7 00:54:16.184217 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 7 00:54:16.184330 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 7 00:54:16.190763 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 7 00:54:16.190875 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 7 00:54:16.193728 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 7 00:54:16.198318 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 7 00:54:16.214095 systemd-networkd[1202]: eth0: DHCPv6 lease lost
Mar 7 00:54:16.223744 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 7 00:54:16.225322 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 7 00:54:16.225616 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 7 00:54:16.239621 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 7 00:54:16.243094 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 7 00:54:16.249033 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 7 00:54:16.249519 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 7 00:54:16.258460 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 7 00:54:16.258604 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 00:54:16.262613 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 7 00:54:16.262726 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 7 00:54:16.280122 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 7 00:54:16.282215 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 7 00:54:16.282341 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 7 00:54:16.286484 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 7 00:54:16.286590 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:54:16.294647 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 7 00:54:16.294758 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 7 00:54:16.309069 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 7 00:54:16.309173 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 00:54:16.312115 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 00:54:16.342595 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 7 00:54:16.345138 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 7 00:54:16.350400 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 7 00:54:16.352844 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 00:54:16.355542 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 7 00:54:16.355672 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 7 00:54:16.355874 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 7 00:54:16.356519 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 00:54:16.363329 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 7 00:54:16.363437 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 7 00:54:16.366299 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 7 00:54:16.366405 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 7 00:54:16.387550 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 7 00:54:16.387657 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 7 00:54:16.398182 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 7 00:54:16.401241 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 7 00:54:16.401372 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 00:54:16.409801 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 7 00:54:16.409905 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:54:16.440501 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 7 00:54:16.442442 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 7 00:54:16.449780 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 7 00:54:16.461282 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 7 00:54:16.482568 systemd[1]: Switching root.
Mar 7 00:54:16.527004 systemd-journald[251]: Journal stopped
Mar 7 00:54:18.590023 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Mar 7 00:54:18.590208 kernel: SELinux: policy capability network_peer_controls=1
Mar 7 00:54:18.590253 kernel: SELinux: policy capability open_perms=1
Mar 7 00:54:18.590295 kernel: SELinux: policy capability extended_socket_class=1
Mar 7 00:54:18.590327 kernel: SELinux: policy capability always_check_network=0
Mar 7 00:54:18.590357 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 7 00:54:18.590389 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 7 00:54:18.590427 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 7 00:54:18.590461 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 7 00:54:18.590493 kernel: audit: type=1403 audit(1772844856.911:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 7 00:54:18.590536 systemd[1]: Successfully loaded SELinux policy in 56.218ms.
Mar 7 00:54:18.590577 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.611ms.
Mar 7 00:54:18.590614 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 7 00:54:18.590655 systemd[1]: Detected virtualization amazon.
Mar 7 00:54:18.590687 systemd[1]: Detected architecture arm64.
Mar 7 00:54:18.590717 systemd[1]: Detected first boot.
Mar 7 00:54:18.590749 systemd[1]: Initializing machine ID from VM UUID.
Mar 7 00:54:18.590786 zram_generator::config[1507]: No configuration found.
Mar 7 00:54:18.590836 systemd[1]: Populated /etc with preset unit settings.
Mar 7 00:54:18.590868 systemd[1]: Queued start job for default target multi-user.target.
Mar 7 00:54:18.590900 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 7 00:54:18.592973 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 7 00:54:18.593061 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 7 00:54:18.593100 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 7 00:54:18.593134 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 7 00:54:18.593179 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 7 00:54:18.593215 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 7 00:54:18.593249 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 7 00:54:18.593281 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 7 00:54:18.593312 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 7 00:54:18.593349 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 7 00:54:18.593384 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 7 00:54:18.593420 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 7 00:54:18.593455 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 7 00:54:18.593493 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 7 00:54:18.593527 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 7 00:54:18.593558 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 7 00:54:18.593591 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 7 00:54:18.593625 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 7 00:54:18.593666 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 7 00:54:18.593699 systemd[1]: Reached target slices.target - Slice Units.
Mar 7 00:54:18.593731 systemd[1]: Reached target swap.target - Swaps.
Mar 7 00:54:18.593767 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 7 00:54:18.593798 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 7 00:54:18.593828 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 7 00:54:18.593858 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 7 00:54:18.593901 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 7 00:54:18.593963 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 7 00:54:18.594004 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 7 00:54:18.594035 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 7 00:54:18.594066 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 7 00:54:18.594098 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 7 00:54:18.594137 systemd[1]: Mounting media.mount - External Media Directory...
Mar 7 00:54:18.594174 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 7 00:54:18.594206 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 7 00:54:18.594238 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 7 00:54:18.594268 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 7 00:54:18.594300 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:54:18.594330 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 7 00:54:18.594363 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 7 00:54:18.594405 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 00:54:18.594436 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 00:54:18.594466 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 00:54:18.594498 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 7 00:54:18.594529 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 00:54:18.594560 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 7 00:54:18.594592 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 7 00:54:18.594627 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 7 00:54:18.594661 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 7 00:54:18.594690 kernel: loop: module loaded
Mar 7 00:54:18.594722 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 7 00:54:18.594750 kernel: fuse: init (API version 7.39)
Mar 7 00:54:18.594780 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 7 00:54:18.594811 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 7 00:54:18.594842 kernel: ACPI: bus type drm_connector registered
Mar 7 00:54:18.594872 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 7 00:54:18.594905 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 7 00:54:18.596974 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 7 00:54:18.597048 systemd[1]: Mounted media.mount - External Media Directory.
Mar 7 00:54:18.597080 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 7 00:54:18.597160 systemd-journald[1614]: Collecting audit messages is disabled.
Mar 7 00:54:18.597229 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 7 00:54:18.597264 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 7 00:54:18.597295 systemd-journald[1614]: Journal started
Mar 7 00:54:18.597344 systemd-journald[1614]: Runtime Journal (/run/log/journal/ec218296cfdcb8be6186d67457ca7bb9) is 8.0M, max 75.3M, 67.3M free.
Mar 7 00:54:18.608875 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 7 00:54:18.614762 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 7 00:54:18.630495 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 7 00:54:18.630892 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 7 00:54:18.640535 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 7 00:54:18.649647 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 00:54:18.650077 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 00:54:18.658132 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 00:54:18.658542 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 00:54:18.665376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 00:54:18.665767 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 00:54:18.673088 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 7 00:54:18.673500 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 7 00:54:18.682463 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 00:54:18.683217 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 00:54:18.690329 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 7 00:54:18.698247 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 7 00:54:18.706014 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 7 00:54:18.738030 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 7 00:54:18.753280 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 7 00:54:18.770125 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 7 00:54:18.776166 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 7 00:54:18.788275 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 7 00:54:18.806066 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 7 00:54:18.809452 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 00:54:18.817190 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 7 00:54:18.820200 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 00:54:18.825793 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 7 00:54:18.841218 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 7 00:54:18.856135 systemd-journald[1614]: Time spent on flushing to /var/log/journal/ec218296cfdcb8be6186d67457ca7bb9 is 95.480ms for 883 entries.
Mar 7 00:54:18.856135 systemd-journald[1614]: System Journal (/var/log/journal/ec218296cfdcb8be6186d67457ca7bb9) is 8.0M, max 195.6M, 187.6M free.
Mar 7 00:54:18.972644 systemd-journald[1614]: Received client request to flush runtime journal.
Mar 7 00:54:18.865326 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 7 00:54:18.872479 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 7 00:54:18.877322 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 7 00:54:18.886479 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 7 00:54:18.903140 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 7 00:54:18.926626 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 7 00:54:18.958182 udevadm[1668]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 7 00:54:18.984576 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 7 00:54:18.993730 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 7 00:54:18.999877 systemd-tmpfiles[1660]: ACLs are not supported, ignoring.
Mar 7 00:54:18.999924 systemd-tmpfiles[1660]: ACLs are not supported, ignoring.
Mar 7 00:54:19.014108 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 7 00:54:19.033430 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 7 00:54:19.100660 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 7 00:54:19.117372 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 7 00:54:19.174536 systemd-tmpfiles[1681]: ACLs are not supported, ignoring.
Mar 7 00:54:19.174585 systemd-tmpfiles[1681]: ACLs are not supported, ignoring.
Mar 7 00:54:19.184912 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 7 00:54:19.853774 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 7 00:54:19.869357 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 7 00:54:19.923812 systemd-udevd[1687]: Using default interface naming scheme 'v255'.
Mar 7 00:54:19.964667 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 7 00:54:19.988340 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 7 00:54:20.089619 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 7 00:54:20.097789 (udev-worker)[1702]: Network interface NamePolicy= disabled on kernel command line.
Mar 7 00:54:20.100960 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Mar 7 00:54:20.250228 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 7 00:54:20.369841 systemd-networkd[1695]: lo: Link UP
Mar 7 00:54:20.370533 systemd-networkd[1695]: lo: Gained carrier
Mar 7 00:54:20.373902 systemd-networkd[1695]: Enumeration completed
Mar 7 00:54:20.374358 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 7 00:54:20.385273 systemd-networkd[1695]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:54:20.388188 systemd-networkd[1695]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 7 00:54:20.399132 systemd-networkd[1695]: eth0: Link UP
Mar 7 00:54:20.401664 systemd-networkd[1695]: eth0: Gained carrier
Mar 7 00:54:20.401709 systemd-networkd[1695]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 7 00:54:20.410418 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 7 00:54:20.422066 systemd-networkd[1695]: eth0: DHCPv4 address 172.31.21.232/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 7 00:54:20.433117 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 7 00:54:20.472015 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1694)
Mar 7 00:54:20.600554 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 7 00:54:20.714850 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 7 00:54:20.718773 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 7 00:54:20.734455 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 7 00:54:20.761144 lvm[1816]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 00:54:20.804128 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 7 00:54:20.808312 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 7 00:54:20.818336 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 7 00:54:20.834987 lvm[1819]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 7 00:54:20.872255 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 7 00:54:20.876359 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 7 00:54:20.879491 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 7 00:54:20.879753 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 7 00:54:20.882442 systemd[1]: Reached target machines.target - Containers.
Mar 7 00:54:20.887777 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 7 00:54:20.898279 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 7 00:54:20.914404 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 7 00:54:20.917395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 00:54:20.920268 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 7 00:54:20.942352 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 7 00:54:20.958256 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 7 00:54:20.966050 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 7 00:54:21.005527 kernel: loop0: detected capacity change from 0 to 114328
Mar 7 00:54:21.009748 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 7 00:54:21.030084 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 7 00:54:21.035292 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 7 00:54:21.067988 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 7 00:54:21.096215 kernel: loop1: detected capacity change from 0 to 209336
Mar 7 00:54:21.157118 kernel: loop2: detected capacity change from 0 to 52536
Mar 7 00:54:21.277014 kernel: loop3: detected capacity change from 0 to 114432
Mar 7 00:54:21.324047 kernel: loop4: detected capacity change from 0 to 114328
Mar 7 00:54:21.345019 kernel: loop5: detected capacity change from 0 to 209336
Mar 7 00:54:21.376999 kernel: loop6: detected capacity change from 0 to 52536
Mar 7 00:54:21.392160 kernel: loop7: detected capacity change from 0 to 114432
Mar 7 00:54:21.415466 (sd-merge)[1840]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 7 00:54:21.417473 (sd-merge)[1840]: Merged extensions into '/usr'.
Mar 7 00:54:21.426914 systemd[1]: Reloading requested from client PID 1827 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 7 00:54:21.426995 systemd[1]: Reloading...
Mar 7 00:54:21.595977 zram_generator::config[1874]: No configuration found.
Mar 7 00:54:21.710990 ldconfig[1823]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 7 00:54:21.901379 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 00:54:22.071352 systemd[1]: Reloading finished in 643 ms.
Mar 7 00:54:22.084149 systemd-networkd[1695]: eth0: Gained IPv6LL
Mar 7 00:54:22.103671 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 7 00:54:22.108265 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 7 00:54:22.112746 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 7 00:54:22.146225 systemd[1]: Starting ensure-sysext.service...
Mar 7 00:54:22.154265 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 7 00:54:22.163131 systemd[1]: Reloading requested from client PID 1929 ('systemctl') (unit ensure-sysext.service)...
Mar 7 00:54:22.163165 systemd[1]: Reloading...
Mar 7 00:54:22.207069 systemd-tmpfiles[1930]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 7 00:54:22.207746 systemd-tmpfiles[1930]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 7 00:54:22.210448 systemd-tmpfiles[1930]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 7 00:54:22.211349 systemd-tmpfiles[1930]: ACLs are not supported, ignoring.
Mar 7 00:54:22.211702 systemd-tmpfiles[1930]: ACLs are not supported, ignoring.
Mar 7 00:54:22.219571 systemd-tmpfiles[1930]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 00:54:22.219603 systemd-tmpfiles[1930]: Skipping /boot
Mar 7 00:54:22.246183 systemd-tmpfiles[1930]: Detected autofs mount point /boot during canonicalization of boot.
Mar 7 00:54:22.246213 systemd-tmpfiles[1930]: Skipping /boot
Mar 7 00:54:22.342968 zram_generator::config[1961]: No configuration found.
Mar 7 00:54:22.588991 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 00:54:22.747808 systemd[1]: Reloading finished in 583 ms.
Mar 7 00:54:22.777774 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 7 00:54:22.803288 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 7 00:54:22.823308 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 7 00:54:22.831596 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 7 00:54:22.850553 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 7 00:54:22.860720 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 7 00:54:22.887850 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:54:22.901255 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 00:54:22.919660 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 00:54:22.929497 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 00:54:22.934385 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 00:54:22.943809 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 00:54:22.944291 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 00:54:22.962243 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 7 00:54:22.979534 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 00:54:22.981321 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 00:54:23.001495 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 00:54:23.004970 augenrules[2046]: No rules
Mar 7 00:54:23.007321 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 00:54:23.014414 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 7 00:54:23.031513 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 7 00:54:23.056510 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 7 00:54:23.065528 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 7 00:54:23.082544 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 7 00:54:23.104463 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 7 00:54:23.123842 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 7 00:54:23.127299 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 7 00:54:23.130134 systemd[1]: Reached target time-set.target - System Time Set.
Mar 7 00:54:23.159531 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 7 00:54:23.165589 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 7 00:54:23.166363 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 7 00:54:23.176234 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 7 00:54:23.176636 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 7 00:54:23.180782 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 7 00:54:23.181237 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 7 00:54:23.201654 systemd[1]: Finished ensure-sysext.service.
Mar 7 00:54:23.211482 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 7 00:54:23.211964 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 7 00:54:23.215905 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 7 00:54:23.238679 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 7 00:54:23.246486 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 7 00:54:23.246732 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 7 00:54:23.246840 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 7 00:54:23.276788 systemd-resolved[2027]: Positive Trust Anchors:
Mar 7 00:54:23.276829 systemd-resolved[2027]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 7 00:54:23.276893 systemd-resolved[2027]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 7 00:54:23.293492 systemd-resolved[2027]: Defaulting to hostname 'linux'.
Mar 7 00:54:23.298111 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 7 00:54:23.301187 systemd[1]: Reached target network.target - Network.
Mar 7 00:54:23.303395 systemd[1]: Reached target network-online.target - Network is Online.
Mar 7 00:54:23.306079 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 7 00:54:23.309063 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 7 00:54:23.311874 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 7 00:54:23.315011 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 7 00:54:23.318421 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 7 00:54:23.321347 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 7 00:54:23.324415 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 7 00:54:23.327453 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 7 00:54:23.327512 systemd[1]: Reached target paths.target - Path Units. Mar 7 00:54:23.329720 systemd[1]: Reached target timers.target - Timer Units. Mar 7 00:54:23.333141 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 7 00:54:23.339348 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 7 00:54:23.345659 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 7 00:54:23.352485 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 7 00:54:23.355319 systemd[1]: Reached target sockets.target - Socket Units. Mar 7 00:54:23.357719 systemd[1]: Reached target basic.target - Basic System. Mar 7 00:54:23.360362 systemd[1]: System is tainted: cgroupsv1 Mar 7 00:54:23.360453 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 7 00:54:23.360509 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 7 00:54:23.365231 systemd[1]: Starting containerd.service - containerd container runtime... Mar 7 00:54:23.380293 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Mar 7 00:54:23.389458 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 7 00:54:23.396033 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 7 00:54:23.423442 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Mar 7 00:54:23.426162 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 7 00:54:23.445162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:54:23.459352 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 7 00:54:23.470153 jq[2085]: false Mar 7 00:54:23.474288 systemd[1]: Started ntpd.service - Network Time Service. Mar 7 00:54:23.498680 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 7 00:54:23.530110 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 7 00:54:23.545182 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 7 00:54:23.558483 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 7 00:54:23.567781 dbus-daemon[2084]: [system] SELinux support is enabled Mar 7 00:54:23.573500 dbus-daemon[2084]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1695 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 7 00:54:23.585574 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 7 00:54:23.624692 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 7 00:54:23.632137 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 7 00:54:23.644120 systemd[1]: Starting update-engine.service - Update Engine... Mar 7 00:54:23.663190 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 7 00:54:23.667694 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 7 00:54:23.704864 extend-filesystems[2086]: Found loop4 Mar 7 00:54:23.704864 extend-filesystems[2086]: Found loop5 Mar 7 00:54:23.704864 extend-filesystems[2086]: Found loop6 Mar 7 00:54:23.704864 extend-filesystems[2086]: Found loop7 Mar 7 00:54:23.704864 extend-filesystems[2086]: Found nvme0n1 Mar 7 00:54:23.704864 extend-filesystems[2086]: Found nvme0n1p1 Mar 7 00:54:23.704864 extend-filesystems[2086]: Found nvme0n1p2 Mar 7 00:54:23.704864 extend-filesystems[2086]: Found nvme0n1p3 Mar 7 00:54:23.704864 extend-filesystems[2086]: Found usr Mar 7 00:54:23.704864 extend-filesystems[2086]: Found nvme0n1p4 Mar 7 00:54:23.704864 extend-filesystems[2086]: Found nvme0n1p6 Mar 7 00:54:23.704864 extend-filesystems[2086]: Found nvme0n1p7 Mar 7 00:54:23.704864 extend-filesystems[2086]: Found nvme0n1p9 Mar 7 00:54:23.704864 extend-filesystems[2086]: Checking size of /dev/nvme0n1p9 Mar 7 00:54:23.713898 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 7 00:54:23.714609 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 7 00:54:23.786072 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 7 00:54:23.820127 jq[2111]: true Mar 7 00:54:23.786657 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 7 00:54:23.830893 ntpd[2093]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:14:43 UTC 2026 (1): Starting Mar 7 00:54:23.862234 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: ntpd 4.2.8p17@1.4004-o Fri Mar 6 22:14:43 UTC 2026 (1): Starting Mar 7 00:54:23.862234 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 7 00:54:23.862234 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: ---------------------------------------------------- Mar 7 00:54:23.862234 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: ntp-4 is maintained by Network Time Foundation, Mar 7 00:54:23.862234 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Mar 7 00:54:23.862234 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: corporation. Support and training for ntp-4 are Mar 7 00:54:23.862234 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: available at https://www.nwtime.org/support Mar 7 00:54:23.862234 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: ---------------------------------------------------- Mar 7 00:54:23.862234 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: proto: precision = 0.096 usec (-23) Mar 7 00:54:23.862234 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: basedate set to 2026-02-22 Mar 7 00:54:23.862234 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: gps base set to 2026-02-22 (week 2407) Mar 7 00:54:23.875402 coreos-metadata[2082]: Mar 07 00:54:23.854 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 7 00:54:23.843902 systemd[1]: motdgen.service: Deactivated successfully. Mar 7 00:54:23.831014 ntpd[2093]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 7 00:54:23.970620 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: Listen and drop on 0 v6wildcard [::]:123 Mar 7 00:54:23.970620 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 7 00:54:23.970620 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: Listen normally on 2 lo 127.0.0.1:123 Mar 7 00:54:23.970620 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: Listen normally on 3 eth0 172.31.21.232:123 Mar 7 00:54:23.970620 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: Listen normally on 4 lo [::1]:123 Mar 7 00:54:23.970620 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: Listen normally on 5 eth0 [fe80::4c8:17ff:fe68:481f%2]:123 Mar 7 00:54:23.970620 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: Listening on routing socket on fd #22 for interface updates Mar 7 00:54:23.970620 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 00:54:23.970620 ntpd[2093]: 7 Mar 00:54:23 ntpd[2093]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 00:54:23.996107 extend-filesystems[2086]: Resized partition 
/dev/nvme0n1p9 Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.879 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.895 INFO Fetch successful Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.895 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.908 INFO Fetch successful Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.908 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.912 INFO Fetch successful Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.912 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.926 INFO Fetch successful Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.926 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.963 INFO Fetch failed with 404: resource not found Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.963 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.968 INFO Fetch successful Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.968 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.972 INFO Fetch successful Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.972 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.973 INFO Fetch successful Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 
00:54:23.973 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.981 INFO Fetch successful Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.981 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 7 00:54:24.001174 coreos-metadata[2082]: Mar 07 00:54:23.984 INFO Fetch successful Mar 7 00:54:23.844575 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 7 00:54:23.831046 ntpd[2093]: ---------------------------------------------------- Mar 7 00:54:24.012636 extend-filesystems[2145]: resize2fs 1.47.1 (20-May-2024) Mar 7 00:54:23.921120 (ntainerd)[2132]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 7 00:54:23.831066 ntpd[2093]: ntp-4 is maintained by Network Time Foundation, Mar 7 00:54:23.973807 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 7 00:54:24.019260 tar[2120]: linux-arm64/LICENSE Mar 7 00:54:24.019260 tar[2120]: linux-arm64/helm Mar 7 00:54:23.831089 ntpd[2093]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 7 00:54:23.989693 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 7 00:54:23.831108 ntpd[2093]: corporation. Support and training for ntp-4 are Mar 7 00:54:23.989792 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 7 00:54:23.831128 ntpd[2093]: available at https://www.nwtime.org/support Mar 7 00:54:24.003263 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Mar 7 00:54:23.831147 ntpd[2093]: ---------------------------------------------------- Mar 7 00:54:24.005873 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 7 00:54:23.837574 ntpd[2093]: proto: precision = 0.096 usec (-23) Mar 7 00:54:24.005930 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 7 00:54:23.852728 ntpd[2093]: basedate set to 2026-02-22 Mar 7 00:54:23.852812 ntpd[2093]: gps base set to 2026-02-22 (week 2407) Mar 7 00:54:23.865781 ntpd[2093]: Listen and drop on 0 v6wildcard [::]:123 Mar 7 00:54:23.865893 ntpd[2093]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 7 00:54:24.030195 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Mar 7 00:54:23.874082 ntpd[2093]: Listen normally on 2 lo 127.0.0.1:123 Mar 7 00:54:23.874181 ntpd[2093]: Listen normally on 3 eth0 172.31.21.232:123 Mar 7 00:54:23.874255 ntpd[2093]: Listen normally on 4 lo [::1]:123 Mar 7 00:54:23.874340 ntpd[2093]: Listen normally on 5 eth0 [fe80::4c8:17ff:fe68:481f%2]:123 Mar 7 00:54:23.874418 ntpd[2093]: Listening on routing socket on fd #22 for interface updates Mar 7 00:54:23.897383 ntpd[2093]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 00:54:23.897442 ntpd[2093]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 7 00:54:23.971259 dbus-daemon[2084]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 7 00:54:24.084321 update_engine[2108]: I20260307 00:54:24.083837 2108 main.cc:92] Flatcar Update Engine starting Mar 7 00:54:24.084917 jq[2128]: true Mar 7 00:54:24.097296 update_engine[2108]: I20260307 00:54:24.096796 2108 update_check_scheduler.cc:74] Next update check in 6m18s Mar 7 00:54:24.144408 systemd[1]: Started update-engine.service - Update Engine. 
Mar 7 00:54:24.150070 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 7 00:54:24.153339 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 7 00:54:24.179174 systemd[1]: Finished setup-oem.service - Setup OEM. Mar 7 00:54:24.188334 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 7 00:54:24.296568 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 7 00:54:24.301470 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 7 00:54:24.363833 systemd-logind[2105]: Watching system buttons on /dev/input/event0 (Power Button) Mar 7 00:54:24.363896 systemd-logind[2105]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 7 00:54:24.366273 systemd-logind[2105]: New seat seat0. Mar 7 00:54:24.368689 systemd[1]: Started systemd-logind.service - User Login Management. Mar 7 00:54:24.409990 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 7 00:54:24.421987 extend-filesystems[2145]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 7 00:54:24.421987 extend-filesystems[2145]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 7 00:54:24.421987 extend-filesystems[2145]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Mar 7 00:54:24.453293 extend-filesystems[2086]: Resized filesystem in /dev/nvme0n1p9 Mar 7 00:54:24.431686 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 7 00:54:24.432381 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 7 00:54:24.502252 bash[2194]: Updated "/home/core/.ssh/authorized_keys" Mar 7 00:54:24.517672 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 7 00:54:24.540468 systemd[1]: Starting sshkeys.service... 
Mar 7 00:54:24.562211 amazon-ssm-agent[2165]: Initializing new seelog logger Mar 7 00:54:24.600285 amazon-ssm-agent[2165]: New Seelog Logger Creation Complete Mar 7 00:54:24.600285 amazon-ssm-agent[2165]: 2026/03/07 00:54:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 00:54:24.600285 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 00:54:24.600285 amazon-ssm-agent[2165]: 2026/03/07 00:54:24 processing appconfig overrides Mar 7 00:54:24.600285 amazon-ssm-agent[2165]: 2026/03/07 00:54:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 00:54:24.600285 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 00:54:24.600285 amazon-ssm-agent[2165]: 2026/03/07 00:54:24 processing appconfig overrides Mar 7 00:54:24.600285 amazon-ssm-agent[2165]: 2026/03/07 00:54:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 00:54:24.600285 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 00:54:24.600285 amazon-ssm-agent[2165]: 2026/03/07 00:54:24 processing appconfig overrides Mar 7 00:54:24.600285 amazon-ssm-agent[2165]: 2026-03-07 00:54:24 INFO Proxy environment variables: Mar 7 00:54:24.620430 amazon-ssm-agent[2165]: 2026/03/07 00:54:24 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 00:54:24.620430 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 7 00:54:24.620430 amazon-ssm-agent[2165]: 2026/03/07 00:54:24 processing appconfig overrides Mar 7 00:54:24.673216 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 7 00:54:24.685899 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Mar 7 00:54:24.694092 amazon-ssm-agent[2165]: 2026-03-07 00:54:24 INFO https_proxy: Mar 7 00:54:24.823147 amazon-ssm-agent[2165]: 2026-03-07 00:54:24 INFO http_proxy: Mar 7 00:54:24.897097 containerd[2132]: time="2026-03-07T00:54:24.896911717Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 7 00:54:24.922891 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 7 00:54:24.939788 amazon-ssm-agent[2165]: 2026-03-07 00:54:24 INFO no_proxy: Mar 7 00:54:25.041737 amazon-ssm-agent[2165]: 2026-03-07 00:54:24 INFO Checking if agent identity type OnPrem can be assumed Mar 7 00:54:25.107974 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (2188) Mar 7 00:54:25.127577 containerd[2132]: time="2026-03-07T00:54:25.125475490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:25.135238 coreos-metadata[2215]: Mar 07 00:54:25.135 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 7 00:54:25.142240 coreos-metadata[2215]: Mar 07 00:54:25.139 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 7 00:54:25.142379 amazon-ssm-agent[2165]: 2026-03-07 00:54:24 INFO Checking if agent identity type EC2 can be assumed Mar 7 00:54:25.145768 coreos-metadata[2215]: Mar 07 00:54:25.144 INFO Fetch successful Mar 7 00:54:25.145768 coreos-metadata[2215]: Mar 07 00:54:25.144 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 7 00:54:25.145768 coreos-metadata[2215]: Mar 07 00:54:25.145 INFO Fetch successful Mar 7 00:54:25.146099 containerd[2132]: time="2026-03-07T00:54:25.144784474Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:54:25.146099 containerd[2132]: time="2026-03-07T00:54:25.144857674Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 7 00:54:25.146099 containerd[2132]: time="2026-03-07T00:54:25.144898762Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 7 00:54:25.146099 containerd[2132]: time="2026-03-07T00:54:25.145299502Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 7 00:54:25.146099 containerd[2132]: time="2026-03-07T00:54:25.145351906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:25.146099 containerd[2132]: time="2026-03-07T00:54:25.145534942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:54:25.146099 containerd[2132]: time="2026-03-07T00:54:25.145587166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:25.151859 containerd[2132]: time="2026-03-07T00:54:25.150546718Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:54:25.152320 containerd[2132]: time="2026-03-07T00:54:25.152060398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 7 00:54:25.152320 containerd[2132]: time="2026-03-07T00:54:25.152128102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:54:25.152320 containerd[2132]: time="2026-03-07T00:54:25.152170330Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:25.154423 containerd[2132]: time="2026-03-07T00:54:25.153787426Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:25.154423 containerd[2132]: time="2026-03-07T00:54:25.154328458Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 7 00:54:25.153913 unknown[2215]: wrote ssh authorized keys file for user: core Mar 7 00:54:25.157152 containerd[2132]: time="2026-03-07T00:54:25.157092538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 7 00:54:25.160168 containerd[2132]: time="2026-03-07T00:54:25.158439970Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 7 00:54:25.168254 locksmithd[2162]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 7 00:54:25.175299 containerd[2132]: time="2026-03-07T00:54:25.171218326Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 7 00:54:25.175299 containerd[2132]: time="2026-03-07T00:54:25.171425098Z" level=info msg="metadata content store policy set" policy=shared Mar 7 00:54:25.202334 containerd[2132]: time="2026-03-07T00:54:25.202219354Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 7 00:54:25.203727 containerd[2132]: time="2026-03-07T00:54:25.203549938Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 7 00:54:25.205454 containerd[2132]: time="2026-03-07T00:54:25.203992126Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 7 00:54:25.205454 containerd[2132]: time="2026-03-07T00:54:25.204055234Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 7 00:54:25.205454 containerd[2132]: time="2026-03-07T00:54:25.204095350Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 7 00:54:25.205454 containerd[2132]: time="2026-03-07T00:54:25.204395938Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 7 00:54:25.211139 containerd[2132]: time="2026-03-07T00:54:25.209494354Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 7 00:54:25.214486 containerd[2132]: time="2026-03-07T00:54:25.212700551Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 7 00:54:25.214486 containerd[2132]: time="2026-03-07T00:54:25.212832419Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 7 00:54:25.214486 containerd[2132]: time="2026-03-07T00:54:25.212877443Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Mar 7 00:54:25.214486 containerd[2132]: time="2026-03-07T00:54:25.212958503Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 7 00:54:25.214486 containerd[2132]: time="2026-03-07T00:54:25.213001139Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 7 00:54:25.214486 containerd[2132]: time="2026-03-07T00:54:25.213032819Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 7 00:54:25.214486 containerd[2132]: time="2026-03-07T00:54:25.213084503Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 7 00:54:25.214486 containerd[2132]: time="2026-03-07T00:54:25.213119255Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 7 00:54:25.214486 containerd[2132]: time="2026-03-07T00:54:25.213155951Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 7 00:54:25.214486 containerd[2132]: time="2026-03-07T00:54:25.213189299Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 7 00:54:25.214486 containerd[2132]: time="2026-03-07T00:54:25.213223055Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 7 00:54:25.214486 containerd[2132]: time="2026-03-07T00:54:25.213273719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 7 00:54:25.214486 containerd[2132]: time="2026-03-07T00:54:25.213307187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Mar 7 00:54:25.214486 containerd[2132]: time="2026-03-07T00:54:25.213337931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 7 00:54:25.215265 containerd[2132]: time="2026-03-07T00:54:25.213386687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 7 00:54:25.215265 containerd[2132]: time="2026-03-07T00:54:25.213418391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 7 00:54:25.215265 containerd[2132]: time="2026-03-07T00:54:25.213451079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 7 00:54:25.215265 containerd[2132]: time="2026-03-07T00:54:25.213480455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 7 00:54:25.215265 containerd[2132]: time="2026-03-07T00:54:25.213519359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 7 00:54:25.215265 containerd[2132]: time="2026-03-07T00:54:25.213558095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 7 00:54:25.215265 containerd[2132]: time="2026-03-07T00:54:25.213596267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 7 00:54:25.215265 containerd[2132]: time="2026-03-07T00:54:25.213641015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 7 00:54:25.215265 containerd[2132]: time="2026-03-07T00:54:25.213672383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 7 00:54:25.215265 containerd[2132]: time="2026-03-07T00:54:25.213704471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 7 00:54:25.215265 containerd[2132]: time="2026-03-07T00:54:25.213785351Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 7 00:54:25.215265 containerd[2132]: time="2026-03-07T00:54:25.213837719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 7 00:54:25.215265 containerd[2132]: time="2026-03-07T00:54:25.213869039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 7 00:54:25.215265 containerd[2132]: time="2026-03-07T00:54:25.213896279Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 7 00:54:25.226667 containerd[2132]: time="2026-03-07T00:54:25.223751075Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 7 00:54:25.226667 containerd[2132]: time="2026-03-07T00:54:25.223881671Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 7 00:54:25.226667 containerd[2132]: time="2026-03-07T00:54:25.223983203Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 7 00:54:25.226667 containerd[2132]: time="2026-03-07T00:54:25.224023175Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 7 00:54:25.226667 containerd[2132]: time="2026-03-07T00:54:25.224050043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 7 00:54:25.226667 containerd[2132]: time="2026-03-07T00:54:25.224087591Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Mar 7 00:54:25.226667 containerd[2132]: time="2026-03-07T00:54:25.224114387Z" level=info msg="NRI interface is disabled by configuration." Mar 7 00:54:25.226667 containerd[2132]: time="2026-03-07T00:54:25.224143427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 7 00:54:25.227852 containerd[2132]: time="2026-03-07T00:54:25.224882195Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 7 00:54:25.227852 containerd[2132]: time="2026-03-07T00:54:25.225051323Z" level=info msg="Connect containerd service" Mar 7 00:54:25.227852 containerd[2132]: time="2026-03-07T00:54:25.225122867Z" level=info msg="using legacy CRI server" Mar 7 00:54:25.227852 containerd[2132]: time="2026-03-07T00:54:25.225141335Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 7 00:54:25.227852 containerd[2132]: time="2026-03-07T00:54:25.225319943Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 7 00:54:25.235020 containerd[2132]: time="2026-03-07T00:54:25.231736367Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 00:54:25.242992 amazon-ssm-agent[2165]: 2026-03-07 00:54:25 INFO Agent will take identity from EC2 Mar 7 00:54:25.243121 containerd[2132]: 
time="2026-03-07T00:54:25.242636747Z" level=info msg="Start subscribing containerd event" Mar 7 00:54:25.243121 containerd[2132]: time="2026-03-07T00:54:25.242762723Z" level=info msg="Start recovering state" Mar 7 00:54:25.259985 containerd[2132]: time="2026-03-07T00:54:25.255590615Z" level=info msg="Start event monitor" Mar 7 00:54:25.259985 containerd[2132]: time="2026-03-07T00:54:25.255669815Z" level=info msg="Start snapshots syncer" Mar 7 00:54:25.259985 containerd[2132]: time="2026-03-07T00:54:25.255697199Z" level=info msg="Start cni network conf syncer for default" Mar 7 00:54:25.259985 containerd[2132]: time="2026-03-07T00:54:25.255718739Z" level=info msg="Start streaming server" Mar 7 00:54:25.259985 containerd[2132]: time="2026-03-07T00:54:25.256701347Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 7 00:54:25.259985 containerd[2132]: time="2026-03-07T00:54:25.256880963Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 7 00:54:25.259985 containerd[2132]: time="2026-03-07T00:54:25.257074823Z" level=info msg="containerd successfully booted in 0.367987s" Mar 7 00:54:25.271718 systemd[1]: Started containerd.service - containerd container runtime. Mar 7 00:54:25.277791 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 7 00:54:25.276672 dbus-daemon[2084]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 7 00:54:25.285131 dbus-daemon[2084]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2151 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 7 00:54:25.298653 systemd[1]: Starting polkit.service - Authorization Manager... 
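The CRI plugin's "cni config load failed: no network config found in /etc/cni/net.d" error above is reported because no CNI network config file exists yet. As an illustration only, this sketch writes a hypothetical minimal conflist of the general shape that directory is expected to hold (the network name, file name, and subnet are invented for the example, not taken from this log, and it is written to the current directory rather than /etc/cni/net.d):

```shell
# Hypothetical minimal CNI conflist; name/subnet are illustrative,
# not from the log. Written to the current directory for safety.
cat > 10-examplenet.conflist <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    }
  ]
}
EOF
```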
Mar 7 00:54:25.343994 amazon-ssm-agent[2165]: 2026-03-07 00:54:25 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 7 00:54:25.355014 update-ssh-keys[2260]: Updated "/home/core/.ssh/authorized_keys" Mar 7 00:54:25.360633 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 7 00:54:25.386179 systemd[1]: Finished sshkeys.service. Mar 7 00:54:25.433323 polkitd[2265]: Started polkitd version 121 Mar 7 00:54:25.442373 amazon-ssm-agent[2165]: 2026-03-07 00:54:25 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 7 00:54:25.475129 polkitd[2265]: Loading rules from directory /etc/polkit-1/rules.d Mar 7 00:54:25.475283 polkitd[2265]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 7 00:54:25.483459 polkitd[2265]: Finished loading, compiling and executing 2 rules Mar 7 00:54:25.492036 dbus-daemon[2084]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 7 00:54:25.492578 systemd[1]: Started polkit.service - Authorization Manager. Mar 7 00:54:25.497545 polkitd[2265]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 7 00:54:25.541725 amazon-ssm-agent[2165]: 2026-03-07 00:54:25 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 7 00:54:25.598104 systemd-hostnamed[2151]: Hostname set to (transient) Mar 7 00:54:25.599230 systemd-resolved[2027]: System hostname changed to 'ip-172-31-21-232'. Mar 7 00:54:25.641059 amazon-ssm-agent[2165]: 2026-03-07 00:54:25 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 7 00:54:25.742994 amazon-ssm-agent[2165]: 2026-03-07 00:54:25 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 7 00:54:25.842690 amazon-ssm-agent[2165]: 2026-03-07 00:54:25 INFO [amazon-ssm-agent] Starting Core Agent Mar 7 00:54:25.944205 amazon-ssm-agent[2165]: 2026-03-07 00:54:25 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Mar 7 00:54:26.026343 sshd_keygen[2115]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 7 00:54:26.044293 amazon-ssm-agent[2165]: 2026-03-07 00:54:25 INFO [Registrar] Starting registrar module Mar 7 00:54:26.137293 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 7 00:54:26.144959 amazon-ssm-agent[2165]: 2026-03-07 00:54:25 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 7 00:54:26.155485 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 7 00:54:26.173454 systemd[1]: Started sshd@0-172.31.21.232:22-20.161.92.111:42330.service - OpenSSH per-connection server daemon (20.161.92.111:42330). Mar 7 00:54:26.252293 systemd[1]: issuegen.service: Deactivated successfully. Mar 7 00:54:26.252923 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 7 00:54:26.271439 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 7 00:54:26.337823 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 7 00:54:26.360721 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 7 00:54:26.368114 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 7 00:54:26.372469 systemd[1]: Reached target getty.target - Login Prompts. Mar 7 00:54:26.793997 tar[2120]: linux-arm64/README.md Mar 7 00:54:26.812105 sshd[2346]: Accepted publickey for core from 20.161.92.111 port 42330 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:54:26.819872 sshd[2346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:54:26.833130 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 7 00:54:26.863051 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 7 00:54:26.874457 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Mar 7 00:54:26.885074 systemd-logind[2105]: New session 1 of user core. Mar 7 00:54:26.959337 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 7 00:54:26.975583 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 7 00:54:26.989277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:54:26.999127 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 7 00:54:27.026859 (kubelet)[2375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:54:27.030987 (systemd)[2373]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 7 00:54:27.065909 amazon-ssm-agent[2165]: 2026-03-07 00:54:27 INFO [EC2Identity] EC2 registration was successful. Mar 7 00:54:27.112027 amazon-ssm-agent[2165]: 2026-03-07 00:54:27 INFO [CredentialRefresher] credentialRefresher has started Mar 7 00:54:27.112027 amazon-ssm-agent[2165]: 2026-03-07 00:54:27 INFO [CredentialRefresher] Starting credentials refresher loop Mar 7 00:54:27.112027 amazon-ssm-agent[2165]: 2026-03-07 00:54:27 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 7 00:54:27.168119 amazon-ssm-agent[2165]: 2026-03-07 00:54:27 INFO [CredentialRefresher] Next credential rotation will be in 31.908296627733332 minutes Mar 7 00:54:27.304174 systemd[2373]: Queued start job for default target default.target. Mar 7 00:54:27.305071 systemd[2373]: Created slice app.slice - User Application Slice. Mar 7 00:54:27.305132 systemd[2373]: Reached target paths.target - Paths. Mar 7 00:54:27.305165 systemd[2373]: Reached target timers.target - Timers. Mar 7 00:54:27.314283 systemd[2373]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 7 00:54:27.336922 systemd[2373]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Mar 7 00:54:27.337129 systemd[2373]: Reached target sockets.target - Sockets. Mar 7 00:54:27.337165 systemd[2373]: Reached target basic.target - Basic System. Mar 7 00:54:27.338373 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 7 00:54:27.341167 systemd[2373]: Reached target default.target - Main User Target. Mar 7 00:54:27.341268 systemd[2373]: Startup finished in 295ms. Mar 7 00:54:27.352812 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 7 00:54:27.357522 systemd[1]: Startup finished in 9.332s (kernel) + 10.500s (userspace) = 19.832s. Mar 7 00:54:27.746856 systemd[1]: Started sshd@1-172.31.21.232:22-20.161.92.111:42332.service - OpenSSH per-connection server daemon (20.161.92.111:42332). Mar 7 00:54:28.107331 kubelet[2375]: E0307 00:54:28.107136 2375 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:54:28.112132 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:54:28.114454 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
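The "Startup finished" record above reports kernel and userspace time plus their sum. A throwaway sketch (not part of the boot log) confirming the arithmetic:

```shell
# Check the systemd startup arithmetic reported in the log:
# 9.332s (kernel) + 10.500s (userspace) = 19.832s
awk 'BEGIN { printf "%.3f\n", 9.332 + 10.500 }'
```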
Mar 7 00:54:28.148956 amazon-ssm-agent[2165]: 2026-03-07 00:54:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 7 00:54:28.249285 amazon-ssm-agent[2165]: 2026-03-07 00:54:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2402) started Mar 7 00:54:28.261231 sshd[2397]: Accepted publickey for core from 20.161.92.111 port 42332 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:54:28.264186 sshd[2397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:54:28.276320 systemd-logind[2105]: New session 2 of user core. Mar 7 00:54:28.285122 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 7 00:54:28.349980 amazon-ssm-agent[2165]: 2026-03-07 00:54:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 7 00:54:28.631239 sshd[2397]: pam_unix(sshd:session): session closed for user core Mar 7 00:54:28.638572 systemd[1]: sshd@1-172.31.21.232:22-20.161.92.111:42332.service: Deactivated successfully. Mar 7 00:54:28.644493 systemd[1]: session-2.scope: Deactivated successfully. Mar 7 00:54:28.646095 systemd-logind[2105]: Session 2 logged out. Waiting for processes to exit. Mar 7 00:54:28.648802 systemd-logind[2105]: Removed session 2. Mar 7 00:54:28.715539 systemd[1]: Started sshd@2-172.31.21.232:22-20.161.92.111:42344.service - OpenSSH per-connection server daemon (20.161.92.111:42344). Mar 7 00:54:29.235895 sshd[2418]: Accepted publickey for core from 20.161.92.111 port 42344 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:54:29.237658 sshd[2418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:54:29.246016 systemd-logind[2105]: New session 3 of user core. Mar 7 00:54:29.249464 systemd[1]: Started session-3.scope - Session 3 of User core. 
Mar 7 00:54:29.589236 sshd[2418]: pam_unix(sshd:session): session closed for user core Mar 7 00:54:29.595128 systemd-logind[2105]: Session 3 logged out. Waiting for processes to exit. Mar 7 00:54:29.598598 systemd[1]: sshd@2-172.31.21.232:22-20.161.92.111:42344.service: Deactivated successfully. Mar 7 00:54:29.605433 systemd[1]: session-3.scope: Deactivated successfully. Mar 7 00:54:29.607952 systemd-logind[2105]: Removed session 3. Mar 7 00:54:29.676439 systemd[1]: Started sshd@3-172.31.21.232:22-20.161.92.111:42352.service - OpenSSH per-connection server daemon (20.161.92.111:42352). Mar 7 00:54:30.195041 sshd[2426]: Accepted publickey for core from 20.161.92.111 port 42352 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:54:30.197737 sshd[2426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:54:30.206969 systemd-logind[2105]: New session 4 of user core. Mar 7 00:54:30.212480 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 7 00:54:30.561297 sshd[2426]: pam_unix(sshd:session): session closed for user core Mar 7 00:54:30.571161 systemd[1]: sshd@3-172.31.21.232:22-20.161.92.111:42352.service: Deactivated successfully. Mar 7 00:54:30.576498 systemd[1]: session-4.scope: Deactivated successfully. Mar 7 00:54:30.577890 systemd-logind[2105]: Session 4 logged out. Waiting for processes to exit. Mar 7 00:54:30.579652 systemd-logind[2105]: Removed session 4. Mar 7 00:54:30.646459 systemd[1]: Started sshd@4-172.31.21.232:22-20.161.92.111:46984.service - OpenSSH per-connection server daemon (20.161.92.111:46984). Mar 7 00:54:31.157555 sshd[2434]: Accepted publickey for core from 20.161.92.111 port 46984 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:54:31.159266 sshd[2434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:54:31.167652 systemd-logind[2105]: New session 5 of user core. 
Mar 7 00:54:31.177437 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 7 00:54:31.458795 sudo[2438]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 7 00:54:31.459480 sudo[2438]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:54:31.478689 sudo[2438]: pam_unix(sudo:session): session closed for user root Mar 7 00:54:31.558065 sshd[2434]: pam_unix(sshd:session): session closed for user core Mar 7 00:54:31.566649 systemd[1]: sshd@4-172.31.21.232:22-20.161.92.111:46984.service: Deactivated successfully. Mar 7 00:54:31.572703 systemd[1]: session-5.scope: Deactivated successfully. Mar 7 00:54:31.574357 systemd-logind[2105]: Session 5 logged out. Waiting for processes to exit. Mar 7 00:54:31.576118 systemd-logind[2105]: Removed session 5. Mar 7 00:54:31.649831 systemd[1]: Started sshd@5-172.31.21.232:22-20.161.92.111:46992.service - OpenSSH per-connection server daemon (20.161.92.111:46992). Mar 7 00:54:32.172660 sshd[2443]: Accepted publickey for core from 20.161.92.111 port 46992 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:54:32.174459 sshd[2443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:54:32.181958 systemd-logind[2105]: New session 6 of user core. Mar 7 00:54:32.190409 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 7 00:54:32.460821 sudo[2448]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 7 00:54:32.461535 sudo[2448]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:54:32.467799 sudo[2448]: pam_unix(sudo:session): session closed for user root Mar 7 00:54:32.478189 sudo[2447]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 7 00:54:32.478851 sudo[2447]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:54:32.501504 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 7 00:54:32.518209 auditctl[2451]: No rules Mar 7 00:54:32.519860 systemd[1]: audit-rules.service: Deactivated successfully. Mar 7 00:54:32.521546 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 7 00:54:32.533750 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 7 00:54:32.577249 augenrules[2470]: No rules Mar 7 00:54:32.581338 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 7 00:54:32.585789 sudo[2447]: pam_unix(sudo:session): session closed for user root Mar 7 00:54:32.669252 sshd[2443]: pam_unix(sshd:session): session closed for user core Mar 7 00:54:32.675311 systemd[1]: sshd@5-172.31.21.232:22-20.161.92.111:46992.service: Deactivated successfully. Mar 7 00:54:32.676229 systemd-logind[2105]: Session 6 logged out. Waiting for processes to exit. Mar 7 00:54:32.684825 systemd[1]: session-6.scope: Deactivated successfully. Mar 7 00:54:32.686578 systemd-logind[2105]: Removed session 6. Mar 7 00:54:32.753437 systemd[1]: Started sshd@6-172.31.21.232:22-20.161.92.111:46998.service - OpenSSH per-connection server daemon (20.161.92.111:46998). 
Mar 7 00:54:33.256783 sshd[2479]: Accepted publickey for core from 20.161.92.111 port 46998 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:54:33.258548 sshd[2479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:54:33.267256 systemd-logind[2105]: New session 7 of user core. Mar 7 00:54:33.270488 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 7 00:54:33.537869 sudo[2483]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 7 00:54:33.539312 sudo[2483]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 7 00:54:34.069446 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 7 00:54:34.086752 (dockerd)[2498]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 7 00:54:34.516080 dockerd[2498]: time="2026-03-07T00:54:34.515441378Z" level=info msg="Starting up" Mar 7 00:54:34.932595 systemd[1]: var-lib-docker-metacopy\x2dcheck2642894768-merged.mount: Deactivated successfully. Mar 7 00:54:34.985383 dockerd[2498]: time="2026-03-07T00:54:34.985321260Z" level=info msg="Loading containers: start." Mar 7 00:54:35.168993 kernel: Initializing XFRM netlink socket Mar 7 00:54:35.205738 (udev-worker)[2519]: Network interface NamePolicy= disabled on kernel command line. Mar 7 00:54:35.301609 systemd-networkd[1695]: docker0: Link UP Mar 7 00:54:35.332473 dockerd[2498]: time="2026-03-07T00:54:35.332400219Z" level=info msg="Loading containers: done." 
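The sudo records in this log all share the `COMMAND=` field format. A small sketch extracting the commands the "core" user ran as root — the sample lines are condensed copies of records from the log above, and the file name is illustrative:

```shell
# List commands run via sudo, from a captured copy of the journal
# (sample lines condensed from the log above).
cat > sudo-sample.log <<'EOF'
sudo[2438]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
sudo[2447]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
sudo[2483]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
EOF
sed -n 's/.*COMMAND=//p' sudo-sample.log
```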
Mar 7 00:54:35.380001 dockerd[2498]: time="2026-03-07T00:54:35.379491966Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 7 00:54:35.380001 dockerd[2498]: time="2026-03-07T00:54:35.379647311Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 7 00:54:35.380001 dockerd[2498]: time="2026-03-07T00:54:35.379830355Z" level=info msg="Daemon has completed initialization" Mar 7 00:54:35.448341 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 7 00:54:35.450791 dockerd[2498]: time="2026-03-07T00:54:35.448361900Z" level=info msg="API listen on /run/docker.sock" Mar 7 00:54:36.326980 containerd[2132]: time="2026-03-07T00:54:36.326633799Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 7 00:54:37.005860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4025456869.mount: Deactivated successfully. Mar 7 00:54:38.363220 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 7 00:54:38.374303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 7 00:54:38.770644 containerd[2132]: time="2026-03-07T00:54:38.769608164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:38.778740 containerd[2132]: time="2026-03-07T00:54:38.778162727Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174" Mar 7 00:54:38.787983 containerd[2132]: time="2026-03-07T00:54:38.787456380Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:38.806339 containerd[2132]: time="2026-03-07T00:54:38.806273456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:38.810612 containerd[2132]: time="2026-03-07T00:54:38.810512402Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 2.483797743s" Mar 7 00:54:38.812106 containerd[2132]: time="2026-03-07T00:54:38.812040142Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\"" Mar 7 00:54:38.815913 containerd[2132]: time="2026-03-07T00:54:38.815536469Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 7 00:54:38.849389 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 7 00:54:38.850655 (kubelet)[2707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 7 00:54:38.935777 kubelet[2707]: E0307 00:54:38.935671 2707 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 7 00:54:38.943865 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 7 00:54:38.945539 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 7 00:54:40.364290 containerd[2132]: time="2026-03-07T00:54:40.364200901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:40.371454 containerd[2132]: time="2026-03-07T00:54:40.371360307Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106" Mar 7 00:54:40.373316 containerd[2132]: time="2026-03-07T00:54:40.373216099Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:40.382034 containerd[2132]: time="2026-03-07T00:54:40.381914230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:40.384863 containerd[2132]: time="2026-03-07T00:54:40.384345685Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.56871773s" Mar 7 00:54:40.384863 containerd[2132]: time="2026-03-07T00:54:40.384416280Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\"" Mar 7 00:54:40.385376 containerd[2132]: time="2026-03-07T00:54:40.385319732Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 7 00:54:41.758607 containerd[2132]: time="2026-03-07T00:54:41.758523748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:41.763037 containerd[2132]: time="2026-03-07T00:54:41.761450799Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305" Mar 7 00:54:41.763173 containerd[2132]: time="2026-03-07T00:54:41.763112873Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:41.770998 containerd[2132]: time="2026-03-07T00:54:41.770083317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:41.772513 containerd[2132]: time="2026-03-07T00:54:41.772444681Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size 
\"19885727\" in 1.385624187s" Mar 7 00:54:41.772608 containerd[2132]: time="2026-03-07T00:54:41.772510774Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\"" Mar 7 00:54:41.773271 containerd[2132]: time="2026-03-07T00:54:41.773210664Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 7 00:54:43.119709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1919846227.mount: Deactivated successfully. Mar 7 00:54:43.799055 containerd[2132]: time="2026-03-07T00:54:43.798881307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:43.803366 containerd[2132]: time="2026-03-07T00:54:43.803233218Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870" Mar 7 00:54:43.804388 containerd[2132]: time="2026-03-07T00:54:43.804347243Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:43.809579 containerd[2132]: time="2026-03-07T00:54:43.809510651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:54:43.811072 containerd[2132]: time="2026-03-07T00:54:43.811004114Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 2.037720969s" Mar 7 00:54:43.811190 containerd[2132]: 
time="2026-03-07T00:54:43.811078131Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 7 00:54:43.811811 containerd[2132]: time="2026-03-07T00:54:43.811754825Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 7 00:54:44.348408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3421519252.mount: Deactivated successfully.
Mar 7 00:54:45.448845 containerd[2132]: time="2026-03-07T00:54:45.448761675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:54:45.452581 containerd[2132]: time="2026-03-07T00:54:45.452515747Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Mar 7 00:54:45.456119 containerd[2132]: time="2026-03-07T00:54:45.456049112Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:54:45.464170 containerd[2132]: time="2026-03-07T00:54:45.463498223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:54:45.466455 containerd[2132]: time="2026-03-07T00:54:45.465854893Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.654039774s"
Mar 7 00:54:45.466455 containerd[2132]: time="2026-03-07T00:54:45.465924564Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 7 00:54:45.466661 containerd[2132]: time="2026-03-07T00:54:45.466581484Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 7 00:54:45.985354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4244628415.mount: Deactivated successfully.
Mar 7 00:54:45.998213 containerd[2132]: time="2026-03-07T00:54:45.997806516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:54:46.000808 containerd[2132]: time="2026-03-07T00:54:46.000383327Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 7 00:54:46.002992 containerd[2132]: time="2026-03-07T00:54:46.002918202Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:54:46.008794 containerd[2132]: time="2026-03-07T00:54:46.008745687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:54:46.010977 containerd[2132]: time="2026-03-07T00:54:46.010447633Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 543.818581ms"
Mar 7 00:54:46.010977 containerd[2132]: time="2026-03-07T00:54:46.010502344Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Mar 7 00:54:46.011382 containerd[2132]: time="2026-03-07T00:54:46.011331191Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 7 00:54:46.605415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1103742663.mount: Deactivated successfully.
Mar 7 00:54:48.846697 containerd[2132]: time="2026-03-07T00:54:48.846612811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:54:48.849695 containerd[2132]: time="2026-03-07T00:54:48.849626725Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780"
Mar 7 00:54:48.852703 containerd[2132]: time="2026-03-07T00:54:48.852635465Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:54:48.860604 containerd[2132]: time="2026-03-07T00:54:48.860517536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 7 00:54:48.863920 containerd[2132]: time="2026-03-07T00:54:48.863051786Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.851663231s"
Mar 7 00:54:48.863920 containerd[2132]: time="2026-03-07T00:54:48.863115382Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\""
Mar 7 00:54:49.044505 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 7 00:54:49.055419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:54:49.473299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:54:49.485562 (kubelet)[2884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 7 00:54:49.569991 kubelet[2884]: E0307 00:54:49.562416 2884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 7 00:54:49.568216 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 7 00:54:49.568676 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 7 00:54:55.638436 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 7 00:54:57.519523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:54:57.529475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:54:57.589267 systemd[1]: Reloading requested from client PID 2904 ('systemctl') (unit session-7.scope)...
Mar 7 00:54:57.589301 systemd[1]: Reloading...
Mar 7 00:54:57.830999 zram_generator::config[2947]: No configuration found.
Mar 7 00:54:58.099367 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 7 00:54:58.274119 systemd[1]: Reloading finished in 684 ms.
Mar 7 00:54:58.369410 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 7 00:54:58.369619 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 7 00:54:58.371308 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:54:58.387456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 7 00:54:58.738288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 7 00:54:58.751623 (kubelet)[3019]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 7 00:54:58.825976 kubelet[3019]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 00:54:58.825976 kubelet[3019]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 7 00:54:58.825976 kubelet[3019]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 7 00:54:58.825976 kubelet[3019]: I0307 00:54:58.824856 3019 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 7 00:55:01.081436 kubelet[3019]: I0307 00:55:01.081379 3019 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 7 00:55:01.082208 kubelet[3019]: I0307 00:55:01.082175 3019 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 7 00:55:01.084076 kubelet[3019]: I0307 00:55:01.083734 3019 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 7 00:55:01.136347 kubelet[3019]: E0307 00:55:01.136278 3019 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.21.232:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.232:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 7 00:55:01.138319 kubelet[3019]: I0307 00:55:01.138260 3019 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 7 00:55:01.152841 kubelet[3019]: E0307 00:55:01.152768 3019 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 7 00:55:01.153037 kubelet[3019]: I0307 00:55:01.152825 3019 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 7 00:55:01.159741 kubelet[3019]: I0307 00:55:01.159664 3019 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 7 00:55:01.160585 kubelet[3019]: I0307 00:55:01.160521 3019 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 7 00:55:01.160907 kubelet[3019]: I0307 00:55:01.160573 3019 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-232","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 7 00:55:01.160907 kubelet[3019]: I0307 00:55:01.160908 3019 topology_manager.go:138] "Creating topology manager with none policy"
Mar 7 00:55:01.161184 kubelet[3019]: I0307 00:55:01.160930 3019 container_manager_linux.go:303] "Creating device plugin manager"
Mar 7 00:55:01.161350 kubelet[3019]: I0307 00:55:01.161318 3019 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 00:55:01.167344 kubelet[3019]: I0307 00:55:01.167278 3019 kubelet.go:480] "Attempting to sync node with API server"
Mar 7 00:55:01.167567 kubelet[3019]: I0307 00:55:01.167519 3019 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 7 00:55:01.169287 kubelet[3019]: I0307 00:55:01.169239 3019 kubelet.go:386] "Adding apiserver pod source"
Mar 7 00:55:01.169507 kubelet[3019]: I0307 00:55:01.169296 3019 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 7 00:55:01.177421 kubelet[3019]: I0307 00:55:01.177312 3019 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 7 00:55:01.179567 kubelet[3019]: I0307 00:55:01.178671 3019 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 7 00:55:01.179567 kubelet[3019]: W0307 00:55:01.179041 3019 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 7 00:55:01.184006 kubelet[3019]: I0307 00:55:01.183834 3019 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 7 00:55:01.184006 kubelet[3019]: I0307 00:55:01.183911 3019 server.go:1289] "Started kubelet"
Mar 7 00:55:01.184378 kubelet[3019]: E0307 00:55:01.184229 3019 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.21.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-232&limit=500&resourceVersion=0\": dial tcp 172.31.21.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 00:55:01.189993 kubelet[3019]: I0307 00:55:01.189151 3019 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 7 00:55:01.189993 kubelet[3019]: I0307 00:55:01.189712 3019 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 7 00:55:01.189993 kubelet[3019]: I0307 00:55:01.189795 3019 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 7 00:55:01.192812 kubelet[3019]: I0307 00:55:01.192775 3019 server.go:317] "Adding debug handlers to kubelet server"
Mar 7 00:55:01.198402 kubelet[3019]: I0307 00:55:01.198355 3019 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 7 00:55:01.208066 kubelet[3019]: I0307 00:55:01.208029 3019 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 7 00:55:01.208639 kubelet[3019]: E0307 00:55:01.208605 3019 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-232\" not found"
Mar 7 00:55:01.210441 kubelet[3019]: I0307 00:55:01.209174 3019 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 7 00:55:01.210441 kubelet[3019]: I0307 00:55:01.209285 3019 reconciler.go:26] "Reconciler: start to sync state"
Mar 7 00:55:01.210441 kubelet[3019]: I0307 00:55:01.209893 3019 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 7 00:55:01.211064 kubelet[3019]: E0307 00:55:01.211027 3019 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.21.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 00:55:01.211457 kubelet[3019]: I0307 00:55:01.211427 3019 factory.go:223] Registration of the systemd container factory successfully
Mar 7 00:55:01.211733 kubelet[3019]: I0307 00:55:01.211703 3019 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 7 00:55:01.216145 kubelet[3019]: E0307 00:55:01.213605 3019 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.232:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.232:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-232.189a690b70ed4af1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-232,UID:ip-172-31-21-232,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-232,},FirstTimestamp:2026-03-07 00:55:01.183867633 +0000 UTC m=+2.425020078,LastTimestamp:2026-03-07 00:55:01.183867633 +0000 UTC m=+2.425020078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-232,}"
Mar 7 00:55:01.216371 kubelet[3019]: E0307 00:55:01.216323 3019 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.21.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 00:55:01.216725 kubelet[3019]: E0307 00:55:01.216437 3019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-232?timeout=10s\": dial tcp 172.31.21.232:6443: connect: connection refused" interval="200ms"
Mar 7 00:55:01.222350 kubelet[3019]: I0307 00:55:01.222292 3019 factory.go:223] Registration of the containerd container factory successfully
Mar 7 00:55:01.234193 kubelet[3019]: E0307 00:55:01.234151 3019 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 7 00:55:01.267008 kubelet[3019]: I0307 00:55:01.266755 3019 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 7 00:55:01.268991 kubelet[3019]: I0307 00:55:01.268957 3019 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 7 00:55:01.269158 kubelet[3019]: I0307 00:55:01.269141 3019 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 7 00:55:01.269287 kubelet[3019]: I0307 00:55:01.269268 3019 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 7 00:55:01.269414 kubelet[3019]: I0307 00:55:01.269395 3019 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 7 00:55:01.269572 kubelet[3019]: E0307 00:55:01.269543 3019 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 7 00:55:01.275569 kubelet[3019]: I0307 00:55:01.275534 3019 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 7 00:55:01.275755 kubelet[3019]: I0307 00:55:01.275733 3019 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 7 00:55:01.275898 kubelet[3019]: I0307 00:55:01.275880 3019 state_mem.go:36] "Initialized new in-memory state store"
Mar 7 00:55:01.278413 kubelet[3019]: I0307 00:55:01.278376 3019 policy_none.go:49] "None policy: Start"
Mar 7 00:55:01.278587 kubelet[3019]: I0307 00:55:01.278568 3019 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 7 00:55:01.278728 kubelet[3019]: I0307 00:55:01.278709 3019 state_mem.go:35] "Initializing new in-memory state store"
Mar 7 00:55:01.285808 kubelet[3019]: E0307 00:55:01.285734 3019 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.21.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 00:55:01.293612 kubelet[3019]: E0307 00:55:01.293556 3019 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 7 00:55:01.293898 kubelet[3019]: I0307 00:55:01.293863 3019 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 7 00:55:01.293987 kubelet[3019]: I0307 00:55:01.293895 3019 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 7 00:55:01.300259 kubelet[3019]: I0307 00:55:01.300211 3019 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 7 00:55:01.302475 kubelet[3019]: E0307 00:55:01.302418 3019 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 7 00:55:01.302644 kubelet[3019]: E0307 00:55:01.302514 3019 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-232\" not found"
Mar 7 00:55:01.384588 kubelet[3019]: E0307 00:55:01.384223 3019 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-232\" not found" node="ip-172-31-21-232"
Mar 7 00:55:01.398574 kubelet[3019]: E0307 00:55:01.396981 3019 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-232\" not found" node="ip-172-31-21-232"
Mar 7 00:55:01.398574 kubelet[3019]: E0307 00:55:01.398109 3019 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-232\" not found" node="ip-172-31-21-232"
Mar 7 00:55:01.404379 kubelet[3019]: I0307 00:55:01.404288 3019 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-232"
Mar 7 00:55:01.405254 kubelet[3019]: E0307 00:55:01.405181 3019 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.232:6443/api/v1/nodes\": dial tcp 172.31.21.232:6443: connect: connection refused" node="ip-172-31-21-232"
Mar 7 00:55:01.410785 kubelet[3019]: I0307 00:55:01.410686 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e2ec30414068819167aba0f6cd215250-ca-certs\") pod \"kube-apiserver-ip-172-31-21-232\" (UID: \"e2ec30414068819167aba0f6cd215250\") " pod="kube-system/kube-apiserver-ip-172-31-21-232"
Mar 7 00:55:01.411060 kubelet[3019]: I0307 00:55:01.411032 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e2ec30414068819167aba0f6cd215250-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-232\" (UID: \"e2ec30414068819167aba0f6cd215250\") " pod="kube-system/kube-apiserver-ip-172-31-21-232"
Mar 7 00:55:01.411379 kubelet[3019]: I0307 00:55:01.411238 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bdc4ca9cda49d6ea874260e18d34154f-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-232\" (UID: \"bdc4ca9cda49d6ea874260e18d34154f\") " pod="kube-system/kube-controller-manager-ip-172-31-21-232"
Mar 7 00:55:01.411379 kubelet[3019]: I0307 00:55:01.411324 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bdc4ca9cda49d6ea874260e18d34154f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-232\" (UID: \"bdc4ca9cda49d6ea874260e18d34154f\") " pod="kube-system/kube-controller-manager-ip-172-31-21-232"
Mar 7 00:55:01.411619 kubelet[3019]: I0307 00:55:01.411543 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bdc4ca9cda49d6ea874260e18d34154f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-232\" (UID: \"bdc4ca9cda49d6ea874260e18d34154f\") " pod="kube-system/kube-controller-manager-ip-172-31-21-232"
Mar 7 00:55:01.411875 kubelet[3019]: I0307 00:55:01.411738 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e2ec30414068819167aba0f6cd215250-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-232\" (UID: \"e2ec30414068819167aba0f6cd215250\") " pod="kube-system/kube-apiserver-ip-172-31-21-232"
Mar 7 00:55:01.411875 kubelet[3019]: I0307 00:55:01.411821 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bdc4ca9cda49d6ea874260e18d34154f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-232\" (UID: \"bdc4ca9cda49d6ea874260e18d34154f\") " pod="kube-system/kube-controller-manager-ip-172-31-21-232"
Mar 7 00:55:01.412150 kubelet[3019]: I0307 00:55:01.412067 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bdc4ca9cda49d6ea874260e18d34154f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-232\" (UID: \"bdc4ca9cda49d6ea874260e18d34154f\") " pod="kube-system/kube-controller-manager-ip-172-31-21-232"
Mar 7 00:55:01.412304 kubelet[3019]: I0307 00:55:01.412239 3019 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/169d97534dfedf217d6594a5164987b3-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-232\" (UID: \"169d97534dfedf217d6594a5164987b3\") " pod="kube-system/kube-scheduler-ip-172-31-21-232"
Mar 7 00:55:01.417529 kubelet[3019]: E0307 00:55:01.417472 3019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-232?timeout=10s\": dial tcp 172.31.21.232:6443: connect: connection refused" interval="400ms"
Mar 7 00:55:01.607620 kubelet[3019]: I0307 00:55:01.607142 3019 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-232"
Mar 7 00:55:01.607620 kubelet[3019]: E0307 00:55:01.607585 3019 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.232:6443/api/v1/nodes\": dial tcp 172.31.21.232:6443: connect: connection refused" node="ip-172-31-21-232"
Mar 7 00:55:01.686285 containerd[2132]: time="2026-03-07T00:55:01.685964188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-232,Uid:e2ec30414068819167aba0f6cd215250,Namespace:kube-system,Attempt:0,}"
Mar 7 00:55:01.699470 containerd[2132]: time="2026-03-07T00:55:01.698824595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-232,Uid:169d97534dfedf217d6594a5164987b3,Namespace:kube-system,Attempt:0,}"
Mar 7 00:55:01.699470 containerd[2132]: time="2026-03-07T00:55:01.699084224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-232,Uid:bdc4ca9cda49d6ea874260e18d34154f,Namespace:kube-system,Attempt:0,}"
Mar 7 00:55:01.818975 kubelet[3019]: E0307 00:55:01.818887 3019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-232?timeout=10s\": dial tcp 172.31.21.232:6443: connect: connection refused" interval="800ms"
Mar 7 00:55:02.011982 kubelet[3019]: I0307 00:55:02.011651 3019 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-232"
Mar 7 00:55:02.012309 kubelet[3019]: E0307 00:55:02.012209 3019 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.232:6443/api/v1/nodes\": dial tcp 172.31.21.232:6443: connect: connection refused" node="ip-172-31-21-232"
Mar 7 00:55:02.142153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2066577255.mount: Deactivated successfully.
Mar 7 00:55:02.149537 containerd[2132]: time="2026-03-07T00:55:02.149479064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 00:55:02.151410 containerd[2132]: time="2026-03-07T00:55:02.151366696Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 00:55:02.153481 containerd[2132]: time="2026-03-07T00:55:02.153439508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 00:55:02.153637 containerd[2132]: time="2026-03-07T00:55:02.153609236Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Mar 7 00:55:02.153852 containerd[2132]: time="2026-03-07T00:55:02.153793445Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 00:55:02.156085 containerd[2132]: time="2026-03-07T00:55:02.155907209Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 7 00:55:02.165998 containerd[2132]: time="2026-03-07T00:55:02.164270877Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 00:55:02.171632 containerd[2132]: time="2026-03-07T00:55:02.171550139Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 472.515739ms"
Mar 7 00:55:02.176166 containerd[2132]: time="2026-03-07T00:55:02.176077577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 7 00:55:02.181297 containerd[2132]: time="2026-03-07T00:55:02.180894012Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 494.798528ms"
Mar 7 00:55:02.196160 containerd[2132]: time="2026-03-07T00:55:02.195707988Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 496.527055ms"
Mar 7 00:55:02.254588 kubelet[3019]: E0307 00:55:02.254519 3019 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.21.232:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 7 00:55:02.384468 containerd[2132]: time="2026-03-07T00:55:02.384135280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 00:55:02.384728 containerd[2132]: time="2026-03-07T00:55:02.384313316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 00:55:02.384728 containerd[2132]: time="2026-03-07T00:55:02.384393577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:55:02.389203 containerd[2132]: time="2026-03-07T00:55:02.388167158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 00:55:02.389203 containerd[2132]: time="2026-03-07T00:55:02.388315529Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 00:55:02.389203 containerd[2132]: time="2026-03-07T00:55:02.388341858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:55:02.389203 containerd[2132]: time="2026-03-07T00:55:02.388508825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:55:02.389203 containerd[2132]: time="2026-03-07T00:55:02.384658981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:55:02.400150 containerd[2132]: time="2026-03-07T00:55:02.396590604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 7 00:55:02.400150 containerd[2132]: time="2026-03-07T00:55:02.396910900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 7 00:55:02.400150 containerd[2132]: time="2026-03-07T00:55:02.397045632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:55:02.400150 containerd[2132]: time="2026-03-07T00:55:02.398407796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 7 00:55:02.456431 kubelet[3019]: E0307 00:55:02.455901 3019 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.21.232:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-232&limit=500&resourceVersion=0\": dial tcp 172.31.21.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 7 00:55:02.553761 containerd[2132]: time="2026-03-07T00:55:02.553491797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-232,Uid:bdc4ca9cda49d6ea874260e18d34154f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ea7598bee3ac8c5cb6b183e9534c64840bbfcc24c0f81350c1b7eb0077467da\""
Mar 7 00:55:02.566721 containerd[2132]: time="2026-03-07T00:55:02.566462287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-232,Uid:e2ec30414068819167aba0f6cd215250,Namespace:kube-system,Attempt:0,} returns sandbox id \"125875b94b8ba0d6371cfa7c3fd71294abe6248a2401b0e7c1925e977fa6cf99\""
Mar 7 00:55:02.572548 containerd[2132]: time="2026-03-07T00:55:02.572323568Z" level=info msg="CreateContainer within sandbox \"0ea7598bee3ac8c5cb6b183e9534c64840bbfcc24c0f81350c1b7eb0077467da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 7 00:55:02.573051 containerd[2132]: time="2026-03-07T00:55:02.572844736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-232,Uid:169d97534dfedf217d6594a5164987b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f322f356d1a9dd55e9bfe8d32c65f912fe1a12d12c42be904b9c4ccc3558ac3\""
Mar 7 00:55:02.576988 containerd[2132]: time="2026-03-07T00:55:02.576615929Z" level=info msg="CreateContainer within sandbox \"125875b94b8ba0d6371cfa7c3fd71294abe6248a2401b0e7c1925e977fa6cf99\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 7 00:55:02.578030 kubelet[3019]: E0307 00:55:02.577912 3019 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.21.232:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 7 00:55:02.585299 containerd[2132]: time="2026-03-07T00:55:02.585248939Z" level=info msg="CreateContainer within sandbox \"7f322f356d1a9dd55e9bfe8d32c65f912fe1a12d12c42be904b9c4ccc3558ac3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 7 00:55:02.598296 containerd[2132]: time="2026-03-07T00:55:02.598217485Z" level=info msg="CreateContainer within sandbox \"0ea7598bee3ac8c5cb6b183e9534c64840bbfcc24c0f81350c1b7eb0077467da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"24b88711913ed4234104103c7deed704d1e662acca7ef50598a9e25250e1d7c8\""
Mar 7 00:55:02.599462 containerd[2132]: time="2026-03-07T00:55:02.599406667Z" level=info msg="StartContainer for \"24b88711913ed4234104103c7deed704d1e662acca7ef50598a9e25250e1d7c8\""
Mar 7 00:55:02.603705 containerd[2132]: time="2026-03-07T00:55:02.603435496Z" level=info msg="CreateContainer within sandbox \"125875b94b8ba0d6371cfa7c3fd71294abe6248a2401b0e7c1925e977fa6cf99\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"58d4e5273c11e087d5e22f0b165f3832da01e461c08b2fb2946088b3dda81f6f\""
Mar 7 00:55:02.605110 containerd[2132]: time="2026-03-07T00:55:02.604917109Z" level=info msg="StartContainer for \"58d4e5273c11e087d5e22f0b165f3832da01e461c08b2fb2946088b3dda81f6f\""
Mar 7 00:55:02.613358 containerd[2132]: time="2026-03-07T00:55:02.613181787Z" level=info msg="CreateContainer within sandbox \"7f322f356d1a9dd55e9bfe8d32c65f912fe1a12d12c42be904b9c4ccc3558ac3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c40fa14cb24556da1a3650ec7d3e382bda38908b03a6161924f1ccf55c27710b\""
Mar 7 00:55:02.615034 containerd[2132]: time="2026-03-07T00:55:02.614063112Z" level=info msg="StartContainer for \"c40fa14cb24556da1a3650ec7d3e382bda38908b03a6161924f1ccf55c27710b\""
Mar 7 00:55:02.620376 kubelet[3019]: E0307 00:55:02.620283 3019 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-232?timeout=10s\": dial tcp 172.31.21.232:6443: connect: connection refused" interval="1.6s"
Mar 7 00:55:02.778417 kubelet[3019]: E0307 00:55:02.778353 3019 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.21.232:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.232:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 7 00:55:02.812421 containerd[2132]: time="2026-03-07T00:55:02.811398639Z" level=info msg="StartContainer for \"c40fa14cb24556da1a3650ec7d3e382bda38908b03a6161924f1ccf55c27710b\" returns successfully"
Mar 7 00:55:02.824355 containerd[2132]: time="2026-03-07T00:55:02.822697306Z" level=info msg="StartContainer for \"24b88711913ed4234104103c7deed704d1e662acca7ef50598a9e25250e1d7c8\" returns successfully"
Mar 7 00:55:02.831961 kubelet[3019]: I0307 00:55:02.830243 3019
kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-232" Mar 7 00:55:02.833786 kubelet[3019]: E0307 00:55:02.833707 3019 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.232:6443/api/v1/nodes\": dial tcp 172.31.21.232:6443: connect: connection refused" node="ip-172-31-21-232" Mar 7 00:55:02.885619 containerd[2132]: time="2026-03-07T00:55:02.885562944Z" level=info msg="StartContainer for \"58d4e5273c11e087d5e22f0b165f3832da01e461c08b2fb2946088b3dda81f6f\" returns successfully" Mar 7 00:55:03.293149 kubelet[3019]: E0307 00:55:03.291434 3019 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-232\" not found" node="ip-172-31-21-232" Mar 7 00:55:03.307524 kubelet[3019]: E0307 00:55:03.304903 3019 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-232\" not found" node="ip-172-31-21-232" Mar 7 00:55:03.312785 kubelet[3019]: E0307 00:55:03.312736 3019 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-232\" not found" node="ip-172-31-21-232" Mar 7 00:55:04.315958 kubelet[3019]: E0307 00:55:04.314140 3019 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-232\" not found" node="ip-172-31-21-232" Mar 7 00:55:04.315958 kubelet[3019]: E0307 00:55:04.314749 3019 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-232\" not found" node="ip-172-31-21-232" Mar 7 00:55:04.443099 kubelet[3019]: I0307 00:55:04.440483 3019 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-232" Mar 7 00:55:06.487315 kubelet[3019]: E0307 00:55:06.487241 3019 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" 
err="nodes \"ip-172-31-21-232\" not found" node="ip-172-31-21-232" Mar 7 00:55:06.512539 kubelet[3019]: I0307 00:55:06.512221 3019 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-232" Mar 7 00:55:06.512539 kubelet[3019]: E0307 00:55:06.512294 3019 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-21-232\": node \"ip-172-31-21-232\" not found" Mar 7 00:55:06.609827 kubelet[3019]: I0307 00:55:06.609766 3019 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-232" Mar 7 00:55:06.638807 kubelet[3019]: E0307 00:55:06.638726 3019 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-232\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-21-232" Mar 7 00:55:06.638807 kubelet[3019]: I0307 00:55:06.638791 3019 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-232" Mar 7 00:55:06.646973 kubelet[3019]: E0307 00:55:06.645145 3019 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-232\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-21-232" Mar 7 00:55:06.646973 kubelet[3019]: I0307 00:55:06.645196 3019 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-232" Mar 7 00:55:06.656219 kubelet[3019]: E0307 00:55:06.656172 3019 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-232\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-21-232" Mar 7 00:55:07.190528 kubelet[3019]: I0307 00:55:07.190116 3019 apiserver.go:52] "Watching apiserver" Mar 7 00:55:07.210097 kubelet[3019]: I0307 00:55:07.210021 3019 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 00:55:09.537452 update_engine[2108]: I20260307 00:55:09.537367 2108 update_attempter.cc:509] Updating boot flags... Mar 7 00:55:09.608064 kubelet[3019]: I0307 00:55:09.607154 3019 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-232" Mar 7 00:55:09.694038 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3324) Mar 7 00:55:10.200990 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3328) Mar 7 00:55:10.208136 systemd[1]: Reloading requested from client PID 3416 ('systemctl') (unit session-7.scope)... Mar 7 00:55:10.208168 systemd[1]: Reloading... Mar 7 00:55:10.548993 zram_generator::config[3536]: No configuration found. Mar 7 00:55:10.814500 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 7 00:55:11.015992 systemd[1]: Reloading finished in 807 ms. Mar 7 00:55:11.127133 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:55:11.164705 systemd[1]: kubelet.service: Deactivated successfully. Mar 7 00:55:11.167630 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:55:11.178447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 7 00:55:11.527698 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 7 00:55:11.547481 (kubelet)[3603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 7 00:55:11.650785 kubelet[3603]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:55:11.650785 kubelet[3603]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 7 00:55:11.650785 kubelet[3603]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 7 00:55:11.651437 kubelet[3603]: I0307 00:55:11.650870 3603 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 7 00:55:11.682102 kubelet[3603]: I0307 00:55:11.682038 3603 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 7 00:55:11.682102 kubelet[3603]: I0307 00:55:11.682089 3603 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 7 00:55:11.683005 kubelet[3603]: I0307 00:55:11.682497 3603 server.go:956] "Client rotation is on, will bootstrap in background" Mar 7 00:55:11.685424 kubelet[3603]: I0307 00:55:11.685021 3603 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 7 00:55:11.690628 kubelet[3603]: I0307 00:55:11.689603 3603 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 7 00:55:11.697286 kubelet[3603]: E0307 00:55:11.696959 3603 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 7 00:55:11.697286 kubelet[3603]: I0307 00:55:11.697009 3603 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Mar 7 00:55:11.703731 kubelet[3603]: I0307 00:55:11.703675 3603 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 7 00:55:11.705338 kubelet[3603]: I0307 00:55:11.704825 3603 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 7 00:55:11.705338 kubelet[3603]: I0307 00:55:11.704884 3603 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-232","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":1} Mar 7 00:55:11.705338 kubelet[3603]: I0307 00:55:11.705170 3603 topology_manager.go:138] "Creating topology manager with none policy" Mar 7 00:55:11.705338 kubelet[3603]: I0307 00:55:11.705190 3603 container_manager_linux.go:303] "Creating device plugin manager" Mar 7 00:55:11.705338 kubelet[3603]: I0307 00:55:11.705273 3603 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:55:11.706455 kubelet[3603]: I0307 00:55:11.706083 3603 kubelet.go:480] "Attempting to sync node with API server" Mar 7 00:55:11.707134 kubelet[3603]: I0307 00:55:11.707103 3603 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 7 00:55:11.707297 kubelet[3603]: I0307 00:55:11.707280 3603 kubelet.go:386] "Adding apiserver pod source" Mar 7 00:55:11.707580 kubelet[3603]: I0307 00:55:11.707392 3603 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 7 00:55:11.726759 kubelet[3603]: I0307 00:55:11.725394 3603 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 7 00:55:11.726759 kubelet[3603]: I0307 00:55:11.726435 3603 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 7 00:55:11.748409 kubelet[3603]: I0307 00:55:11.748372 3603 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 7 00:55:11.748699 kubelet[3603]: I0307 00:55:11.748681 3603 server.go:1289] "Started kubelet" Mar 7 00:55:11.755465 kubelet[3603]: I0307 00:55:11.755429 3603 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 7 00:55:11.762248 kubelet[3603]: I0307 00:55:11.761061 3603 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 7 00:55:11.764691 kubelet[3603]: I0307 00:55:11.764616 3603 server.go:317] "Adding debug handlers to kubelet server" Mar 7 00:55:11.774009 kubelet[3603]: I0307 00:55:11.772898 
3603 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 7 00:55:11.774703 kubelet[3603]: I0307 00:55:11.774652 3603 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 7 00:55:11.785007 kubelet[3603]: I0307 00:55:11.783824 3603 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 7 00:55:11.789726 kubelet[3603]: I0307 00:55:11.789675 3603 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 7 00:55:11.790897 kubelet[3603]: E0307 00:55:11.790836 3603 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-232\" not found" Mar 7 00:55:11.795784 kubelet[3603]: I0307 00:55:11.795580 3603 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 7 00:55:11.798022 kubelet[3603]: I0307 00:55:11.795848 3603 reconciler.go:26] "Reconciler: start to sync state" Mar 7 00:55:11.821043 kubelet[3603]: I0307 00:55:11.819042 3603 factory.go:223] Registration of the systemd container factory successfully Mar 7 00:55:11.825611 kubelet[3603]: I0307 00:55:11.824177 3603 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 7 00:55:11.842074 kubelet[3603]: I0307 00:55:11.841923 3603 factory.go:223] Registration of the containerd container factory successfully Mar 7 00:55:11.849534 kubelet[3603]: E0307 00:55:11.849490 3603 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 7 00:55:11.853661 kubelet[3603]: I0307 00:55:11.853605 3603 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Mar 7 00:55:11.871643 kubelet[3603]: I0307 00:55:11.871600 3603 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 7 00:55:11.872395 kubelet[3603]: I0307 00:55:11.871829 3603 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 7 00:55:11.872395 kubelet[3603]: I0307 00:55:11.871868 3603 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 7 00:55:11.872395 kubelet[3603]: I0307 00:55:11.871884 3603 kubelet.go:2436] "Starting kubelet main sync loop" Mar 7 00:55:11.872395 kubelet[3603]: E0307 00:55:11.871996 3603 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 7 00:55:11.972376 kubelet[3603]: E0307 00:55:11.972313 3603 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 7 00:55:12.004432 kubelet[3603]: I0307 00:55:12.004386 3603 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 7 00:55:12.004608 kubelet[3603]: I0307 00:55:12.004585 3603 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 7 00:55:12.004762 kubelet[3603]: I0307 00:55:12.004746 3603 state_mem.go:36] "Initialized new in-memory state store" Mar 7 00:55:12.005227 kubelet[3603]: I0307 00:55:12.005205 3603 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 7 00:55:12.005406 kubelet[3603]: I0307 00:55:12.005332 3603 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 7 00:55:12.005406 kubelet[3603]: I0307 00:55:12.005373 3603 policy_none.go:49] "None policy: Start" Mar 7 00:55:12.005736 kubelet[3603]: I0307 00:55:12.005570 3603 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 7 00:55:12.005736 kubelet[3603]: I0307 00:55:12.005601 3603 state_mem.go:35] "Initializing new in-memory state store" Mar 7 00:55:12.006140 kubelet[3603]: I0307 
00:55:12.006012 3603 state_mem.go:75] "Updated machine memory state" Mar 7 00:55:12.009769 kubelet[3603]: E0307 00:55:12.009729 3603 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 7 00:55:12.009769 kubelet[3603]: I0307 00:55:12.010087 3603 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 7 00:55:12.009769 kubelet[3603]: I0307 00:55:12.010116 3603 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 7 00:55:12.013237 kubelet[3603]: I0307 00:55:12.013197 3603 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 7 00:55:12.020307 kubelet[3603]: E0307 00:55:12.020114 3603 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 7 00:55:12.129101 kubelet[3603]: I0307 00:55:12.127116 3603 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-232" Mar 7 00:55:12.143480 kubelet[3603]: I0307 00:55:12.143030 3603 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-21-232" Mar 7 00:55:12.145148 kubelet[3603]: I0307 00:55:12.144435 3603 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-232" Mar 7 00:55:12.176111 kubelet[3603]: I0307 00:55:12.174117 3603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-232" Mar 7 00:55:12.176111 kubelet[3603]: I0307 00:55:12.174818 3603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-232" Mar 7 00:55:12.176111 kubelet[3603]: I0307 00:55:12.175377 3603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-232" Mar 7 00:55:12.187804 kubelet[3603]: E0307 00:55:12.187257 3603 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ip-172-31-21-232\" already exists" pod="kube-system/kube-scheduler-ip-172-31-21-232" Mar 7 00:55:12.196997 kubelet[3603]: I0307 00:55:12.196857 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/169d97534dfedf217d6594a5164987b3-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-232\" (UID: \"169d97534dfedf217d6594a5164987b3\") " pod="kube-system/kube-scheduler-ip-172-31-21-232" Mar 7 00:55:12.196997 kubelet[3603]: I0307 00:55:12.196913 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e2ec30414068819167aba0f6cd215250-ca-certs\") pod \"kube-apiserver-ip-172-31-21-232\" (UID: \"e2ec30414068819167aba0f6cd215250\") " pod="kube-system/kube-apiserver-ip-172-31-21-232" Mar 7 00:55:12.196997 kubelet[3603]: I0307 00:55:12.196976 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bdc4ca9cda49d6ea874260e18d34154f-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-232\" (UID: \"bdc4ca9cda49d6ea874260e18d34154f\") " pod="kube-system/kube-controller-manager-ip-172-31-21-232" Mar 7 00:55:12.197269 kubelet[3603]: I0307 00:55:12.197017 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bdc4ca9cda49d6ea874260e18d34154f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-232\" (UID: \"bdc4ca9cda49d6ea874260e18d34154f\") " pod="kube-system/kube-controller-manager-ip-172-31-21-232" Mar 7 00:55:12.197269 kubelet[3603]: I0307 00:55:12.197078 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bdc4ca9cda49d6ea874260e18d34154f-kubeconfig\") pod 
\"kube-controller-manager-ip-172-31-21-232\" (UID: \"bdc4ca9cda49d6ea874260e18d34154f\") " pod="kube-system/kube-controller-manager-ip-172-31-21-232" Mar 7 00:55:12.197269 kubelet[3603]: I0307 00:55:12.197113 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e2ec30414068819167aba0f6cd215250-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-232\" (UID: \"e2ec30414068819167aba0f6cd215250\") " pod="kube-system/kube-apiserver-ip-172-31-21-232" Mar 7 00:55:12.197439 kubelet[3603]: I0307 00:55:12.197153 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e2ec30414068819167aba0f6cd215250-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-232\" (UID: \"e2ec30414068819167aba0f6cd215250\") " pod="kube-system/kube-apiserver-ip-172-31-21-232" Mar 7 00:55:12.197439 kubelet[3603]: I0307 00:55:12.197403 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bdc4ca9cda49d6ea874260e18d34154f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-232\" (UID: \"bdc4ca9cda49d6ea874260e18d34154f\") " pod="kube-system/kube-controller-manager-ip-172-31-21-232" Mar 7 00:55:12.197548 kubelet[3603]: I0307 00:55:12.197446 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bdc4ca9cda49d6ea874260e18d34154f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-232\" (UID: \"bdc4ca9cda49d6ea874260e18d34154f\") " pod="kube-system/kube-controller-manager-ip-172-31-21-232" Mar 7 00:55:12.711373 kubelet[3603]: I0307 00:55:12.709739 3603 apiserver.go:52] "Watching apiserver" Mar 7 00:55:12.796348 kubelet[3603]: I0307 00:55:12.796283 
3603 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 7 00:55:12.913726 kubelet[3603]: I0307 00:55:12.912476 3603 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-232" Mar 7 00:55:12.927289 kubelet[3603]: E0307 00:55:12.927217 3603 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-232\" already exists" pod="kube-system/kube-scheduler-ip-172-31-21-232" Mar 7 00:55:12.968527 kubelet[3603]: I0307 00:55:12.967830 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-232" podStartSLOduration=3.96774519 podStartE2EDuration="3.96774519s" podCreationTimestamp="2026-03-07 00:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:55:12.966100476 +0000 UTC m=+1.408770496" watchObservedRunningTime="2026-03-07 00:55:12.96774519 +0000 UTC m=+1.410415210" Mar 7 00:55:13.009033 kubelet[3603]: I0307 00:55:13.008076 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-232" podStartSLOduration=1.00805064 podStartE2EDuration="1.00805064s" podCreationTimestamp="2026-03-07 00:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:55:12.985931182 +0000 UTC m=+1.428601178" watchObservedRunningTime="2026-03-07 00:55:13.00805064 +0000 UTC m=+1.450720648" Mar 7 00:55:14.469718 kubelet[3603]: I0307 00:55:14.469521 3603 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 7 00:55:14.472717 containerd[2132]: time="2026-03-07T00:55:14.472087023Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 7 00:55:14.474235 kubelet[3603]: I0307 00:55:14.473297 3603 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 7 00:55:15.495653 kubelet[3603]: I0307 00:55:15.495347 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-232" podStartSLOduration=3.495201444 podStartE2EDuration="3.495201444s" podCreationTimestamp="2026-03-07 00:55:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:55:13.010070542 +0000 UTC m=+1.452740574" watchObservedRunningTime="2026-03-07 00:55:15.495201444 +0000 UTC m=+3.937871488" Mar 7 00:55:15.523157 kubelet[3603]: I0307 00:55:15.523078 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4556ace8-1151-44be-a4fa-218b8987fe89-lib-modules\") pod \"kube-proxy-bgxs8\" (UID: \"4556ace8-1151-44be-a4fa-218b8987fe89\") " pod="kube-system/kube-proxy-bgxs8" Mar 7 00:55:15.523157 kubelet[3603]: I0307 00:55:15.523155 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlrgl\" (UniqueName: \"kubernetes.io/projected/4556ace8-1151-44be-a4fa-218b8987fe89-kube-api-access-tlrgl\") pod \"kube-proxy-bgxs8\" (UID: \"4556ace8-1151-44be-a4fa-218b8987fe89\") " pod="kube-system/kube-proxy-bgxs8" Mar 7 00:55:15.523392 kubelet[3603]: I0307 00:55:15.523204 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4556ace8-1151-44be-a4fa-218b8987fe89-kube-proxy\") pod \"kube-proxy-bgxs8\" (UID: \"4556ace8-1151-44be-a4fa-218b8987fe89\") " pod="kube-system/kube-proxy-bgxs8" Mar 7 00:55:15.523392 kubelet[3603]: I0307 00:55:15.523239 3603 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4556ace8-1151-44be-a4fa-218b8987fe89-xtables-lock\") pod \"kube-proxy-bgxs8\" (UID: \"4556ace8-1151-44be-a4fa-218b8987fe89\") " pod="kube-system/kube-proxy-bgxs8" Mar 7 00:55:15.724988 kubelet[3603]: I0307 00:55:15.724002 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2eec4f92-c7af-4304-9957-8515f51f1e00-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-hsqdm\" (UID: \"2eec4f92-c7af-4304-9957-8515f51f1e00\") " pod="tigera-operator/tigera-operator-6bf85f8dd-hsqdm" Mar 7 00:55:15.724988 kubelet[3603]: I0307 00:55:15.724073 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khl4q\" (UniqueName: \"kubernetes.io/projected/2eec4f92-c7af-4304-9957-8515f51f1e00-kube-api-access-khl4q\") pod \"tigera-operator-6bf85f8dd-hsqdm\" (UID: \"2eec4f92-c7af-4304-9957-8515f51f1e00\") " pod="tigera-operator/tigera-operator-6bf85f8dd-hsqdm" Mar 7 00:55:15.815282 containerd[2132]: time="2026-03-07T00:55:15.815011235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bgxs8,Uid:4556ace8-1151-44be-a4fa-218b8987fe89,Namespace:kube-system,Attempt:0,}" Mar 7 00:55:15.879248 containerd[2132]: time="2026-03-07T00:55:15.878910409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:55:15.879248 containerd[2132]: time="2026-03-07T00:55:15.879049438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:55:15.879248 containerd[2132]: time="2026-03-07T00:55:15.879075191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:15.881402 containerd[2132]: time="2026-03-07T00:55:15.879379291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:15.963724 containerd[2132]: time="2026-03-07T00:55:15.963661712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bgxs8,Uid:4556ace8-1151-44be-a4fa-218b8987fe89,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8e29417afae8734ba54713dca080ae83b370e7bfdfe2fa250d7a57ebd9e36ea\"" Mar 7 00:55:15.972711 containerd[2132]: time="2026-03-07T00:55:15.972530484Z" level=info msg="CreateContainer within sandbox \"b8e29417afae8734ba54713dca080ae83b370e7bfdfe2fa250d7a57ebd9e36ea\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 7 00:55:15.990773 containerd[2132]: time="2026-03-07T00:55:15.990705323Z" level=info msg="CreateContainer within sandbox \"b8e29417afae8734ba54713dca080ae83b370e7bfdfe2fa250d7a57ebd9e36ea\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b915fcf3307988de7aa5f53d17345774aa54ff457787bdadc6d8c55fc7b0b0fd\"" Mar 7 00:55:15.991801 containerd[2132]: time="2026-03-07T00:55:15.991712134Z" level=info msg="StartContainer for \"b915fcf3307988de7aa5f53d17345774aa54ff457787bdadc6d8c55fc7b0b0fd\"" Mar 7 00:55:16.028317 containerd[2132]: time="2026-03-07T00:55:16.028176794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-hsqdm,Uid:2eec4f92-c7af-4304-9957-8515f51f1e00,Namespace:tigera-operator,Attempt:0,}" Mar 7 00:55:16.086872 containerd[2132]: time="2026-03-07T00:55:16.085007075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:55:16.086872 containerd[2132]: time="2026-03-07T00:55:16.085114085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:55:16.086872 containerd[2132]: time="2026-03-07T00:55:16.085140114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:16.086872 containerd[2132]: time="2026-03-07T00:55:16.085308054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:16.104483 containerd[2132]: time="2026-03-07T00:55:16.104426984Z" level=info msg="StartContainer for \"b915fcf3307988de7aa5f53d17345774aa54ff457787bdadc6d8c55fc7b0b0fd\" returns successfully" Mar 7 00:55:16.208025 containerd[2132]: time="2026-03-07T00:55:16.206704634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-hsqdm,Uid:2eec4f92-c7af-4304-9957-8515f51f1e00,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3e28f4d7044bda53ff991290ac458dfdf8eea429860ff884e1325c6c9ddfb27f\"" Mar 7 00:55:16.213380 containerd[2132]: time="2026-03-07T00:55:16.212863424Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 7 00:55:17.515261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3828747322.mount: Deactivated successfully. 
Mar 7 00:55:18.255066 kubelet[3603]: I0307 00:55:18.254875 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bgxs8" podStartSLOduration=3.254832249 podStartE2EDuration="3.254832249s" podCreationTimestamp="2026-03-07 00:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:55:16.967006381 +0000 UTC m=+5.409676413" watchObservedRunningTime="2026-03-07 00:55:18.254832249 +0000 UTC m=+6.697502257" Mar 7 00:55:21.228336 containerd[2132]: time="2026-03-07T00:55:21.228248086Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:21.230359 containerd[2132]: time="2026-03-07T00:55:21.229985247Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 7 00:55:21.232116 containerd[2132]: time="2026-03-07T00:55:21.231526421Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:21.236970 containerd[2132]: time="2026-03-07T00:55:21.236883270Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:21.238631 containerd[2132]: time="2026-03-07T00:55:21.238583740Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 5.025348778s" Mar 7 00:55:21.238795 containerd[2132]: time="2026-03-07T00:55:21.238765198Z" level=info 
msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 7 00:55:21.246998 containerd[2132]: time="2026-03-07T00:55:21.246895758Z" level=info msg="CreateContainer within sandbox \"3e28f4d7044bda53ff991290ac458dfdf8eea429860ff884e1325c6c9ddfb27f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 7 00:55:21.268103 containerd[2132]: time="2026-03-07T00:55:21.268036691Z" level=info msg="CreateContainer within sandbox \"3e28f4d7044bda53ff991290ac458dfdf8eea429860ff884e1325c6c9ddfb27f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"27443c9cf4d44d66933d807315e566724ada3cac6bffaddb9024409722532062\"" Mar 7 00:55:21.269605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4139020943.mount: Deactivated successfully. Mar 7 00:55:21.273104 containerd[2132]: time="2026-03-07T00:55:21.269841769Z" level=info msg="StartContainer for \"27443c9cf4d44d66933d807315e566724ada3cac6bffaddb9024409722532062\"" Mar 7 00:55:21.370976 containerd[2132]: time="2026-03-07T00:55:21.370868382Z" level=info msg="StartContainer for \"27443c9cf4d44d66933d807315e566724ada3cac6bffaddb9024409722532062\" returns successfully" Mar 7 00:55:21.973563 kubelet[3603]: I0307 00:55:21.973372 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-hsqdm" podStartSLOduration=1.943371561 podStartE2EDuration="6.973347271s" podCreationTimestamp="2026-03-07 00:55:15 +0000 UTC" firstStartedPulling="2026-03-07 00:55:16.210183373 +0000 UTC m=+4.652853369" lastFinishedPulling="2026-03-07 00:55:21.240159083 +0000 UTC m=+9.682829079" observedRunningTime="2026-03-07 00:55:21.971041483 +0000 UTC m=+10.413711539" watchObservedRunningTime="2026-03-07 00:55:21.973347271 +0000 UTC m=+10.416017291" Mar 7 00:55:30.315308 sudo[2483]: pam_unix(sudo:session): session closed for user root Mar 7 00:55:30.398297 
sshd[2479]: pam_unix(sshd:session): session closed for user core Mar 7 00:55:30.408752 systemd[1]: sshd@6-172.31.21.232:22-20.161.92.111:46998.service: Deactivated successfully. Mar 7 00:55:30.424766 systemd[1]: session-7.scope: Deactivated successfully. Mar 7 00:55:30.427035 systemd-logind[2105]: Session 7 logged out. Waiting for processes to exit. Mar 7 00:55:30.435024 systemd-logind[2105]: Removed session 7. Mar 7 00:55:43.257986 kubelet[3603]: E0307 00:55:43.256220 3603 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ip-172-31-21-232\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-21-232' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"typha-certs\"" type="*v1.Secret" Mar 7 00:55:43.257986 kubelet[3603]: I0307 00:55:43.256480 3603 status_manager.go:895] "Failed to get status for pod" podUID="5c6762bf-83a9-4908-bdb6-b92075c6a475" pod="calico-system/calico-typha-7f5df69fcb-529xz" err="pods \"calico-typha-7f5df69fcb-529xz\" is forbidden: User \"system:node:ip-172-31-21-232\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-21-232' and this object" Mar 7 00:55:43.257986 kubelet[3603]: E0307 00:55:43.256591 3603 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-21-232\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-21-232' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Mar 7 00:55:43.257986 kubelet[3603]: E0307 00:55:43.256680 3603 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is 
forbidden: User \"system:node:ip-172-31-21-232\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-21-232' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"tigera-ca-bundle\"" type="*v1.ConfigMap" Mar 7 00:55:43.317043 kubelet[3603]: I0307 00:55:43.316809 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5c6762bf-83a9-4908-bdb6-b92075c6a475-typha-certs\") pod \"calico-typha-7f5df69fcb-529xz\" (UID: \"5c6762bf-83a9-4908-bdb6-b92075c6a475\") " pod="calico-system/calico-typha-7f5df69fcb-529xz" Mar 7 00:55:43.317043 kubelet[3603]: I0307 00:55:43.316901 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szbn2\" (UniqueName: \"kubernetes.io/projected/5c6762bf-83a9-4908-bdb6-b92075c6a475-kube-api-access-szbn2\") pod \"calico-typha-7f5df69fcb-529xz\" (UID: \"5c6762bf-83a9-4908-bdb6-b92075c6a475\") " pod="calico-system/calico-typha-7f5df69fcb-529xz" Mar 7 00:55:43.317043 kubelet[3603]: I0307 00:55:43.316970 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c6762bf-83a9-4908-bdb6-b92075c6a475-tigera-ca-bundle\") pod \"calico-typha-7f5df69fcb-529xz\" (UID: \"5c6762bf-83a9-4908-bdb6-b92075c6a475\") " pod="calico-system/calico-typha-7f5df69fcb-529xz" Mar 7 00:55:43.621908 kubelet[3603]: I0307 00:55:43.620216 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ec5842f4-f3b2-4cae-a751-292221337ab0-policysync\") pod \"calico-node-znk9c\" (UID: \"ec5842f4-f3b2-4cae-a751-292221337ab0\") " pod="calico-system/calico-node-znk9c" Mar 7 00:55:43.621908 kubelet[3603]: I0307 00:55:43.620298 3603 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcz94\" (UniqueName: \"kubernetes.io/projected/ec5842f4-f3b2-4cae-a751-292221337ab0-kube-api-access-qcz94\") pod \"calico-node-znk9c\" (UID: \"ec5842f4-f3b2-4cae-a751-292221337ab0\") " pod="calico-system/calico-node-znk9c" Mar 7 00:55:43.621908 kubelet[3603]: I0307 00:55:43.620344 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ec5842f4-f3b2-4cae-a751-292221337ab0-var-run-calico\") pod \"calico-node-znk9c\" (UID: \"ec5842f4-f3b2-4cae-a751-292221337ab0\") " pod="calico-system/calico-node-znk9c" Mar 7 00:55:43.621908 kubelet[3603]: I0307 00:55:43.620403 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec5842f4-f3b2-4cae-a751-292221337ab0-xtables-lock\") pod \"calico-node-znk9c\" (UID: \"ec5842f4-f3b2-4cae-a751-292221337ab0\") " pod="calico-system/calico-node-znk9c" Mar 7 00:55:43.621908 kubelet[3603]: I0307 00:55:43.620438 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/ec5842f4-f3b2-4cae-a751-292221337ab0-bpffs\") pod \"calico-node-znk9c\" (UID: \"ec5842f4-f3b2-4cae-a751-292221337ab0\") " pod="calico-system/calico-node-znk9c" Mar 7 00:55:43.622684 kubelet[3603]: I0307 00:55:43.620484 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ec5842f4-f3b2-4cae-a751-292221337ab0-node-certs\") pod \"calico-node-znk9c\" (UID: \"ec5842f4-f3b2-4cae-a751-292221337ab0\") " pod="calico-system/calico-node-znk9c" Mar 7 00:55:43.622684 kubelet[3603]: I0307 00:55:43.620552 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec5842f4-f3b2-4cae-a751-292221337ab0-tigera-ca-bundle\") pod \"calico-node-znk9c\" (UID: \"ec5842f4-f3b2-4cae-a751-292221337ab0\") " pod="calico-system/calico-node-znk9c" Mar 7 00:55:43.622684 kubelet[3603]: I0307 00:55:43.620599 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ec5842f4-f3b2-4cae-a751-292221337ab0-cni-bin-dir\") pod \"calico-node-znk9c\" (UID: \"ec5842f4-f3b2-4cae-a751-292221337ab0\") " pod="calico-system/calico-node-znk9c" Mar 7 00:55:43.622684 kubelet[3603]: I0307 00:55:43.620632 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/ec5842f4-f3b2-4cae-a751-292221337ab0-sys-fs\") pod \"calico-node-znk9c\" (UID: \"ec5842f4-f3b2-4cae-a751-292221337ab0\") " pod="calico-system/calico-node-znk9c" Mar 7 00:55:43.622684 kubelet[3603]: I0307 00:55:43.620667 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ec5842f4-f3b2-4cae-a751-292221337ab0-flexvol-driver-host\") pod \"calico-node-znk9c\" (UID: \"ec5842f4-f3b2-4cae-a751-292221337ab0\") " pod="calico-system/calico-node-znk9c" Mar 7 00:55:43.622982 kubelet[3603]: I0307 00:55:43.620729 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/ec5842f4-f3b2-4cae-a751-292221337ab0-nodeproc\") pod \"calico-node-znk9c\" (UID: \"ec5842f4-f3b2-4cae-a751-292221337ab0\") " pod="calico-system/calico-node-znk9c" Mar 7 00:55:43.622982 kubelet[3603]: I0307 00:55:43.620770 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/ec5842f4-f3b2-4cae-a751-292221337ab0-cni-log-dir\") pod \"calico-node-znk9c\" (UID: \"ec5842f4-f3b2-4cae-a751-292221337ab0\") " pod="calico-system/calico-node-znk9c" Mar 7 00:55:43.622982 kubelet[3603]: I0307 00:55:43.620811 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ec5842f4-f3b2-4cae-a751-292221337ab0-cni-net-dir\") pod \"calico-node-znk9c\" (UID: \"ec5842f4-f3b2-4cae-a751-292221337ab0\") " pod="calico-system/calico-node-znk9c" Mar 7 00:55:43.622982 kubelet[3603]: I0307 00:55:43.620908 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec5842f4-f3b2-4cae-a751-292221337ab0-lib-modules\") pod \"calico-node-znk9c\" (UID: \"ec5842f4-f3b2-4cae-a751-292221337ab0\") " pod="calico-system/calico-node-znk9c" Mar 7 00:55:43.622982 kubelet[3603]: I0307 00:55:43.621057 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ec5842f4-f3b2-4cae-a751-292221337ab0-var-lib-calico\") pod \"calico-node-znk9c\" (UID: \"ec5842f4-f3b2-4cae-a751-292221337ab0\") " pod="calico-system/calico-node-znk9c" Mar 7 00:55:43.747826 kubelet[3603]: E0307 00:55:43.746140 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.747826 kubelet[3603]: W0307 00:55:43.746188 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.747826 kubelet[3603]: E0307 00:55:43.746228 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.752219 kubelet[3603]: E0307 00:55:43.752123 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdrbz" podUID="5241805d-3644-4de6-80b4-779148c6e9c9" Mar 7 00:55:43.797357 kubelet[3603]: E0307 00:55:43.797299 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.797357 kubelet[3603]: W0307 00:55:43.797344 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.797604 kubelet[3603]: E0307 00:55:43.797380 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.799977 kubelet[3603]: E0307 00:55:43.798401 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.799977 kubelet[3603]: W0307 00:55:43.798438 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.799977 kubelet[3603]: E0307 00:55:43.798539 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.799977 kubelet[3603]: E0307 00:55:43.799402 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.799977 kubelet[3603]: W0307 00:55:43.799541 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.799977 kubelet[3603]: E0307 00:55:43.799570 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.805075 kubelet[3603]: E0307 00:55:43.800442 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.805075 kubelet[3603]: W0307 00:55:43.800488 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.805075 kubelet[3603]: E0307 00:55:43.800524 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.805075 kubelet[3603]: E0307 00:55:43.801911 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.805075 kubelet[3603]: W0307 00:55:43.802012 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.805075 kubelet[3603]: E0307 00:55:43.802050 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.805075 kubelet[3603]: E0307 00:55:43.803308 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.805075 kubelet[3603]: W0307 00:55:43.803341 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.805075 kubelet[3603]: E0307 00:55:43.803401 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.805075 kubelet[3603]: E0307 00:55:43.804753 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.805705 kubelet[3603]: W0307 00:55:43.804786 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.805705 kubelet[3603]: E0307 00:55:43.804818 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.810994 kubelet[3603]: E0307 00:55:43.806153 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.810994 kubelet[3603]: W0307 00:55:43.806199 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.810994 kubelet[3603]: E0307 00:55:43.806236 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.810994 kubelet[3603]: E0307 00:55:43.806716 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.810994 kubelet[3603]: W0307 00:55:43.806748 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.810994 kubelet[3603]: E0307 00:55:43.806803 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.810994 kubelet[3603]: E0307 00:55:43.807533 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.810994 kubelet[3603]: W0307 00:55:43.807564 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.810994 kubelet[3603]: E0307 00:55:43.807597 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.810994 kubelet[3603]: E0307 00:55:43.808235 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.811619 kubelet[3603]: W0307 00:55:43.808263 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.811619 kubelet[3603]: E0307 00:55:43.808294 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.811619 kubelet[3603]: E0307 00:55:43.808762 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.811619 kubelet[3603]: W0307 00:55:43.808786 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.811619 kubelet[3603]: E0307 00:55:43.808815 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.811619 kubelet[3603]: E0307 00:55:43.809348 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.811619 kubelet[3603]: W0307 00:55:43.809376 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.811619 kubelet[3603]: E0307 00:55:43.809408 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.811619 kubelet[3603]: E0307 00:55:43.809844 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.811619 kubelet[3603]: W0307 00:55:43.809900 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.816916 kubelet[3603]: E0307 00:55:43.809932 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.816916 kubelet[3603]: E0307 00:55:43.810418 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.816916 kubelet[3603]: W0307 00:55:43.810441 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.816916 kubelet[3603]: E0307 00:55:43.810466 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.816916 kubelet[3603]: E0307 00:55:43.810911 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.816916 kubelet[3603]: W0307 00:55:43.810964 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.816916 kubelet[3603]: E0307 00:55:43.811028 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.816916 kubelet[3603]: E0307 00:55:43.811592 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.816916 kubelet[3603]: W0307 00:55:43.811653 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.816916 kubelet[3603]: E0307 00:55:43.811786 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.817522 kubelet[3603]: E0307 00:55:43.812368 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.817522 kubelet[3603]: W0307 00:55:43.812395 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.817522 kubelet[3603]: E0307 00:55:43.812426 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.817522 kubelet[3603]: E0307 00:55:43.813057 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.817522 kubelet[3603]: W0307 00:55:43.813103 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.817522 kubelet[3603]: E0307 00:55:43.813137 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.817522 kubelet[3603]: E0307 00:55:43.813816 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.817522 kubelet[3603]: W0307 00:55:43.813853 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.817522 kubelet[3603]: E0307 00:55:43.813911 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.825977 kubelet[3603]: E0307 00:55:43.825872 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.825977 kubelet[3603]: W0307 00:55:43.825909 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.826419 kubelet[3603]: E0307 00:55:43.826229 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.826419 kubelet[3603]: I0307 00:55:43.826335 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkd6d\" (UniqueName: \"kubernetes.io/projected/5241805d-3644-4de6-80b4-779148c6e9c9-kube-api-access-rkd6d\") pod \"csi-node-driver-tdrbz\" (UID: \"5241805d-3644-4de6-80b4-779148c6e9c9\") " pod="calico-system/csi-node-driver-tdrbz" Mar 7 00:55:43.828758 kubelet[3603]: E0307 00:55:43.828474 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.828758 kubelet[3603]: W0307 00:55:43.828509 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.828758 kubelet[3603]: E0307 00:55:43.828544 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.831061 kubelet[3603]: E0307 00:55:43.830171 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.831061 kubelet[3603]: W0307 00:55:43.830209 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.831061 kubelet[3603]: E0307 00:55:43.830278 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.834288 kubelet[3603]: E0307 00:55:43.833361 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.834288 kubelet[3603]: W0307 00:55:43.833399 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.834288 kubelet[3603]: E0307 00:55:43.833432 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.834288 kubelet[3603]: I0307 00:55:43.833486 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5241805d-3644-4de6-80b4-779148c6e9c9-kubelet-dir\") pod \"csi-node-driver-tdrbz\" (UID: \"5241805d-3644-4de6-80b4-779148c6e9c9\") " pod="calico-system/csi-node-driver-tdrbz" Mar 7 00:55:43.837204 kubelet[3603]: E0307 00:55:43.836556 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.837204 kubelet[3603]: W0307 00:55:43.836604 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.837204 kubelet[3603]: E0307 00:55:43.836658 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.837204 kubelet[3603]: I0307 00:55:43.836731 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5241805d-3644-4de6-80b4-779148c6e9c9-socket-dir\") pod \"csi-node-driver-tdrbz\" (UID: \"5241805d-3644-4de6-80b4-779148c6e9c9\") " pod="calico-system/csi-node-driver-tdrbz" Mar 7 00:55:43.840298 kubelet[3603]: E0307 00:55:43.840235 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.840480 kubelet[3603]: W0307 00:55:43.840317 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.840480 kubelet[3603]: E0307 00:55:43.840355 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.844552 kubelet[3603]: E0307 00:55:43.844491 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.845787 kubelet[3603]: W0307 00:55:43.844557 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.845787 kubelet[3603]: E0307 00:55:43.844598 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.846060 kubelet[3603]: E0307 00:55:43.845346 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.847990 kubelet[3603]: W0307 00:55:43.846028 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.847990 kubelet[3603]: E0307 00:55:43.846108 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.847990 kubelet[3603]: I0307 00:55:43.846729 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5241805d-3644-4de6-80b4-779148c6e9c9-varrun\") pod \"csi-node-driver-tdrbz\" (UID: \"5241805d-3644-4de6-80b4-779148c6e9c9\") " pod="calico-system/csi-node-driver-tdrbz" Mar 7 00:55:43.849338 kubelet[3603]: E0307 00:55:43.849264 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.849338 kubelet[3603]: W0307 00:55:43.849321 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.850192 kubelet[3603]: E0307 00:55:43.849362 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.850192 kubelet[3603]: E0307 00:55:43.850088 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.850192 kubelet[3603]: W0307 00:55:43.850122 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.850192 kubelet[3603]: E0307 00:55:43.850180 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.851566 kubelet[3603]: E0307 00:55:43.850827 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.851566 kubelet[3603]: W0307 00:55:43.850858 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.851566 kubelet[3603]: E0307 00:55:43.850915 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.851566 kubelet[3603]: I0307 00:55:43.851049 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5241805d-3644-4de6-80b4-779148c6e9c9-registration-dir\") pod \"csi-node-driver-tdrbz\" (UID: \"5241805d-3644-4de6-80b4-779148c6e9c9\") " pod="calico-system/csi-node-driver-tdrbz" Mar 7 00:55:43.851566 kubelet[3603]: E0307 00:55:43.851476 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.851566 kubelet[3603]: W0307 00:55:43.851531 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.851566 kubelet[3603]: E0307 00:55:43.851563 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.854605 kubelet[3603]: E0307 00:55:43.854551 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.854605 kubelet[3603]: W0307 00:55:43.854592 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.856201 kubelet[3603]: E0307 00:55:43.854652 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.856201 kubelet[3603]: E0307 00:55:43.855197 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.856201 kubelet[3603]: W0307 00:55:43.855224 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.856201 kubelet[3603]: E0307 00:55:43.855289 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.856201 kubelet[3603]: E0307 00:55:43.855712 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.856201 kubelet[3603]: W0307 00:55:43.855735 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.856201 kubelet[3603]: E0307 00:55:43.855762 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.952327 kubelet[3603]: E0307 00:55:43.952158 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.952327 kubelet[3603]: W0307 00:55:43.952205 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.952327 kubelet[3603]: E0307 00:55:43.952276 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.954155 kubelet[3603]: E0307 00:55:43.952961 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.954155 kubelet[3603]: W0307 00:55:43.953006 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.954155 kubelet[3603]: E0307 00:55:43.953042 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.956234 kubelet[3603]: E0307 00:55:43.956161 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.956450 kubelet[3603]: W0307 00:55:43.956217 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.956450 kubelet[3603]: E0307 00:55:43.956285 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.958838 kubelet[3603]: E0307 00:55:43.958765 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.958838 kubelet[3603]: W0307 00:55:43.958816 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.959138 kubelet[3603]: E0307 00:55:43.958856 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.962179 kubelet[3603]: E0307 00:55:43.962083 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.962179 kubelet[3603]: W0307 00:55:43.962158 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.962179 kubelet[3603]: E0307 00:55:43.962196 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.964362 kubelet[3603]: E0307 00:55:43.962929 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.964362 kubelet[3603]: W0307 00:55:43.963052 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.964362 kubelet[3603]: E0307 00:55:43.963147 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.965617 kubelet[3603]: E0307 00:55:43.964827 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.965617 kubelet[3603]: W0307 00:55:43.964863 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.965617 kubelet[3603]: E0307 00:55:43.964903 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.968004 kubelet[3603]: E0307 00:55:43.966956 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.968004 kubelet[3603]: W0307 00:55:43.967004 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.968004 kubelet[3603]: E0307 00:55:43.967041 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.973385 kubelet[3603]: E0307 00:55:43.968842 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.973385 kubelet[3603]: W0307 00:55:43.968897 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.973385 kubelet[3603]: E0307 00:55:43.968956 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.973385 kubelet[3603]: E0307 00:55:43.970188 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.973385 kubelet[3603]: W0307 00:55:43.970219 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.973385 kubelet[3603]: E0307 00:55:43.970287 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.973385 kubelet[3603]: E0307 00:55:43.971841 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.973385 kubelet[3603]: W0307 00:55:43.971907 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.973385 kubelet[3603]: E0307 00:55:43.971978 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.973385 kubelet[3603]: E0307 00:55:43.972993 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.975826 kubelet[3603]: W0307 00:55:43.973030 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.975826 kubelet[3603]: E0307 00:55:43.973067 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.975826 kubelet[3603]: E0307 00:55:43.974223 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.975826 kubelet[3603]: W0307 00:55:43.974257 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.975826 kubelet[3603]: E0307 00:55:43.974291 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.975826 kubelet[3603]: E0307 00:55:43.975152 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.975826 kubelet[3603]: W0307 00:55:43.975183 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.975826 kubelet[3603]: E0307 00:55:43.975244 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.977388 kubelet[3603]: E0307 00:55:43.975869 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.977388 kubelet[3603]: W0307 00:55:43.975924 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.977388 kubelet[3603]: E0307 00:55:43.975995 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.979517 kubelet[3603]: E0307 00:55:43.978306 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.979517 kubelet[3603]: W0307 00:55:43.978364 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.979517 kubelet[3603]: E0307 00:55:43.978402 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.979517 kubelet[3603]: E0307 00:55:43.979088 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.979517 kubelet[3603]: W0307 00:55:43.979117 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.979517 kubelet[3603]: E0307 00:55:43.979150 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.981009 kubelet[3603]: E0307 00:55:43.980070 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.981009 kubelet[3603]: W0307 00:55:43.980102 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.981009 kubelet[3603]: E0307 00:55:43.980134 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.981009 kubelet[3603]: E0307 00:55:43.980565 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.981009 kubelet[3603]: W0307 00:55:43.980587 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.981009 kubelet[3603]: E0307 00:55:43.980611 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.982088 kubelet[3603]: E0307 00:55:43.981822 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.982088 kubelet[3603]: W0307 00:55:43.981853 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.982088 kubelet[3603]: E0307 00:55:43.981884 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.985606 kubelet[3603]: E0307 00:55:43.983557 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.985606 kubelet[3603]: W0307 00:55:43.983610 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.985606 kubelet[3603]: E0307 00:55:43.983647 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.985606 kubelet[3603]: E0307 00:55:43.984807 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.985606 kubelet[3603]: W0307 00:55:43.984837 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.985606 kubelet[3603]: E0307 00:55:43.984872 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.987177 kubelet[3603]: E0307 00:55:43.986085 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.987177 kubelet[3603]: W0307 00:55:43.986117 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.987177 kubelet[3603]: E0307 00:55:43.986149 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:43.989969 kubelet[3603]: E0307 00:55:43.988350 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.989969 kubelet[3603]: W0307 00:55:43.988399 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.989969 kubelet[3603]: E0307 00:55:43.988434 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:43.990202 kubelet[3603]: E0307 00:55:43.990164 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:43.990202 kubelet[3603]: W0307 00:55:43.990193 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:43.990340 kubelet[3603]: E0307 00:55:43.990227 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:44.258226 kubelet[3603]: E0307 00:55:44.258177 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:44.259710 kubelet[3603]: W0307 00:55:44.258823 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:44.259710 kubelet[3603]: E0307 00:55:44.258889 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:44.269548 kubelet[3603]: E0307 00:55:44.269518 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:44.270003 kubelet[3603]: W0307 00:55:44.269922 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:44.270092 kubelet[3603]: E0307 00:55:44.270010 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:44.273696 kubelet[3603]: E0307 00:55:44.273643 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:44.273696 kubelet[3603]: W0307 00:55:44.273683 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:44.273886 kubelet[3603]: E0307 00:55:44.273719 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:44.289820 kubelet[3603]: E0307 00:55:44.288444 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:44.289820 kubelet[3603]: W0307 00:55:44.288482 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:44.289820 kubelet[3603]: E0307 00:55:44.288515 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:44.290789 kubelet[3603]: E0307 00:55:44.290750 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:44.290789 kubelet[3603]: W0307 00:55:44.290787 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:44.290961 kubelet[3603]: E0307 00:55:44.290820 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:44.420102 kubelet[3603]: E0307 00:55:44.419243 3603 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Mar 7 00:55:44.420102 kubelet[3603]: E0307 00:55:44.419392 3603 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c6762bf-83a9-4908-bdb6-b92075c6a475-typha-certs podName:5c6762bf-83a9-4908-bdb6-b92075c6a475 nodeName:}" failed. No retries permitted until 2026-03-07 00:55:44.91935836 +0000 UTC m=+33.362028380 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/5c6762bf-83a9-4908-bdb6-b92075c6a475-typha-certs") pod "calico-typha-7f5df69fcb-529xz" (UID: "5c6762bf-83a9-4908-bdb6-b92075c6a475") : failed to sync secret cache: timed out waiting for the condition Mar 7 00:55:44.466719 containerd[2132]: time="2026-03-07T00:55:44.466469568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-znk9c,Uid:ec5842f4-f3b2-4cae-a751-292221337ab0,Namespace:calico-system,Attempt:0,}" Mar 7 00:55:44.476792 kubelet[3603]: E0307 00:55:44.476454 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:44.476792 kubelet[3603]: W0307 00:55:44.476509 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:44.476792 kubelet[3603]: E0307 00:55:44.476548 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:44.511673 containerd[2132]: time="2026-03-07T00:55:44.511348513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:55:44.511673 containerd[2132]: time="2026-03-07T00:55:44.511573001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:55:44.512082 containerd[2132]: time="2026-03-07T00:55:44.511695246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:44.512169 containerd[2132]: time="2026-03-07T00:55:44.512100304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:44.578563 kubelet[3603]: E0307 00:55:44.578174 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:44.578563 kubelet[3603]: W0307 00:55:44.578213 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:44.578563 kubelet[3603]: E0307 00:55:44.578250 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:44.597040 containerd[2132]: time="2026-03-07T00:55:44.596905118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-znk9c,Uid:ec5842f4-f3b2-4cae-a751-292221337ab0,Namespace:calico-system,Attempt:0,} returns sandbox id \"ccb94da89dada0fc8f683ba725d3886c6d4c52b09e382da9b8c7807d4963afae\"" Mar 7 00:55:44.602438 containerd[2132]: time="2026-03-07T00:55:44.602335912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 7 00:55:44.680435 kubelet[3603]: E0307 00:55:44.680228 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:44.680435 kubelet[3603]: W0307 00:55:44.680269 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:44.680435 kubelet[3603]: E0307 00:55:44.680306 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:44.783705 kubelet[3603]: E0307 00:55:44.782630 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:44.783705 kubelet[3603]: W0307 00:55:44.782674 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:44.783705 kubelet[3603]: E0307 00:55:44.782712 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:44.884557 kubelet[3603]: E0307 00:55:44.884351 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:44.884557 kubelet[3603]: W0307 00:55:44.884388 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:44.884557 kubelet[3603]: E0307 00:55:44.884445 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:44.985594 kubelet[3603]: E0307 00:55:44.985539 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:44.985594 kubelet[3603]: W0307 00:55:44.985587 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:44.985852 kubelet[3603]: E0307 00:55:44.985625 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:44.986380 kubelet[3603]: E0307 00:55:44.986341 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:44.986380 kubelet[3603]: W0307 00:55:44.986380 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:44.986591 kubelet[3603]: E0307 00:55:44.986413 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:44.986908 kubelet[3603]: E0307 00:55:44.986870 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:44.986908 kubelet[3603]: W0307 00:55:44.986907 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:44.987099 kubelet[3603]: E0307 00:55:44.986961 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:44.987541 kubelet[3603]: E0307 00:55:44.987503 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:44.987640 kubelet[3603]: W0307 00:55:44.987544 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:44.987640 kubelet[3603]: E0307 00:55:44.987579 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 7 00:55:44.988114 kubelet[3603]: E0307 00:55:44.988074 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:44.988114 kubelet[3603]: W0307 00:55:44.988111 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:44.988263 kubelet[3603]: E0307 00:55:44.988143 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:44.997382 kubelet[3603]: E0307 00:55:44.997327 3603 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 7 00:55:44.997382 kubelet[3603]: W0307 00:55:44.997370 3603 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 7 00:55:44.997627 kubelet[3603]: E0307 00:55:44.997408 3603 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 7 00:55:45.053799 containerd[2132]: time="2026-03-07T00:55:45.053154564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f5df69fcb-529xz,Uid:5c6762bf-83a9-4908-bdb6-b92075c6a475,Namespace:calico-system,Attempt:0,}" Mar 7 00:55:45.097257 containerd[2132]: time="2026-03-07T00:55:45.097061551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:55:45.097568 containerd[2132]: time="2026-03-07T00:55:45.097204723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:55:45.097568 containerd[2132]: time="2026-03-07T00:55:45.097530097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:45.098138 containerd[2132]: time="2026-03-07T00:55:45.097991644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:55:45.288055 containerd[2132]: time="2026-03-07T00:55:45.287729355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f5df69fcb-529xz,Uid:5c6762bf-83a9-4908-bdb6-b92075c6a475,Namespace:calico-system,Attempt:0,} returns sandbox id \"d2b91821011d378231e2de7951029ce9629888eafad0b519ce237b97845bef06\"" Mar 7 00:55:45.741026 systemd[1]: run-containerd-runc-k8s.io-d2b91821011d378231e2de7951029ce9629888eafad0b519ce237b97845bef06-runc.095JAT.mount: Deactivated successfully. Mar 7 00:55:45.873468 kubelet[3603]: E0307 00:55:45.872609 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdrbz" podUID="5241805d-3644-4de6-80b4-779148c6e9c9" Mar 7 00:55:46.330616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount723362539.mount: Deactivated successfully. 
Mar 7 00:55:46.461036 containerd[2132]: time="2026-03-07T00:55:46.460775605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:46.463171 containerd[2132]: time="2026-03-07T00:55:46.462916863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=5855345" Mar 7 00:55:46.466991 containerd[2132]: time="2026-03-07T00:55:46.465490625Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:46.471212 containerd[2132]: time="2026-03-07T00:55:46.471152235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:46.473149 containerd[2132]: time="2026-03-07T00:55:46.473077637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.870640706s" Mar 7 00:55:46.473370 containerd[2132]: time="2026-03-07T00:55:46.473321803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 7 00:55:46.477837 containerd[2132]: time="2026-03-07T00:55:46.476765052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 7 00:55:46.484770 containerd[2132]: time="2026-03-07T00:55:46.484701498Z" level=info msg="CreateContainer within sandbox 
\"ccb94da89dada0fc8f683ba725d3886c6d4c52b09e382da9b8c7807d4963afae\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 7 00:55:46.516762 containerd[2132]: time="2026-03-07T00:55:46.516414999Z" level=info msg="CreateContainer within sandbox \"ccb94da89dada0fc8f683ba725d3886c6d4c52b09e382da9b8c7807d4963afae\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5d4aba91963f247c0320b0435b55d27f2aab15a8384da9113a08879ed795cf28\"" Mar 7 00:55:46.519702 containerd[2132]: time="2026-03-07T00:55:46.519267204Z" level=info msg="StartContainer for \"5d4aba91963f247c0320b0435b55d27f2aab15a8384da9113a08879ed795cf28\"" Mar 7 00:55:46.647733 containerd[2132]: time="2026-03-07T00:55:46.647313140Z" level=info msg="StartContainer for \"5d4aba91963f247c0320b0435b55d27f2aab15a8384da9113a08879ed795cf28\" returns successfully" Mar 7 00:55:47.021163 containerd[2132]: time="2026-03-07T00:55:47.020926392Z" level=info msg="shim disconnected" id=5d4aba91963f247c0320b0435b55d27f2aab15a8384da9113a08879ed795cf28 namespace=k8s.io Mar 7 00:55:47.021163 containerd[2132]: time="2026-03-07T00:55:47.021088257Z" level=warning msg="cleaning up after shim disconnected" id=5d4aba91963f247c0320b0435b55d27f2aab15a8384da9113a08879ed795cf28 namespace=k8s.io Mar 7 00:55:47.021163 containerd[2132]: time="2026-03-07T00:55:47.021111369Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:55:47.873317 kubelet[3603]: E0307 00:55:47.873257 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdrbz" podUID="5241805d-3644-4de6-80b4-779148c6e9c9" Mar 7 00:55:49.002360 containerd[2132]: time="2026-03-07T00:55:49.002227181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Mar 7 00:55:49.004194 containerd[2132]: time="2026-03-07T00:55:49.003984655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=32467511" Mar 7 00:55:49.005897 containerd[2132]: time="2026-03-07T00:55:49.005170800Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:49.009710 containerd[2132]: time="2026-03-07T00:55:49.009626215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:49.011685 containerd[2132]: time="2026-03-07T00:55:49.011485200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.534627882s" Mar 7 00:55:49.011685 containerd[2132]: time="2026-03-07T00:55:49.011547859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 7 00:55:49.017453 containerd[2132]: time="2026-03-07T00:55:49.017375608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 7 00:55:49.046681 containerd[2132]: time="2026-03-07T00:55:49.046322638Z" level=info msg="CreateContainer within sandbox \"d2b91821011d378231e2de7951029ce9629888eafad0b519ce237b97845bef06\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 7 00:55:49.072048 containerd[2132]: time="2026-03-07T00:55:49.071912971Z" level=info msg="CreateContainer within sandbox 
\"d2b91821011d378231e2de7951029ce9629888eafad0b519ce237b97845bef06\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"18ab41336fa380710cebd9b61fb1f9375891e49f494946dfb784bbbdb50bddc9\"" Mar 7 00:55:49.074758 containerd[2132]: time="2026-03-07T00:55:49.074691616Z" level=info msg="StartContainer for \"18ab41336fa380710cebd9b61fb1f9375891e49f494946dfb784bbbdb50bddc9\"" Mar 7 00:55:49.202264 containerd[2132]: time="2026-03-07T00:55:49.200914849Z" level=info msg="StartContainer for \"18ab41336fa380710cebd9b61fb1f9375891e49f494946dfb784bbbdb50bddc9\" returns successfully" Mar 7 00:55:49.873980 kubelet[3603]: E0307 00:55:49.873153 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdrbz" podUID="5241805d-3644-4de6-80b4-779148c6e9c9" Mar 7 00:55:50.123581 kubelet[3603]: I0307 00:55:50.123305 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f5df69fcb-529xz" podStartSLOduration=3.405268024 podStartE2EDuration="7.123281856s" podCreationTimestamp="2026-03-07 00:55:43 +0000 UTC" firstStartedPulling="2026-03-07 00:55:45.295710775 +0000 UTC m=+33.738380783" lastFinishedPulling="2026-03-07 00:55:49.013724619 +0000 UTC m=+37.456394615" observedRunningTime="2026-03-07 00:55:50.098140307 +0000 UTC m=+38.540810328" watchObservedRunningTime="2026-03-07 00:55:50.123281856 +0000 UTC m=+38.565951852" Mar 7 00:55:51.877418 kubelet[3603]: E0307 00:55:51.877336 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdrbz" podUID="5241805d-3644-4de6-80b4-779148c6e9c9" Mar 7 
00:55:53.872876 kubelet[3603]: E0307 00:55:53.872798 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdrbz" podUID="5241805d-3644-4de6-80b4-779148c6e9c9" Mar 7 00:55:55.875171 kubelet[3603]: E0307 00:55:55.875039 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdrbz" podUID="5241805d-3644-4de6-80b4-779148c6e9c9" Mar 7 00:55:57.872461 kubelet[3603]: E0307 00:55:57.872342 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdrbz" podUID="5241805d-3644-4de6-80b4-779148c6e9c9" Mar 7 00:55:58.743401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2193856891.mount: Deactivated successfully. 
Mar 7 00:55:58.810293 containerd[2132]: time="2026-03-07T00:55:58.810208209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:58.813130 containerd[2132]: time="2026-03-07T00:55:58.812995570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 7 00:55:58.816989 containerd[2132]: time="2026-03-07T00:55:58.815650289Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:58.821909 containerd[2132]: time="2026-03-07T00:55:58.821836789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:55:58.823884 containerd[2132]: time="2026-03-07T00:55:58.823784006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 9.806330804s" Mar 7 00:55:58.823884 containerd[2132]: time="2026-03-07T00:55:58.823866079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 7 00:55:58.835436 containerd[2132]: time="2026-03-07T00:55:58.835363361Z" level=info msg="CreateContainer within sandbox \"ccb94da89dada0fc8f683ba725d3886c6d4c52b09e382da9b8c7807d4963afae\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 7 00:55:58.870296 containerd[2132]: time="2026-03-07T00:55:58.870221125Z" level=info msg="CreateContainer 
within sandbox \"ccb94da89dada0fc8f683ba725d3886c6d4c52b09e382da9b8c7807d4963afae\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"c2b0df13bb205e02be8eea57048f6521f90a2fa1b1fd1a6f45270238828a5a7c\"" Mar 7 00:55:58.873032 containerd[2132]: time="2026-03-07T00:55:58.871303670Z" level=info msg="StartContainer for \"c2b0df13bb205e02be8eea57048f6521f90a2fa1b1fd1a6f45270238828a5a7c\"" Mar 7 00:55:59.013230 containerd[2132]: time="2026-03-07T00:55:59.013072176Z" level=info msg="StartContainer for \"c2b0df13bb205e02be8eea57048f6521f90a2fa1b1fd1a6f45270238828a5a7c\" returns successfully" Mar 7 00:55:59.517763 containerd[2132]: time="2026-03-07T00:55:59.517652952Z" level=info msg="shim disconnected" id=c2b0df13bb205e02be8eea57048f6521f90a2fa1b1fd1a6f45270238828a5a7c namespace=k8s.io Mar 7 00:55:59.517763 containerd[2132]: time="2026-03-07T00:55:59.517757272Z" level=warning msg="cleaning up after shim disconnected" id=c2b0df13bb205e02be8eea57048f6521f90a2fa1b1fd1a6f45270238828a5a7c namespace=k8s.io Mar 7 00:55:59.518237 containerd[2132]: time="2026-03-07T00:55:59.517781884Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:55:59.742997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2b0df13bb205e02be8eea57048f6521f90a2fa1b1fd1a6f45270238828a5a7c-rootfs.mount: Deactivated successfully. 
Mar 7 00:55:59.876806 kubelet[3603]: E0307 00:55:59.876508 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdrbz" podUID="5241805d-3644-4de6-80b4-779148c6e9c9" Mar 7 00:56:00.115048 containerd[2132]: time="2026-03-07T00:56:00.114773923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 7 00:56:01.876838 kubelet[3603]: E0307 00:56:01.876301 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdrbz" podUID="5241805d-3644-4de6-80b4-779148c6e9c9" Mar 7 00:56:03.873184 kubelet[3603]: E0307 00:56:03.872452 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdrbz" podUID="5241805d-3644-4de6-80b4-779148c6e9c9" Mar 7 00:56:04.299837 containerd[2132]: time="2026-03-07T00:56:04.299586177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:04.301318 containerd[2132]: time="2026-03-07T00:56:04.301220673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 7 00:56:04.302239 containerd[2132]: time="2026-03-07T00:56:04.301761039Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:04.306292 containerd[2132]: 
time="2026-03-07T00:56:04.306241931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:04.308271 containerd[2132]: time="2026-03-07T00:56:04.308055617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 4.193217366s" Mar 7 00:56:04.308271 containerd[2132]: time="2026-03-07T00:56:04.308115935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 7 00:56:04.316385 containerd[2132]: time="2026-03-07T00:56:04.316316298Z" level=info msg="CreateContainer within sandbox \"ccb94da89dada0fc8f683ba725d3886c6d4c52b09e382da9b8c7807d4963afae\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 7 00:56:04.342461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2376215660.mount: Deactivated successfully. 
Mar 7 00:56:04.354049 containerd[2132]: time="2026-03-07T00:56:04.353716957Z" level=info msg="CreateContainer within sandbox \"ccb94da89dada0fc8f683ba725d3886c6d4c52b09e382da9b8c7807d4963afae\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b9dbcf08f93228a3ed5a7dcb63b22ce2191f6e7c2673e55063754504f8c16fb7\"" Mar 7 00:56:04.356154 containerd[2132]: time="2026-03-07T00:56:04.356020068Z" level=info msg="StartContainer for \"b9dbcf08f93228a3ed5a7dcb63b22ce2191f6e7c2673e55063754504f8c16fb7\"" Mar 7 00:56:04.475192 containerd[2132]: time="2026-03-07T00:56:04.475003022Z" level=info msg="StartContainer for \"b9dbcf08f93228a3ed5a7dcb63b22ce2191f6e7c2673e55063754504f8c16fb7\" returns successfully" Mar 7 00:56:05.876538 kubelet[3603]: E0307 00:56:05.876396 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tdrbz" podUID="5241805d-3644-4de6-80b4-779148c6e9c9" Mar 7 00:56:06.330877 containerd[2132]: time="2026-03-07T00:56:06.330711302Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 7 00:56:06.372991 kubelet[3603]: I0307 00:56:06.371909 3603 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 7 00:56:06.390907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9dbcf08f93228a3ed5a7dcb63b22ce2191f6e7c2673e55063754504f8c16fb7-rootfs.mount: Deactivated successfully. 
Mar 7 00:56:06.399406 containerd[2132]: time="2026-03-07T00:56:06.397262901Z" level=info msg="shim disconnected" id=b9dbcf08f93228a3ed5a7dcb63b22ce2191f6e7c2673e55063754504f8c16fb7 namespace=k8s.io Mar 7 00:56:06.403877 containerd[2132]: time="2026-03-07T00:56:06.399424498Z" level=warning msg="cleaning up after shim disconnected" id=b9dbcf08f93228a3ed5a7dcb63b22ce2191f6e7c2673e55063754504f8c16fb7 namespace=k8s.io Mar 7 00:56:06.403877 containerd[2132]: time="2026-03-07T00:56:06.399469916Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:56:06.589137 kubelet[3603]: I0307 00:56:06.588985 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp82p\" (UniqueName: \"kubernetes.io/projected/42e4545a-e486-4f54-bd6f-2806121371ca-kube-api-access-wp82p\") pod \"coredns-674b8bbfcf-sndg8\" (UID: \"42e4545a-e486-4f54-bd6f-2806121371ca\") " pod="kube-system/coredns-674b8bbfcf-sndg8" Mar 7 00:56:06.589506 kubelet[3603]: I0307 00:56:06.589455 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6210dbc0-cd47-4e52-8ece-fe359619300c-tigera-ca-bundle\") pod \"calico-kube-controllers-578b9ccf58-j8drf\" (UID: \"6210dbc0-cd47-4e52-8ece-fe359619300c\") " pod="calico-system/calico-kube-controllers-578b9ccf58-j8drf" Mar 7 00:56:06.589730 kubelet[3603]: I0307 00:56:06.589695 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78cfc558-a091-4954-aac1-f01bb0fadc54-config-volume\") pod \"coredns-674b8bbfcf-z42qv\" (UID: \"78cfc558-a091-4954-aac1-f01bb0fadc54\") " pod="kube-system/coredns-674b8bbfcf-z42qv" Mar 7 00:56:06.590093 kubelet[3603]: I0307 00:56:06.590056 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/42e4545a-e486-4f54-bd6f-2806121371ca-config-volume\") pod \"coredns-674b8bbfcf-sndg8\" (UID: \"42e4545a-e486-4f54-bd6f-2806121371ca\") " pod="kube-system/coredns-674b8bbfcf-sndg8" Mar 7 00:56:06.595736 kubelet[3603]: I0307 00:56:06.594355 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbqxc\" (UniqueName: \"kubernetes.io/projected/6210dbc0-cd47-4e52-8ece-fe359619300c-kube-api-access-jbqxc\") pod \"calico-kube-controllers-578b9ccf58-j8drf\" (UID: \"6210dbc0-cd47-4e52-8ece-fe359619300c\") " pod="calico-system/calico-kube-controllers-578b9ccf58-j8drf" Mar 7 00:56:06.595736 kubelet[3603]: I0307 00:56:06.594446 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvj9z\" (UniqueName: \"kubernetes.io/projected/78cfc558-a091-4954-aac1-f01bb0fadc54-kube-api-access-fvj9z\") pod \"coredns-674b8bbfcf-z42qv\" (UID: \"78cfc558-a091-4954-aac1-f01bb0fadc54\") " pod="kube-system/coredns-674b8bbfcf-z42qv" Mar 7 00:56:06.695366 kubelet[3603]: I0307 00:56:06.695259 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/983870dd-d757-45a0-8256-46362244d803-nginx-config\") pod \"whisker-69447f8b7b-6p8bz\" (UID: \"983870dd-d757-45a0-8256-46362244d803\") " pod="calico-system/whisker-69447f8b7b-6p8bz" Mar 7 00:56:06.695560 kubelet[3603]: I0307 00:56:06.695423 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/983870dd-d757-45a0-8256-46362244d803-whisker-ca-bundle\") pod \"whisker-69447f8b7b-6p8bz\" (UID: \"983870dd-d757-45a0-8256-46362244d803\") " pod="calico-system/whisker-69447f8b7b-6p8bz" Mar 7 00:56:06.695633 kubelet[3603]: I0307 00:56:06.695570 3603 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/983870dd-d757-45a0-8256-46362244d803-whisker-backend-key-pair\") pod \"whisker-69447f8b7b-6p8bz\" (UID: \"983870dd-d757-45a0-8256-46362244d803\") " pod="calico-system/whisker-69447f8b7b-6p8bz" Mar 7 00:56:06.695699 kubelet[3603]: I0307 00:56:06.695683 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htsfg\" (UniqueName: \"kubernetes.io/projected/466a569a-796d-4554-bfdc-84553d49d7a8-kube-api-access-htsfg\") pod \"calico-apiserver-67454779cb-lj22q\" (UID: \"466a569a-796d-4554-bfdc-84553d49d7a8\") " pod="calico-system/calico-apiserver-67454779cb-lj22q" Mar 7 00:56:06.695778 kubelet[3603]: I0307 00:56:06.695762 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdbvk\" (UniqueName: \"kubernetes.io/projected/983870dd-d757-45a0-8256-46362244d803-kube-api-access-hdbvk\") pod \"whisker-69447f8b7b-6p8bz\" (UID: \"983870dd-d757-45a0-8256-46362244d803\") " pod="calico-system/whisker-69447f8b7b-6p8bz" Mar 7 00:56:06.697024 kubelet[3603]: I0307 00:56:06.695885 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45a627cb-a427-4b7a-bf60-da0e9b3da1b5-config\") pod \"goldmane-5b85766d88-svm6z\" (UID: \"45a627cb-a427-4b7a-bf60-da0e9b3da1b5\") " pod="calico-system/goldmane-5b85766d88-svm6z" Mar 7 00:56:06.697024 kubelet[3603]: I0307 00:56:06.696031 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9bt8\" (UniqueName: \"kubernetes.io/projected/45a627cb-a427-4b7a-bf60-da0e9b3da1b5-kube-api-access-f9bt8\") pod \"goldmane-5b85766d88-svm6z\" (UID: \"45a627cb-a427-4b7a-bf60-da0e9b3da1b5\") " pod="calico-system/goldmane-5b85766d88-svm6z" Mar 7 00:56:06.697024 
kubelet[3603]: I0307 00:56:06.696076 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dksbv\" (UniqueName: \"kubernetes.io/projected/055c9d7f-4150-48cb-a2a9-4df82e634570-kube-api-access-dksbv\") pod \"calico-apiserver-67454779cb-jkkb4\" (UID: \"055c9d7f-4150-48cb-a2a9-4df82e634570\") " pod="calico-system/calico-apiserver-67454779cb-jkkb4" Mar 7 00:56:06.697024 kubelet[3603]: I0307 00:56:06.696242 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45a627cb-a427-4b7a-bf60-da0e9b3da1b5-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-svm6z\" (UID: \"45a627cb-a427-4b7a-bf60-da0e9b3da1b5\") " pod="calico-system/goldmane-5b85766d88-svm6z" Mar 7 00:56:06.697024 kubelet[3603]: I0307 00:56:06.696400 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/055c9d7f-4150-48cb-a2a9-4df82e634570-calico-apiserver-certs\") pod \"calico-apiserver-67454779cb-jkkb4\" (UID: \"055c9d7f-4150-48cb-a2a9-4df82e634570\") " pod="calico-system/calico-apiserver-67454779cb-jkkb4" Mar 7 00:56:06.697417 kubelet[3603]: I0307 00:56:06.696510 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/45a627cb-a427-4b7a-bf60-da0e9b3da1b5-goldmane-key-pair\") pod \"goldmane-5b85766d88-svm6z\" (UID: \"45a627cb-a427-4b7a-bf60-da0e9b3da1b5\") " pod="calico-system/goldmane-5b85766d88-svm6z" Mar 7 00:56:06.697417 kubelet[3603]: I0307 00:56:06.696593 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/466a569a-796d-4554-bfdc-84553d49d7a8-calico-apiserver-certs\") pod \"calico-apiserver-67454779cb-lj22q\" (UID: 
\"466a569a-796d-4554-bfdc-84553d49d7a8\") " pod="calico-system/calico-apiserver-67454779cb-lj22q" Mar 7 00:56:06.851082 containerd[2132]: time="2026-03-07T00:56:06.850386861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z42qv,Uid:78cfc558-a091-4954-aac1-f01bb0fadc54,Namespace:kube-system,Attempt:0,}" Mar 7 00:56:06.868446 containerd[2132]: time="2026-03-07T00:56:06.868361284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sndg8,Uid:42e4545a-e486-4f54-bd6f-2806121371ca,Namespace:kube-system,Attempt:0,}" Mar 7 00:56:06.889109 containerd[2132]: time="2026-03-07T00:56:06.887260181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-578b9ccf58-j8drf,Uid:6210dbc0-cd47-4e52-8ece-fe359619300c,Namespace:calico-system,Attempt:0,}" Mar 7 00:56:06.895369 containerd[2132]: time="2026-03-07T00:56:06.895308835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67454779cb-lj22q,Uid:466a569a-796d-4554-bfdc-84553d49d7a8,Namespace:calico-system,Attempt:0,}" Mar 7 00:56:06.904985 containerd[2132]: time="2026-03-07T00:56:06.904856835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-svm6z,Uid:45a627cb-a427-4b7a-bf60-da0e9b3da1b5,Namespace:calico-system,Attempt:0,}" Mar 7 00:56:07.143418 containerd[2132]: time="2026-03-07T00:56:07.142700895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69447f8b7b-6p8bz,Uid:983870dd-d757-45a0-8256-46362244d803,Namespace:calico-system,Attempt:0,}" Mar 7 00:56:07.144685 containerd[2132]: time="2026-03-07T00:56:07.144604531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67454779cb-jkkb4,Uid:055c9d7f-4150-48cb-a2a9-4df82e634570,Namespace:calico-system,Attempt:0,}" Mar 7 00:56:07.239748 containerd[2132]: time="2026-03-07T00:56:07.239659539Z" level=info msg="CreateContainer within sandbox 
\"ccb94da89dada0fc8f683ba725d3886c6d4c52b09e382da9b8c7807d4963afae\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 7 00:56:07.324632 containerd[2132]: time="2026-03-07T00:56:07.323861075Z" level=info msg="CreateContainer within sandbox \"ccb94da89dada0fc8f683ba725d3886c6d4c52b09e382da9b8c7807d4963afae\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e7a5e4a6e706ee278697abd0cb593969cc1ef8c505dd0311e07700a86a7fe5c1\"" Mar 7 00:56:07.327123 containerd[2132]: time="2026-03-07T00:56:07.327054719Z" level=info msg="StartContainer for \"e7a5e4a6e706ee278697abd0cb593969cc1ef8c505dd0311e07700a86a7fe5c1\"" Mar 7 00:56:07.532978 containerd[2132]: time="2026-03-07T00:56:07.531728134Z" level=error msg="Failed to destroy network for sandbox \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.539037 containerd[2132]: time="2026-03-07T00:56:07.538538766Z" level=error msg="encountered an error cleaning up failed sandbox \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.542364 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d-shm.mount: Deactivated successfully. 
Mar 7 00:56:07.553384 containerd[2132]: time="2026-03-07T00:56:07.553308679Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-578b9ccf58-j8drf,Uid:6210dbc0-cd47-4e52-8ece-fe359619300c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.555544 kubelet[3603]: E0307 00:56:07.555472 3603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.559298 kubelet[3603]: E0307 00:56:07.555587 3603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-578b9ccf58-j8drf" Mar 7 00:56:07.559298 kubelet[3603]: E0307 00:56:07.555623 3603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-578b9ccf58-j8drf" Mar 7 00:56:07.559298 kubelet[3603]: E0307 00:56:07.555704 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-578b9ccf58-j8drf_calico-system(6210dbc0-cd47-4e52-8ece-fe359619300c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-578b9ccf58-j8drf_calico-system(6210dbc0-cd47-4e52-8ece-fe359619300c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-578b9ccf58-j8drf" podUID="6210dbc0-cd47-4e52-8ece-fe359619300c" Mar 7 00:56:07.564829 containerd[2132]: time="2026-03-07T00:56:07.564753843Z" level=error msg="Failed to destroy network for sandbox \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.573363 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b-shm.mount: Deactivated successfully. 
Mar 7 00:56:07.576594 containerd[2132]: time="2026-03-07T00:56:07.576478759Z" level=error msg="encountered an error cleaning up failed sandbox \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.576870 containerd[2132]: time="2026-03-07T00:56:07.576615339Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-svm6z,Uid:45a627cb-a427-4b7a-bf60-da0e9b3da1b5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.578037 kubelet[3603]: E0307 00:56:07.576904 3603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.578201 kubelet[3603]: E0307 00:56:07.578102 3603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-svm6z" Mar 7 00:56:07.578201 kubelet[3603]: E0307 00:56:07.578169 3603 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-svm6z" Mar 7 00:56:07.580407 kubelet[3603]: E0307 00:56:07.578291 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-svm6z_calico-system(45a627cb-a427-4b7a-bf60-da0e9b3da1b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-svm6z_calico-system(45a627cb-a427-4b7a-bf60-da0e9b3da1b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-svm6z" podUID="45a627cb-a427-4b7a-bf60-da0e9b3da1b5" Mar 7 00:56:07.588061 containerd[2132]: time="2026-03-07T00:56:07.587996799Z" level=error msg="Failed to destroy network for sandbox \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.592555 containerd[2132]: time="2026-03-07T00:56:07.591571802Z" level=error msg="encountered an error cleaning up failed sandbox \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.592976 containerd[2132]: time="2026-03-07T00:56:07.592773999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z42qv,Uid:78cfc558-a091-4954-aac1-f01bb0fadc54,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.593196 kubelet[3603]: E0307 00:56:07.593132 3603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.593294 kubelet[3603]: E0307 00:56:07.593217 3603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z42qv" Mar 7 00:56:07.593364 kubelet[3603]: E0307 00:56:07.593284 3603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-z42qv" Mar 7 00:56:07.593421 kubelet[3603]: E0307 00:56:07.593358 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-z42qv_kube-system(78cfc558-a091-4954-aac1-f01bb0fadc54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-z42qv_kube-system(78cfc558-a091-4954-aac1-f01bb0fadc54)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-z42qv" podUID="78cfc558-a091-4954-aac1-f01bb0fadc54" Mar 7 00:56:07.598917 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1-shm.mount: Deactivated successfully. 
Mar 7 00:56:07.641825 containerd[2132]: time="2026-03-07T00:56:07.641579637Z" level=error msg="Failed to destroy network for sandbox \"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.642922 containerd[2132]: time="2026-03-07T00:56:07.642791067Z" level=error msg="encountered an error cleaning up failed sandbox \"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.643295 containerd[2132]: time="2026-03-07T00:56:07.643076125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67454779cb-lj22q,Uid:466a569a-796d-4554-bfdc-84553d49d7a8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.646102 containerd[2132]: time="2026-03-07T00:56:07.645927214Z" level=error msg="Failed to destroy network for sandbox \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.646728 kubelet[3603]: E0307 00:56:07.646462 3603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.648970 kubelet[3603]: E0307 00:56:07.647793 3603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-67454779cb-lj22q" Mar 7 00:56:07.648970 kubelet[3603]: E0307 00:56:07.648034 3603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-67454779cb-lj22q" Mar 7 00:56:07.648970 kubelet[3603]: E0307 00:56:07.648500 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67454779cb-lj22q_calico-system(466a569a-796d-4554-bfdc-84553d49d7a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67454779cb-lj22q_calico-system(466a569a-796d-4554-bfdc-84553d49d7a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-apiserver-67454779cb-lj22q" podUID="466a569a-796d-4554-bfdc-84553d49d7a8" Mar 7 00:56:07.650333 containerd[2132]: time="2026-03-07T00:56:07.648150377Z" level=error msg="encountered an error cleaning up failed sandbox \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.650333 containerd[2132]: time="2026-03-07T00:56:07.648266799Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sndg8,Uid:42e4545a-e486-4f54-bd6f-2806121371ca,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.651568 kubelet[3603]: E0307 00:56:07.650966 3603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.651568 kubelet[3603]: E0307 00:56:07.651085 3603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-sndg8" Mar 7 00:56:07.651568 kubelet[3603]: E0307 00:56:07.651250 3603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-sndg8" Mar 7 00:56:07.651931 kubelet[3603]: E0307 00:56:07.651511 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-sndg8_kube-system(42e4545a-e486-4f54-bd6f-2806121371ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-sndg8_kube-system(42e4545a-e486-4f54-bd6f-2806121371ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-sndg8" podUID="42e4545a-e486-4f54-bd6f-2806121371ca" Mar 7 00:56:07.770927 containerd[2132]: time="2026-03-07T00:56:07.770490438Z" level=error msg="Failed to destroy network for sandbox \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.771437 containerd[2132]: time="2026-03-07T00:56:07.771123958Z" level=error msg="encountered an error cleaning up failed sandbox \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.771542 containerd[2132]: time="2026-03-07T00:56:07.771471496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67454779cb-jkkb4,Uid:055c9d7f-4150-48cb-a2a9-4df82e634570,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.772101 kubelet[3603]: E0307 00:56:07.771758 3603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.772101 kubelet[3603]: E0307 00:56:07.772011 3603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-67454779cb-jkkb4" Mar 7 00:56:07.772101 kubelet[3603]: E0307 00:56:07.772073 3603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-67454779cb-jkkb4" Mar 7 00:56:07.773272 kubelet[3603]: E0307 00:56:07.772198 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67454779cb-jkkb4_calico-system(055c9d7f-4150-48cb-a2a9-4df82e634570)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67454779cb-jkkb4_calico-system(055c9d7f-4150-48cb-a2a9-4df82e634570)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-67454779cb-jkkb4" podUID="055c9d7f-4150-48cb-a2a9-4df82e634570" Mar 7 00:56:07.773814 containerd[2132]: time="2026-03-07T00:56:07.773582727Z" level=info msg="StartContainer for \"e7a5e4a6e706ee278697abd0cb593969cc1ef8c505dd0311e07700a86a7fe5c1\" returns successfully" Mar 7 00:56:07.777456 containerd[2132]: time="2026-03-07T00:56:07.777205802Z" level=error msg="Failed to destroy network for sandbox \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.779196 containerd[2132]: time="2026-03-07T00:56:07.778536271Z" level=error msg="encountered an error cleaning up failed sandbox \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.779196 containerd[2132]: time="2026-03-07T00:56:07.778694810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69447f8b7b-6p8bz,Uid:983870dd-d757-45a0-8256-46362244d803,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.781343 kubelet[3603]: E0307 00:56:07.779053 3603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:07.781724 kubelet[3603]: E0307 00:56:07.781478 3603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69447f8b7b-6p8bz" Mar 7 00:56:07.781724 kubelet[3603]: E0307 00:56:07.781548 3603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-69447f8b7b-6p8bz" Mar 7 00:56:07.781724 kubelet[3603]: E0307 00:56:07.781649 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69447f8b7b-6p8bz_calico-system(983870dd-d757-45a0-8256-46362244d803)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69447f8b7b-6p8bz_calico-system(983870dd-d757-45a0-8256-46362244d803)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69447f8b7b-6p8bz" podUID="983870dd-d757-45a0-8256-46362244d803" Mar 7 00:56:07.885016 containerd[2132]: time="2026-03-07T00:56:07.882505722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tdrbz,Uid:5241805d-3644-4de6-80b4-779148c6e9c9,Namespace:calico-system,Attempt:0,}" Mar 7 00:56:08.169786 kubelet[3603]: I0307 00:56:08.167658 3603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Mar 7 00:56:08.170710 containerd[2132]: time="2026-03-07T00:56:08.170513204Z" level=info msg="StopPodSandbox for \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\"" Mar 7 00:56:08.173036 kubelet[3603]: I0307 00:56:08.172891 3603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" Mar 7 00:56:08.174470 containerd[2132]: time="2026-03-07T00:56:08.173896014Z" level=info msg="Ensure that sandbox 5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610 in task-service has been cleanup successfully" Mar 7 00:56:08.174900 containerd[2132]: time="2026-03-07T00:56:08.174857947Z" 
level=info msg="StopPodSandbox for \"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\"" Mar 7 00:56:08.175864 containerd[2132]: time="2026-03-07T00:56:08.175247866Z" level=info msg="Ensure that sandbox 1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd in task-service has been cleanup successfully" Mar 7 00:56:08.185997 kubelet[3603]: I0307 00:56:08.185108 3603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Mar 7 00:56:08.189920 containerd[2132]: time="2026-03-07T00:56:08.189571336Z" level=info msg="StopPodSandbox for \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\"" Mar 7 00:56:08.191835 containerd[2132]: time="2026-03-07T00:56:08.189917637Z" level=info msg="Ensure that sandbox 68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d in task-service has been cleanup successfully" Mar 7 00:56:08.198566 kubelet[3603]: I0307 00:56:08.198505 3603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" Mar 7 00:56:08.204388 kubelet[3603]: I0307 00:56:08.201730 3603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Mar 7 00:56:08.204546 containerd[2132]: time="2026-03-07T00:56:08.204218812Z" level=info msg="StopPodSandbox for \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\"" Mar 7 00:56:08.207370 containerd[2132]: time="2026-03-07T00:56:08.206739628Z" level=info msg="Ensure that sandbox b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569 in task-service has been cleanup successfully" Mar 7 00:56:08.217809 containerd[2132]: time="2026-03-07T00:56:08.216427906Z" level=info msg="StopPodSandbox for \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\"" Mar 7 00:56:08.234986 
containerd[2132]: time="2026-03-07T00:56:08.233240112Z" level=info msg="Ensure that sandbox 2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1 in task-service has been cleanup successfully" Mar 7 00:56:08.311620 kubelet[3603]: I0307 00:56:08.311470 3603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Mar 7 00:56:08.317985 containerd[2132]: time="2026-03-07T00:56:08.316775796Z" level=info msg="StopPodSandbox for \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\"" Mar 7 00:56:08.332988 containerd[2132]: time="2026-03-07T00:56:08.332881941Z" level=info msg="Ensure that sandbox 63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a in task-service has been cleanup successfully" Mar 7 00:56:08.340377 kubelet[3603]: I0307 00:56:08.340271 3603 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Mar 7 00:56:08.345105 containerd[2132]: time="2026-03-07T00:56:08.345010570Z" level=info msg="StopPodSandbox for \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\"" Mar 7 00:56:08.346274 containerd[2132]: time="2026-03-07T00:56:08.346226118Z" level=info msg="Ensure that sandbox 89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b in task-service has been cleanup successfully" Mar 7 00:56:08.399213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610-shm.mount: Deactivated successfully. Mar 7 00:56:08.399567 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a-shm.mount: Deactivated successfully. 
Mar 7 00:56:08.399825 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd-shm.mount: Deactivated successfully. Mar 7 00:56:08.402214 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569-shm.mount: Deactivated successfully. Mar 7 00:56:08.499319 systemd[1]: run-containerd-runc-k8s.io-e7a5e4a6e706ee278697abd0cb593969cc1ef8c505dd0311e07700a86a7fe5c1-runc.VYL2mN.mount: Deactivated successfully. Mar 7 00:56:08.985310 kubelet[3603]: I0307 00:56:08.982820 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-znk9c" podStartSLOduration=6.272069009 podStartE2EDuration="25.982796056s" podCreationTimestamp="2026-03-07 00:55:43 +0000 UTC" firstStartedPulling="2026-03-07 00:55:44.599807125 +0000 UTC m=+33.042477133" lastFinishedPulling="2026-03-07 00:56:04.310534184 +0000 UTC m=+52.753204180" observedRunningTime="2026-03-07 00:56:08.325712234 +0000 UTC m=+56.768382254" watchObservedRunningTime="2026-03-07 00:56:08.982796056 +0000 UTC m=+57.425466064" Mar 7 00:56:09.396671 containerd[2132]: 2026-03-07 00:56:08.254 [INFO][4699] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516" Mar 7 00:56:09.396671 containerd[2132]: 2026-03-07 00:56:08.254 [INFO][4699] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516" iface="eth0" netns="/var/run/netns/cni-59d60963-9f3d-f6b0-a2a6-5c6f15344bda" Mar 7 00:56:09.396671 containerd[2132]: 2026-03-07 00:56:08.255 [INFO][4699] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516" iface="eth0" netns="/var/run/netns/cni-59d60963-9f3d-f6b0-a2a6-5c6f15344bda" Mar 7 00:56:09.396671 containerd[2132]: 2026-03-07 00:56:08.262 [INFO][4699] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516" iface="eth0" netns="/var/run/netns/cni-59d60963-9f3d-f6b0-a2a6-5c6f15344bda" Mar 7 00:56:09.396671 containerd[2132]: 2026-03-07 00:56:08.262 [INFO][4699] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516" Mar 7 00:56:09.396671 containerd[2132]: 2026-03-07 00:56:08.262 [INFO][4699] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516" Mar 7 00:56:09.396671 containerd[2132]: 2026-03-07 00:56:09.075 [INFO][4758] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516" HandleID="k8s-pod-network.7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516" Workload="ip--172--31--21--232-k8s-csi--node--driver--tdrbz-eth0" Mar 7 00:56:09.396671 containerd[2132]: 2026-03-07 00:56:09.078 [INFO][4758] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:09.396671 containerd[2132]: 2026-03-07 00:56:09.078 [INFO][4758] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:09.396671 containerd[2132]: 2026-03-07 00:56:09.256 [WARNING][4758] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516" HandleID="k8s-pod-network.7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516" Workload="ip--172--31--21--232-k8s-csi--node--driver--tdrbz-eth0" Mar 7 00:56:09.396671 containerd[2132]: 2026-03-07 00:56:09.257 [INFO][4758] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516" HandleID="k8s-pod-network.7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516" Workload="ip--172--31--21--232-k8s-csi--node--driver--tdrbz-eth0" Mar 7 00:56:09.396671 containerd[2132]: 2026-03-07 00:56:09.282 [INFO][4758] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:09.396671 containerd[2132]: 2026-03-07 00:56:09.370 [INFO][4699] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516" Mar 7 00:56:09.414016 systemd[1]: run-netns-cni\x2d59d60963\x2d9f3d\x2df6b0\x2da2a6\x2d5c6f15344bda.mount: Deactivated successfully. Mar 7 00:56:09.427080 containerd[2132]: 2026-03-07 00:56:08.976 [INFO][4784] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Mar 7 00:56:09.427080 containerd[2132]: 2026-03-07 00:56:08.979 [INFO][4784] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" iface="eth0" netns="/var/run/netns/cni-ee31772a-92e5-36a6-ae53-7de7a81cd0cd" Mar 7 00:56:09.427080 containerd[2132]: 2026-03-07 00:56:08.983 [INFO][4784] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" iface="eth0" netns="/var/run/netns/cni-ee31772a-92e5-36a6-ae53-7de7a81cd0cd" Mar 7 00:56:09.427080 containerd[2132]: 2026-03-07 00:56:08.991 [INFO][4784] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" iface="eth0" netns="/var/run/netns/cni-ee31772a-92e5-36a6-ae53-7de7a81cd0cd" Mar 7 00:56:09.427080 containerd[2132]: 2026-03-07 00:56:08.991 [INFO][4784] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Mar 7 00:56:09.427080 containerd[2132]: 2026-03-07 00:56:08.991 [INFO][4784] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Mar 7 00:56:09.427080 containerd[2132]: 2026-03-07 00:56:09.246 [INFO][4839] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" HandleID="k8s-pod-network.2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" Mar 7 00:56:09.427080 containerd[2132]: 2026-03-07 00:56:09.252 [INFO][4839] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:09.427080 containerd[2132]: 2026-03-07 00:56:09.286 [INFO][4839] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:09.427080 containerd[2132]: 2026-03-07 00:56:09.353 [WARNING][4839] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" HandleID="k8s-pod-network.2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" Mar 7 00:56:09.427080 containerd[2132]: 2026-03-07 00:56:09.353 [INFO][4839] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" HandleID="k8s-pod-network.2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" Mar 7 00:56:09.427080 containerd[2132]: 2026-03-07 00:56:09.360 [INFO][4839] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:09.427080 containerd[2132]: 2026-03-07 00:56:09.387 [INFO][4784] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Mar 7 00:56:09.451480 systemd[1]: run-netns-cni\x2dee31772a\x2d92e5\x2d36a6\x2dae53\x2d7de7a81cd0cd.mount: Deactivated successfully. Mar 7 00:56:09.473587 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516-shm.mount: Deactivated successfully. 
Mar 7 00:56:09.483744 containerd[2132]: time="2026-03-07T00:56:09.483653135Z" level=info msg="TearDown network for sandbox \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\" successfully" Mar 7 00:56:09.483744 containerd[2132]: time="2026-03-07T00:56:09.483716442Z" level=info msg="StopPodSandbox for \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\" returns successfully" Mar 7 00:56:09.499656 containerd[2132]: time="2026-03-07T00:56:09.499322898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z42qv,Uid:78cfc558-a091-4954-aac1-f01bb0fadc54,Namespace:kube-system,Attempt:1,}" Mar 7 00:56:09.521760 systemd[1]: Started sshd@7-172.31.21.232:22-20.161.92.111:36494.service - OpenSSH per-connection server daemon (20.161.92.111:36494). Mar 7 00:56:09.560920 containerd[2132]: time="2026-03-07T00:56:09.558639238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tdrbz,Uid:5241805d-3644-4de6-80b4-779148c6e9c9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:09.561133 kubelet[3603]: E0307 00:56:09.560797 3603 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 7 00:56:09.561133 kubelet[3603]: E0307 00:56:09.560889 3603 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tdrbz" Mar 7 00:56:09.561133 kubelet[3603]: E0307 00:56:09.560928 3603 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tdrbz" Mar 7 00:56:09.562520 kubelet[3603]: E0307 00:56:09.561095 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tdrbz_calico-system(5241805d-3644-4de6-80b4-779148c6e9c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tdrbz_calico-system(5241805d-3644-4de6-80b4-779148c6e9c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7354c26ead3280d93223be13623240d56368df76edebc673e1b37f33ad1b1516\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tdrbz" podUID="5241805d-3644-4de6-80b4-779148c6e9c9" Mar 7 00:56:09.670882 containerd[2132]: 2026-03-07 00:56:09.232 [INFO][4748] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Mar 7 00:56:09.670882 containerd[2132]: 2026-03-07 00:56:09.232 [INFO][4748] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" iface="eth0" netns="/var/run/netns/cni-649f5e67-4441-8f4a-ee0d-a64713e7b89d" Mar 7 00:56:09.670882 containerd[2132]: 2026-03-07 00:56:09.233 [INFO][4748] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" iface="eth0" netns="/var/run/netns/cni-649f5e67-4441-8f4a-ee0d-a64713e7b89d" Mar 7 00:56:09.670882 containerd[2132]: 2026-03-07 00:56:09.240 [INFO][4748] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" iface="eth0" netns="/var/run/netns/cni-649f5e67-4441-8f4a-ee0d-a64713e7b89d" Mar 7 00:56:09.670882 containerd[2132]: 2026-03-07 00:56:09.243 [INFO][4748] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Mar 7 00:56:09.670882 containerd[2132]: 2026-03-07 00:56:09.243 [INFO][4748] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Mar 7 00:56:09.670882 containerd[2132]: 2026-03-07 00:56:09.429 [INFO][4867] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" HandleID="k8s-pod-network.5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" Mar 7 00:56:09.670882 containerd[2132]: 2026-03-07 00:56:09.429 [INFO][4867] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:09.670882 containerd[2132]: 2026-03-07 00:56:09.430 [INFO][4867] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:09.670882 containerd[2132]: 2026-03-07 00:56:09.487 [WARNING][4867] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" HandleID="k8s-pod-network.5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" Mar 7 00:56:09.670882 containerd[2132]: 2026-03-07 00:56:09.487 [INFO][4867] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" HandleID="k8s-pod-network.5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" Mar 7 00:56:09.670882 containerd[2132]: 2026-03-07 00:56:09.513 [INFO][4867] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:09.670882 containerd[2132]: 2026-03-07 00:56:09.584 [INFO][4748] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Mar 7 00:56:09.670882 containerd[2132]: time="2026-03-07T00:56:09.668726526Z" level=info msg="TearDown network for sandbox \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\" successfully" Mar 7 00:56:09.670882 containerd[2132]: time="2026-03-07T00:56:09.668771297Z" level=info msg="StopPodSandbox for \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\" returns successfully" Mar 7 00:56:09.675786 containerd[2132]: time="2026-03-07T00:56:09.675457030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67454779cb-jkkb4,Uid:055c9d7f-4150-48cb-a2a9-4df82e634570,Namespace:calico-system,Attempt:1,}" Mar 7 00:56:09.832722 containerd[2132]: 2026-03-07 00:56:09.262 [INFO][4760] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" Mar 7 00:56:09.832722 containerd[2132]: 2026-03-07 00:56:09.269 [INFO][4760] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" iface="eth0" netns="/var/run/netns/cni-8b1e1853-a6d5-408f-0d61-00f83e1ad52a" Mar 7 00:56:09.832722 containerd[2132]: 2026-03-07 00:56:09.274 [INFO][4760] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" iface="eth0" netns="/var/run/netns/cni-8b1e1853-a6d5-408f-0d61-00f83e1ad52a" Mar 7 00:56:09.832722 containerd[2132]: 2026-03-07 00:56:09.294 [INFO][4760] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" iface="eth0" netns="/var/run/netns/cni-8b1e1853-a6d5-408f-0d61-00f83e1ad52a" Mar 7 00:56:09.832722 containerd[2132]: 2026-03-07 00:56:09.294 [INFO][4760] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" Mar 7 00:56:09.832722 containerd[2132]: 2026-03-07 00:56:09.294 [INFO][4760] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" Mar 7 00:56:09.832722 containerd[2132]: 2026-03-07 00:56:09.700 [INFO][4874] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" HandleID="k8s-pod-network.b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0" Mar 7 00:56:09.832722 containerd[2132]: 2026-03-07 00:56:09.707 [INFO][4874] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:09.832722 containerd[2132]: 2026-03-07 00:56:09.708 [INFO][4874] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:09.832722 containerd[2132]: 2026-03-07 00:56:09.750 [WARNING][4874] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" HandleID="k8s-pod-network.b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0" Mar 7 00:56:09.832722 containerd[2132]: 2026-03-07 00:56:09.751 [INFO][4874] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" HandleID="k8s-pod-network.b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0" Mar 7 00:56:09.832722 containerd[2132]: 2026-03-07 00:56:09.759 [INFO][4874] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:09.832722 containerd[2132]: 2026-03-07 00:56:09.797 [INFO][4760] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" Mar 7 00:56:09.833842 containerd[2132]: time="2026-03-07T00:56:09.833662699Z" level=info msg="TearDown network for sandbox \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\" successfully" Mar 7 00:56:09.833842 containerd[2132]: time="2026-03-07T00:56:09.833709294Z" level=info msg="StopPodSandbox for \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\" returns successfully" Mar 7 00:56:09.836852 containerd[2132]: time="2026-03-07T00:56:09.836320851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sndg8,Uid:42e4545a-e486-4f54-bd6f-2806121371ca,Namespace:kube-system,Attempt:1,}" Mar 7 00:56:09.852442 containerd[2132]: 2026-03-07 00:56:09.117 [INFO][4731] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" Mar 7 00:56:09.852442 containerd[2132]: 2026-03-07 00:56:09.121 [INFO][4731] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" iface="eth0" netns="/var/run/netns/cni-6ba42b17-816a-0365-f33c-f85d47157c6c" Mar 7 00:56:09.852442 containerd[2132]: 2026-03-07 00:56:09.123 [INFO][4731] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" iface="eth0" netns="/var/run/netns/cni-6ba42b17-816a-0365-f33c-f85d47157c6c" Mar 7 00:56:09.852442 containerd[2132]: 2026-03-07 00:56:09.126 [INFO][4731] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" iface="eth0" netns="/var/run/netns/cni-6ba42b17-816a-0365-f33c-f85d47157c6c" Mar 7 00:56:09.852442 containerd[2132]: 2026-03-07 00:56:09.128 [INFO][4731] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" Mar 7 00:56:09.852442 containerd[2132]: 2026-03-07 00:56:09.128 [INFO][4731] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" Mar 7 00:56:09.852442 containerd[2132]: 2026-03-07 00:56:09.763 [INFO][4850] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" HandleID="k8s-pod-network.1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0" Mar 7 00:56:09.852442 containerd[2132]: 2026-03-07 00:56:09.778 [INFO][4850] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:09.852442 containerd[2132]: 2026-03-07 00:56:09.778 [INFO][4850] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:09.852442 containerd[2132]: 2026-03-07 00:56:09.804 [WARNING][4850] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" HandleID="k8s-pod-network.1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0" Mar 7 00:56:09.852442 containerd[2132]: 2026-03-07 00:56:09.804 [INFO][4850] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" HandleID="k8s-pod-network.1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0" Mar 7 00:56:09.852442 containerd[2132]: 2026-03-07 00:56:09.808 [INFO][4850] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:09.852442 containerd[2132]: 2026-03-07 00:56:09.830 [INFO][4731] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" Mar 7 00:56:09.854662 containerd[2132]: time="2026-03-07T00:56:09.854168514Z" level=info msg="TearDown network for sandbox \"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\" successfully" Mar 7 00:56:09.854662 containerd[2132]: time="2026-03-07T00:56:09.854219300Z" level=info msg="StopPodSandbox for \"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\" returns successfully" Mar 7 00:56:09.858644 containerd[2132]: time="2026-03-07T00:56:09.856783902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67454779cb-lj22q,Uid:466a569a-796d-4554-bfdc-84553d49d7a8,Namespace:calico-system,Attempt:1,}" Mar 7 00:56:09.892438 containerd[2132]: 2026-03-07 00:56:09.184 [INFO][4809] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Mar 7 00:56:09.892438 containerd[2132]: 2026-03-07 00:56:09.191 [INFO][4809] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" iface="eth0" netns="/var/run/netns/cni-d9ad5705-6094-0560-fc92-b1277e28c07c" Mar 7 00:56:09.892438 containerd[2132]: 2026-03-07 00:56:09.202 [INFO][4809] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" iface="eth0" netns="/var/run/netns/cni-d9ad5705-6094-0560-fc92-b1277e28c07c" Mar 7 00:56:09.892438 containerd[2132]: 2026-03-07 00:56:09.212 [INFO][4809] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" iface="eth0" netns="/var/run/netns/cni-d9ad5705-6094-0560-fc92-b1277e28c07c" Mar 7 00:56:09.892438 containerd[2132]: 2026-03-07 00:56:09.212 [INFO][4809] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Mar 7 00:56:09.892438 containerd[2132]: 2026-03-07 00:56:09.212 [INFO][4809] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Mar 7 00:56:09.892438 containerd[2132]: 2026-03-07 00:56:09.770 [INFO][4858] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" HandleID="k8s-pod-network.89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Workload="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" Mar 7 00:56:09.892438 containerd[2132]: 2026-03-07 00:56:09.775 [INFO][4858] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:09.892438 containerd[2132]: 2026-03-07 00:56:09.808 [INFO][4858] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:09.892438 containerd[2132]: 2026-03-07 00:56:09.860 [WARNING][4858] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" HandleID="k8s-pod-network.89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Workload="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" Mar 7 00:56:09.892438 containerd[2132]: 2026-03-07 00:56:09.860 [INFO][4858] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" HandleID="k8s-pod-network.89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Workload="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" Mar 7 00:56:09.892438 containerd[2132]: 2026-03-07 00:56:09.866 [INFO][4858] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:09.892438 containerd[2132]: 2026-03-07 00:56:09.879 [INFO][4809] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Mar 7 00:56:09.894925 containerd[2132]: time="2026-03-07T00:56:09.893536657Z" level=info msg="TearDown network for sandbox \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\" successfully" Mar 7 00:56:09.894925 containerd[2132]: time="2026-03-07T00:56:09.894083662Z" level=info msg="StopPodSandbox for \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\" returns successfully" Mar 7 00:56:09.901742 containerd[2132]: time="2026-03-07T00:56:09.901178152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-svm6z,Uid:45a627cb-a427-4b7a-bf60-da0e9b3da1b5,Namespace:calico-system,Attempt:1,}" Mar 7 00:56:09.948205 containerd[2132]: 2026-03-07 00:56:09.188 [INFO][4757] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Mar 7 00:56:09.948205 containerd[2132]: 2026-03-07 00:56:09.196 [INFO][4757] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" iface="eth0" netns="/var/run/netns/cni-9b1296ad-98d0-7b64-7939-77671d18365b" Mar 7 00:56:09.948205 containerd[2132]: 2026-03-07 00:56:09.206 [INFO][4757] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" iface="eth0" netns="/var/run/netns/cni-9b1296ad-98d0-7b64-7939-77671d18365b" Mar 7 00:56:09.948205 containerd[2132]: 2026-03-07 00:56:09.211 [INFO][4757] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" iface="eth0" netns="/var/run/netns/cni-9b1296ad-98d0-7b64-7939-77671d18365b" Mar 7 00:56:09.948205 containerd[2132]: 2026-03-07 00:56:09.211 [INFO][4757] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Mar 7 00:56:09.948205 containerd[2132]: 2026-03-07 00:56:09.211 [INFO][4757] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Mar 7 00:56:09.948205 containerd[2132]: 2026-03-07 00:56:09.844 [INFO][4859] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" HandleID="k8s-pod-network.68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Workload="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" Mar 7 00:56:09.948205 containerd[2132]: 2026-03-07 00:56:09.849 [INFO][4859] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:09.948205 containerd[2132]: 2026-03-07 00:56:09.871 [INFO][4859] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:56:09.948205 containerd[2132]: 2026-03-07 00:56:09.908 [WARNING][4859] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" HandleID="k8s-pod-network.68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Workload="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" Mar 7 00:56:09.948205 containerd[2132]: 2026-03-07 00:56:09.909 [INFO][4859] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" HandleID="k8s-pod-network.68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Workload="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" Mar 7 00:56:09.948205 containerd[2132]: 2026-03-07 00:56:09.913 [INFO][4859] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:09.948205 containerd[2132]: 2026-03-07 00:56:09.928 [INFO][4757] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Mar 7 00:56:09.956119 containerd[2132]: time="2026-03-07T00:56:09.953918817Z" level=info msg="TearDown network for sandbox \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\" successfully" Mar 7 00:56:09.956119 containerd[2132]: time="2026-03-07T00:56:09.954007926Z" level=info msg="StopPodSandbox for \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\" returns successfully" Mar 7 00:56:09.959649 containerd[2132]: time="2026-03-07T00:56:09.959397011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-578b9ccf58-j8drf,Uid:6210dbc0-cd47-4e52-8ece-fe359619300c,Namespace:calico-system,Attempt:1,}" Mar 7 00:56:10.001907 containerd[2132]: 2026-03-07 00:56:09.272 [INFO][4805] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Mar 7 00:56:10.001907 containerd[2132]: 2026-03-07 00:56:09.273 [INFO][4805] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" iface="eth0" netns="/var/run/netns/cni-beccdd22-854a-b758-618d-676599818a0b" Mar 7 00:56:10.001907 containerd[2132]: 2026-03-07 00:56:09.284 [INFO][4805] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" iface="eth0" netns="/var/run/netns/cni-beccdd22-854a-b758-618d-676599818a0b" Mar 7 00:56:10.001907 containerd[2132]: 2026-03-07 00:56:09.295 [INFO][4805] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" iface="eth0" netns="/var/run/netns/cni-beccdd22-854a-b758-618d-676599818a0b" Mar 7 00:56:10.001907 containerd[2132]: 2026-03-07 00:56:09.295 [INFO][4805] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Mar 7 00:56:10.001907 containerd[2132]: 2026-03-07 00:56:09.296 [INFO][4805] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Mar 7 00:56:10.001907 containerd[2132]: 2026-03-07 00:56:09.896 [INFO][4875] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" HandleID="k8s-pod-network.63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Workload="ip--172--31--21--232-k8s-whisker--69447f8b7b--6p8bz-eth0" Mar 7 00:56:10.001907 containerd[2132]: 2026-03-07 00:56:09.899 [INFO][4875] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:10.001907 containerd[2132]: 2026-03-07 00:56:09.917 [INFO][4875] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:10.001907 containerd[2132]: 2026-03-07 00:56:09.952 [WARNING][4875] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" HandleID="k8s-pod-network.63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Workload="ip--172--31--21--232-k8s-whisker--69447f8b7b--6p8bz-eth0" Mar 7 00:56:10.001907 containerd[2132]: 2026-03-07 00:56:09.952 [INFO][4875] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" HandleID="k8s-pod-network.63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Workload="ip--172--31--21--232-k8s-whisker--69447f8b7b--6p8bz-eth0" Mar 7 00:56:10.001907 containerd[2132]: 2026-03-07 00:56:09.957 [INFO][4875] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:10.001907 containerd[2132]: 2026-03-07 00:56:09.972 [INFO][4805] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Mar 7 00:56:10.003769 containerd[2132]: time="2026-03-07T00:56:10.003027956Z" level=info msg="TearDown network for sandbox \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\" successfully" Mar 7 00:56:10.003769 containerd[2132]: time="2026-03-07T00:56:10.003079162Z" level=info msg="StopPodSandbox for \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\" returns successfully" Mar 7 00:56:10.046201 kubelet[3603]: I0307 00:56:10.045141 3603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/983870dd-d757-45a0-8256-46362244d803-nginx-config\") pod \"983870dd-d757-45a0-8256-46362244d803\" (UID: \"983870dd-d757-45a0-8256-46362244d803\") " Mar 7 00:56:10.046201 kubelet[3603]: I0307 00:56:10.045390 3603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdbvk\" (UniqueName: \"kubernetes.io/projected/983870dd-d757-45a0-8256-46362244d803-kube-api-access-hdbvk\") pod 
\"983870dd-d757-45a0-8256-46362244d803\" (UID: \"983870dd-d757-45a0-8256-46362244d803\") " Mar 7 00:56:10.046201 kubelet[3603]: I0307 00:56:10.045463 3603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/983870dd-d757-45a0-8256-46362244d803-whisker-backend-key-pair\") pod \"983870dd-d757-45a0-8256-46362244d803\" (UID: \"983870dd-d757-45a0-8256-46362244d803\") " Mar 7 00:56:10.046201 kubelet[3603]: I0307 00:56:10.045530 3603 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/983870dd-d757-45a0-8256-46362244d803-whisker-ca-bundle\") pod \"983870dd-d757-45a0-8256-46362244d803\" (UID: \"983870dd-d757-45a0-8256-46362244d803\") " Mar 7 00:56:10.049377 kubelet[3603]: I0307 00:56:10.048546 3603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983870dd-d757-45a0-8256-46362244d803-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "983870dd-d757-45a0-8256-46362244d803" (UID: "983870dd-d757-45a0-8256-46362244d803"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 00:56:10.049529 kubelet[3603]: I0307 00:56:10.049486 3603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983870dd-d757-45a0-8256-46362244d803-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "983870dd-d757-45a0-8256-46362244d803" (UID: "983870dd-d757-45a0-8256-46362244d803"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 7 00:56:10.062480 kubelet[3603]: I0307 00:56:10.060780 3603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983870dd-d757-45a0-8256-46362244d803-kube-api-access-hdbvk" (OuterVolumeSpecName: "kube-api-access-hdbvk") pod "983870dd-d757-45a0-8256-46362244d803" (UID: "983870dd-d757-45a0-8256-46362244d803"). InnerVolumeSpecName "kube-api-access-hdbvk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 7 00:56:10.073345 kubelet[3603]: I0307 00:56:10.073080 3603 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983870dd-d757-45a0-8256-46362244d803-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "983870dd-d757-45a0-8256-46362244d803" (UID: "983870dd-d757-45a0-8256-46362244d803"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 7 00:56:10.075526 sshd[4890]: Accepted publickey for core from 20.161.92.111 port 36494 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:10.082537 sshd[4890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:10.097811 systemd-logind[2105]: New session 8 of user core. Mar 7 00:56:10.113972 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 7 00:56:10.146740 kubelet[3603]: I0307 00:56:10.146427 3603 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/983870dd-d757-45a0-8256-46362244d803-whisker-ca-bundle\") on node \"ip-172-31-21-232\" DevicePath \"\"" Mar 7 00:56:10.146740 kubelet[3603]: I0307 00:56:10.146500 3603 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/983870dd-d757-45a0-8256-46362244d803-nginx-config\") on node \"ip-172-31-21-232\" DevicePath \"\"" Mar 7 00:56:10.146740 kubelet[3603]: I0307 00:56:10.146549 3603 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hdbvk\" (UniqueName: \"kubernetes.io/projected/983870dd-d757-45a0-8256-46362244d803-kube-api-access-hdbvk\") on node \"ip-172-31-21-232\" DevicePath \"\"" Mar 7 00:56:10.146740 kubelet[3603]: I0307 00:56:10.146574 3603 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/983870dd-d757-45a0-8256-46362244d803-whisker-backend-key-pair\") on node \"ip-172-31-21-232\" DevicePath \"\"" Mar 7 00:56:10.376070 containerd[2132]: time="2026-03-07T00:56:10.375994036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tdrbz,Uid:5241805d-3644-4de6-80b4-779148c6e9c9,Namespace:calico-system,Attempt:0,}" Mar 7 00:56:10.442700 systemd[1]: run-netns-cni\x2d649f5e67\x2d4441\x2d8f4a\x2dee0d\x2da64713e7b89d.mount: Deactivated successfully. Mar 7 00:56:10.445097 systemd[1]: run-netns-cni\x2dbeccdd22\x2d854a\x2db758\x2d618d\x2d676599818a0b.mount: Deactivated successfully. Mar 7 00:56:10.445386 systemd[1]: run-netns-cni\x2dd9ad5705\x2d6094\x2d0560\x2dfc92\x2db1277e28c07c.mount: Deactivated successfully. Mar 7 00:56:10.445615 systemd[1]: run-netns-cni\x2d6ba42b17\x2d816a\x2d0365\x2df33c\x2df85d47157c6c.mount: Deactivated successfully. 
Mar 7 00:56:10.445867 systemd[1]: run-netns-cni\x2d8b1e1853\x2da6d5\x2d408f\x2d0d61\x2d00f83e1ad52a.mount: Deactivated successfully. Mar 7 00:56:10.446127 systemd[1]: run-netns-cni\x2d9b1296ad\x2d98d0\x2d7b64\x2d7939\x2d77671d18365b.mount: Deactivated successfully. Mar 7 00:56:10.446353 systemd[1]: var-lib-kubelet-pods-983870dd\x2dd757\x2d45a0\x2d8256\x2d46362244d803-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhdbvk.mount: Deactivated successfully. Mar 7 00:56:10.446585 systemd[1]: var-lib-kubelet-pods-983870dd\x2dd757\x2d45a0\x2d8256\x2d46362244d803-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 7 00:56:10.609399 (udev-worker)[5046]: Network interface NamePolicy= disabled on kernel command line. Mar 7 00:56:10.635731 systemd-networkd[1695]: califafedb475a6: Link UP Mar 7 00:56:10.638836 systemd-networkd[1695]: califafedb475a6: Gained carrier Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:09.715 [ERROR][4901] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:09.785 [INFO][4901] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0 coredns-674b8bbfcf- kube-system 78cfc558-a091-4954-aac1-f01bb0fadc54 979 0 2026-03-07 00:55:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-232 coredns-674b8bbfcf-z42qv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califafedb475a6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-z42qv" WorkloadEndpoint="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-" Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:09.785 [INFO][4901] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-z42qv" WorkloadEndpoint="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:10.137 [INFO][4943] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" HandleID="k8s-pod-network.ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:10.201 [INFO][4943] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" HandleID="k8s-pod-network.ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400037dcb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-232", "pod":"coredns-674b8bbfcf-z42qv", "timestamp":"2026-03-07 00:56:10.137824734 +0000 UTC"}, Hostname:"ip-172-31-21-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002ae420)} Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:10.202 [INFO][4943] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:10.202 [INFO][4943] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:10.203 [INFO][4943] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-232' Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:10.221 [INFO][4943] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" host="ip-172-31-21-232" Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:10.257 [INFO][4943] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-21-232" Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:10.284 [INFO][4943] ipam/ipam.go 526: Trying affinity for 192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:10.290 [INFO][4943] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:10.301 [INFO][4943] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:10.303 [INFO][4943] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" host="ip-172-31-21-232" Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:10.316 [INFO][4943] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2 Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:10.331 [INFO][4943] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" host="ip-172-31-21-232" Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 
00:56:10.422 [INFO][4943] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.1/26] block=192.168.75.0/26 handle="k8s-pod-network.ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" host="ip-172-31-21-232" Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:10.422 [INFO][4943] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.1/26] handle="k8s-pod-network.ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" host="ip-172-31-21-232" Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:10.422 [INFO][4943] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:10.930411 containerd[2132]: 2026-03-07 00:56:10.422 [INFO][4943] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.1/26] IPv6=[] ContainerID="ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" HandleID="k8s-pod-network.ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" Mar 7 00:56:10.941300 containerd[2132]: 2026-03-07 00:56:10.495 [INFO][4901] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-z42qv" WorkloadEndpoint="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"78cfc558-a091-4954-aac1-f01bb0fadc54", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"", Pod:"coredns-674b8bbfcf-z42qv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califafedb475a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:10.941300 containerd[2132]: 2026-03-07 00:56:10.495 [INFO][4901] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.1/32] ContainerID="ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-z42qv" WorkloadEndpoint="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" Mar 7 00:56:10.941300 containerd[2132]: 2026-03-07 00:56:10.495 [INFO][4901] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califafedb475a6 ContainerID="ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-z42qv" WorkloadEndpoint="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" Mar 7 00:56:10.941300 containerd[2132]: 2026-03-07 00:56:10.710 [INFO][4901] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-z42qv" WorkloadEndpoint="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" Mar 7 00:56:10.941300 containerd[2132]: 2026-03-07 00:56:10.765 [INFO][4901] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-z42qv" WorkloadEndpoint="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"78cfc558-a091-4954-aac1-f01bb0fadc54", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2", Pod:"coredns-674b8bbfcf-z42qv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califafedb475a6", MAC:"b2:62:7e:5e:24:a3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:10.941300 containerd[2132]: 2026-03-07 00:56:10.885 [INFO][4901] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2" Namespace="kube-system" Pod="coredns-674b8bbfcf-z42qv" WorkloadEndpoint="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" Mar 7 00:56:11.139781 sshd[4890]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:11.162561 systemd[1]: sshd@7-172.31.21.232:22-20.161.92.111:36494.service: Deactivated successfully. 
Mar 7 00:56:11.165863 kubelet[3603]: I0307 00:56:11.163412 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/efc204ae-0a76-49f1-a8ed-4de31999589d-nginx-config\") pod \"whisker-7f85b996cc-22brp\" (UID: \"efc204ae-0a76-49f1-a8ed-4de31999589d\") " pod="calico-system/whisker-7f85b996cc-22brp" Mar 7 00:56:11.165863 kubelet[3603]: I0307 00:56:11.163610 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc204ae-0a76-49f1-a8ed-4de31999589d-whisker-ca-bundle\") pod \"whisker-7f85b996cc-22brp\" (UID: \"efc204ae-0a76-49f1-a8ed-4de31999589d\") " pod="calico-system/whisker-7f85b996cc-22brp" Mar 7 00:56:11.165863 kubelet[3603]: I0307 00:56:11.163668 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/efc204ae-0a76-49f1-a8ed-4de31999589d-whisker-backend-key-pair\") pod \"whisker-7f85b996cc-22brp\" (UID: \"efc204ae-0a76-49f1-a8ed-4de31999589d\") " pod="calico-system/whisker-7f85b996cc-22brp" Mar 7 00:56:11.165863 kubelet[3603]: I0307 00:56:11.163712 3603 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cm2q\" (UniqueName: \"kubernetes.io/projected/efc204ae-0a76-49f1-a8ed-4de31999589d-kube-api-access-7cm2q\") pod \"whisker-7f85b996cc-22brp\" (UID: \"efc204ae-0a76-49f1-a8ed-4de31999589d\") " pod="calico-system/whisker-7f85b996cc-22brp" Mar 7 00:56:11.183038 systemd[1]: session-8.scope: Deactivated successfully. Mar 7 00:56:11.190164 systemd-logind[2105]: Session 8 logged out. Waiting for processes to exit. Mar 7 00:56:11.205804 systemd-logind[2105]: Removed session 8. 
Mar 7 00:56:11.230967 containerd[2132]: time="2026-03-07T00:56:11.220580124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:56:11.230967 containerd[2132]: time="2026-03-07T00:56:11.220738039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:56:11.230967 containerd[2132]: time="2026-03-07T00:56:11.220769471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:11.230967 containerd[2132]: time="2026-03-07T00:56:11.221040434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:11.396792 systemd-networkd[1695]: calief7407eacb5: Link UP Mar 7 00:56:11.398661 systemd-networkd[1695]: calief7407eacb5: Gained carrier Mar 7 00:56:11.402686 (udev-worker)[5045]: Network interface NamePolicy= disabled on kernel command line. 
Mar 7 00:56:11.434855 containerd[2132]: time="2026-03-07T00:56:11.432253410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f85b996cc-22brp,Uid:efc204ae-0a76-49f1-a8ed-4de31999589d,Namespace:calico-system,Attempt:0,}" Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:10.000 [ERROR][4918] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:10.137 [INFO][4918] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0 calico-apiserver-67454779cb- calico-system 055c9d7f-4150-48cb-a2a9-4df82e634570 999 0 2026-03-07 00:55:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67454779cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-232 calico-apiserver-67454779cb-jkkb4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calief7407eacb5 [] [] }} ContainerID="6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" Namespace="calico-system" Pod="calico-apiserver-67454779cb-jkkb4" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-" Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:10.139 [INFO][4918] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" Namespace="calico-system" Pod="calico-apiserver-67454779cb-jkkb4" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:10.295 [INFO][5010] ipam/ipam_plugin.go 235: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" HandleID="k8s-pod-network.6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:10.403 [INFO][5010] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" HandleID="k8s-pod-network.6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004b0f80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-232", "pod":"calico-apiserver-67454779cb-jkkb4", "timestamp":"2026-03-07 00:56:10.295851789 +0000 UTC"}, Hostname:"ip-172-31-21-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000184c60)} Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:10.405 [INFO][5010] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:10.423 [INFO][5010] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:10.424 [INFO][5010] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-232' Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:10.484 [INFO][5010] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" host="ip-172-31-21-232" Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:10.607 [INFO][5010] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-21-232" Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:10.846 [INFO][5010] ipam/ipam.go 526: Trying affinity for 192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:10.918 [INFO][5010] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:11.063 [INFO][5010] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:11.066 [INFO][5010] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" host="ip-172-31-21-232" Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:11.125 [INFO][5010] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:11.216 [INFO][5010] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" host="ip-172-31-21-232" Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:11.252 [INFO][5010] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.2/26] block=192.168.75.0/26 
handle="k8s-pod-network.6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" host="ip-172-31-21-232" Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:11.253 [INFO][5010] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.2/26] handle="k8s-pod-network.6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" host="ip-172-31-21-232" Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:11.260 [INFO][5010] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:11.571415 containerd[2132]: 2026-03-07 00:56:11.261 [INFO][5010] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.2/26] IPv6=[] ContainerID="6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" HandleID="k8s-pod-network.6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" Mar 7 00:56:11.576639 containerd[2132]: 2026-03-07 00:56:11.366 [INFO][4918] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" Namespace="calico-system" Pod="calico-apiserver-67454779cb-jkkb4" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0", GenerateName:"calico-apiserver-67454779cb-", Namespace:"calico-system", SelfLink:"", UID:"055c9d7f-4150-48cb-a2a9-4df82e634570", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67454779cb", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"", Pod:"calico-apiserver-67454779cb-jkkb4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calief7407eacb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:11.576639 containerd[2132]: 2026-03-07 00:56:11.367 [INFO][4918] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.2/32] ContainerID="6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" Namespace="calico-system" Pod="calico-apiserver-67454779cb-jkkb4" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" Mar 7 00:56:11.576639 containerd[2132]: 2026-03-07 00:56:11.367 [INFO][4918] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief7407eacb5 ContainerID="6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" Namespace="calico-system" Pod="calico-apiserver-67454779cb-jkkb4" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" Mar 7 00:56:11.576639 containerd[2132]: 2026-03-07 00:56:11.400 [INFO][4918] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" Namespace="calico-system" Pod="calico-apiserver-67454779cb-jkkb4" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" Mar 7 00:56:11.576639 containerd[2132]: 2026-03-07 00:56:11.428 [INFO][4918] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" Namespace="calico-system" Pod="calico-apiserver-67454779cb-jkkb4" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0", GenerateName:"calico-apiserver-67454779cb-", Namespace:"calico-system", SelfLink:"", UID:"055c9d7f-4150-48cb-a2a9-4df82e634570", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67454779cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d", Pod:"calico-apiserver-67454779cb-jkkb4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calief7407eacb5", MAC:"06:22:3b:b0:ab:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:11.576639 containerd[2132]: 2026-03-07 00:56:11.491 [INFO][4918] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d" Namespace="calico-system" Pod="calico-apiserver-67454779cb-jkkb4" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" Mar 7 00:56:11.812350 systemd-networkd[1695]: cali579a5141479: Link UP Mar 7 00:56:11.812868 systemd-networkd[1695]: cali579a5141479: Gained carrier Mar 7 00:56:11.918136 kubelet[3603]: I0307 00:56:11.917971 3603 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="983870dd-d757-45a0-8256-46362244d803" path="/var/lib/kubelet/pods/983870dd-d757-45a0-8256-46362244d803/volumes" Mar 7 00:56:11.923206 containerd[2132]: time="2026-03-07T00:56:11.922760416Z" level=info msg="StopPodSandbox for \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\"" Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:10.180 [ERROR][4957] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:10.301 [INFO][4957] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0 coredns-674b8bbfcf- kube-system 42e4545a-e486-4f54-bd6f-2806121371ca 1003 0 2026-03-07 00:55:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-232 coredns-674b8bbfcf-sndg8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali579a5141479 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" Namespace="kube-system" Pod="coredns-674b8bbfcf-sndg8" 
WorkloadEndpoint="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-" Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:10.303 [INFO][4957] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" Namespace="kube-system" Pod="coredns-674b8bbfcf-sndg8" WorkloadEndpoint="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0" Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:10.928 [INFO][5023] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" HandleID="k8s-pod-network.fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0" Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:11.059 [INFO][5023] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" HandleID="k8s-pod-network.fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400064e060), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-232", "pod":"coredns-674b8bbfcf-sndg8", "timestamp":"2026-03-07 00:56:10.928418518 +0000 UTC"}, Hostname:"ip-172-31-21-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002f2000)} Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:11.059 [INFO][5023] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:11.260 [INFO][5023] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:11.261 [INFO][5023] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-232' Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:11.276 [INFO][5023] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" host="ip-172-31-21-232" Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:11.334 [INFO][5023] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-21-232" Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:11.440 [INFO][5023] ipam/ipam.go 526: Trying affinity for 192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:11.453 [INFO][5023] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:11.490 [INFO][5023] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:11.494 [INFO][5023] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" host="ip-172-31-21-232" Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:11.520 [INFO][5023] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603 Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:11.578 [INFO][5023] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" host="ip-172-31-21-232" Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:11.643 [INFO][5023] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.3/26] block=192.168.75.0/26 
handle="k8s-pod-network.fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" host="ip-172-31-21-232" Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:11.643 [INFO][5023] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.3/26] handle="k8s-pod-network.fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" host="ip-172-31-21-232" Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:11.643 [INFO][5023] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:11.926502 containerd[2132]: 2026-03-07 00:56:11.643 [INFO][5023] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.3/26] IPv6=[] ContainerID="fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" HandleID="k8s-pod-network.fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0" Mar 7 00:56:11.929921 containerd[2132]: 2026-03-07 00:56:11.733 [INFO][4957] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" Namespace="kube-system" Pod="coredns-674b8bbfcf-sndg8" WorkloadEndpoint="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"42e4545a-e486-4f54-bd6f-2806121371ca", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"", Pod:"coredns-674b8bbfcf-sndg8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali579a5141479", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:11.929921 containerd[2132]: 2026-03-07 00:56:11.733 [INFO][4957] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.3/32] ContainerID="fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" Namespace="kube-system" Pod="coredns-674b8bbfcf-sndg8" WorkloadEndpoint="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0" Mar 7 00:56:11.929921 containerd[2132]: 2026-03-07 00:56:11.733 [INFO][4957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali579a5141479 ContainerID="fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" Namespace="kube-system" Pod="coredns-674b8bbfcf-sndg8" WorkloadEndpoint="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0" Mar 7 00:56:11.929921 containerd[2132]: 2026-03-07 00:56:11.809 [INFO][4957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-sndg8" WorkloadEndpoint="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0" Mar 7 00:56:11.929921 containerd[2132]: 2026-03-07 00:56:11.864 [INFO][4957] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" Namespace="kube-system" Pod="coredns-674b8bbfcf-sndg8" WorkloadEndpoint="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"42e4545a-e486-4f54-bd6f-2806121371ca", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603", Pod:"coredns-674b8bbfcf-sndg8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali579a5141479", MAC:"52:43:a2:5a:fd:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:11.929921 containerd[2132]: 2026-03-07 00:56:11.911 [INFO][4957] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603" Namespace="kube-system" Pod="coredns-674b8bbfcf-sndg8" WorkloadEndpoint="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0" Mar 7 00:56:12.100217 systemd-networkd[1695]: califafedb475a6: Gained IPv6LL Mar 7 00:56:12.249903 containerd[2132]: time="2026-03-07T00:56:12.249613011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:56:12.260403 containerd[2132]: time="2026-03-07T00:56:12.252268402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:56:12.260403 containerd[2132]: time="2026-03-07T00:56:12.252329140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:12.260403 containerd[2132]: time="2026-03-07T00:56:12.252560364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:12.489513 containerd[2132]: time="2026-03-07T00:56:12.489357237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z42qv,Uid:78cfc558-a091-4954-aac1-f01bb0fadc54,Namespace:kube-system,Attempt:1,} returns sandbox id \"ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2\"" Mar 7 00:56:12.524404 containerd[2132]: time="2026-03-07T00:56:12.524344846Z" level=info msg="CreateContainer within sandbox \"ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 00:56:12.596403 containerd[2132]: time="2026-03-07T00:56:12.551171713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:56:12.597634 containerd[2132]: time="2026-03-07T00:56:12.596742671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:56:12.605931 containerd[2132]: time="2026-03-07T00:56:12.604047518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:12.605931 containerd[2132]: time="2026-03-07T00:56:12.604269773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:12.707254 systemd-networkd[1695]: cali2b12ce616f3: Link UP Mar 7 00:56:12.727803 systemd-networkd[1695]: cali2b12ce616f3: Gained carrier Mar 7 00:56:12.854974 containerd[2132]: time="2026-03-07T00:56:12.854670873Z" level=info msg="CreateContainer within sandbox \"ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e2ffd95cc909dd9d36085bdb37a43f2308a61fc65d274a30bfbfdc8cceef75d\"" Mar 7 00:56:12.874904 containerd[2132]: time="2026-03-07T00:56:12.869465939Z" level=info msg="StartContainer for \"1e2ffd95cc909dd9d36085bdb37a43f2308a61fc65d274a30bfbfdc8cceef75d\"" Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:10.417 [ERROR][4995] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:10.980 [INFO][4995] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0 calico-kube-controllers-578b9ccf58- calico-system 6210dbc0-cd47-4e52-8ece-fe359619300c 995 0 2026-03-07 00:55:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:578b9ccf58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-21-232 calico-kube-controllers-578b9ccf58-j8drf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2b12ce616f3 [] [] }} ContainerID="05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" Namespace="calico-system" Pod="calico-kube-controllers-578b9ccf58-j8drf" 
WorkloadEndpoint="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-" Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:10.984 [INFO][4995] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" Namespace="calico-system" Pod="calico-kube-controllers-578b9ccf58-j8drf" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:12.090 [INFO][5093] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" HandleID="k8s-pod-network.05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" Workload="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:12.240 [INFO][5093] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" HandleID="k8s-pod-network.05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" Workload="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400061a7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-232", "pod":"calico-kube-controllers-578b9ccf58-j8drf", "timestamp":"2026-03-07 00:56:12.090005582 +0000 UTC"}, Hostname:"ip-172-31-21-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001c9600)} Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:12.244 [INFO][5093] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:12.246 [INFO][5093] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:12.246 [INFO][5093] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-232' Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:12.278 [INFO][5093] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" host="ip-172-31-21-232" Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:12.316 [INFO][5093] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-21-232" Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:12.386 [INFO][5093] ipam/ipam.go 526: Trying affinity for 192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:12.405 [INFO][5093] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:12.434 [INFO][5093] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:12.434 [INFO][5093] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" host="ip-172-31-21-232" Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:12.447 [INFO][5093] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:12.469 [INFO][5093] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" host="ip-172-31-21-232" Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 
00:56:12.512 [INFO][5093] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.4/26] block=192.168.75.0/26 handle="k8s-pod-network.05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" host="ip-172-31-21-232" Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:12.529 [INFO][5093] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.4/26] handle="k8s-pod-network.05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" host="ip-172-31-21-232" Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:12.532 [INFO][5093] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:12.944232 containerd[2132]: 2026-03-07 00:56:12.536 [INFO][5093] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.4/26] IPv6=[] ContainerID="05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" HandleID="k8s-pod-network.05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" Workload="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" Mar 7 00:56:12.948818 containerd[2132]: 2026-03-07 00:56:12.593 [INFO][4995] cni-plugin/k8s.go 418: Populated endpoint ContainerID="05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" Namespace="calico-system" Pod="calico-kube-controllers-578b9ccf58-j8drf" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0", GenerateName:"calico-kube-controllers-578b9ccf58-", Namespace:"calico-system", SelfLink:"", UID:"6210dbc0-cd47-4e52-8ece-fe359619300c", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"578b9ccf58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"", Pod:"calico-kube-controllers-578b9ccf58-j8drf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b12ce616f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:12.948818 containerd[2132]: 2026-03-07 00:56:12.603 [INFO][4995] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.4/32] ContainerID="05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" Namespace="calico-system" Pod="calico-kube-controllers-578b9ccf58-j8drf" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" Mar 7 00:56:12.948818 containerd[2132]: 2026-03-07 00:56:12.603 [INFO][4995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b12ce616f3 ContainerID="05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" Namespace="calico-system" Pod="calico-kube-controllers-578b9ccf58-j8drf" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" Mar 7 00:56:12.948818 containerd[2132]: 2026-03-07 00:56:12.774 [INFO][4995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" Namespace="calico-system" Pod="calico-kube-controllers-578b9ccf58-j8drf" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" Mar 7 00:56:12.948818 containerd[2132]: 2026-03-07 00:56:12.821 [INFO][4995] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" Namespace="calico-system" Pod="calico-kube-controllers-578b9ccf58-j8drf" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0", GenerateName:"calico-kube-controllers-578b9ccf58-", Namespace:"calico-system", SelfLink:"", UID:"6210dbc0-cd47-4e52-8ece-fe359619300c", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"578b9ccf58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe", Pod:"calico-kube-controllers-578b9ccf58-j8drf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b12ce616f3", MAC:"9a:1f:e6:56:46:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:12.948818 containerd[2132]: 2026-03-07 00:56:12.880 [INFO][4995] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe" Namespace="calico-system" Pod="calico-kube-controllers-578b9ccf58-j8drf" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" Mar 7 00:56:13.006060 systemd-networkd[1695]: cali55d577c6ce6: Link UP Mar 7 00:56:13.010503 systemd-networkd[1695]: cali55d577c6ce6: Gained carrier Mar 7 00:56:13.061104 systemd-networkd[1695]: calief7407eacb5: Gained IPv6LL Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:11.085 [ERROR][5037] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:11.347 [INFO][5037] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--232-k8s-csi--node--driver--tdrbz-eth0 csi-node-driver- calico-system 5241805d-3644-4de6-80b4-779148c6e9c9 958 0 2026-03-07 00:55:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-21-232 csi-node-driver-tdrbz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali55d577c6ce6 [] [] }} 
ContainerID="20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" Namespace="calico-system" Pod="csi-node-driver-tdrbz" WorkloadEndpoint="ip--172--31--21--232-k8s-csi--node--driver--tdrbz-" Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:11.347 [INFO][5037] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" Namespace="calico-system" Pod="csi-node-driver-tdrbz" WorkloadEndpoint="ip--172--31--21--232-k8s-csi--node--driver--tdrbz-eth0" Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:12.468 [INFO][5147] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" HandleID="k8s-pod-network.20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" Workload="ip--172--31--21--232-k8s-csi--node--driver--tdrbz-eth0" Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:12.559 [INFO][5147] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" HandleID="k8s-pod-network.20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" Workload="ip--172--31--21--232-k8s-csi--node--driver--tdrbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000122240), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-232", "pod":"csi-node-driver-tdrbz", "timestamp":"2026-03-07 00:56:12.468432736 +0000 UTC"}, Hostname:"ip-172-31-21-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001842c0)} Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:12.559 [INFO][5147] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:12.559 [INFO][5147] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:12.559 [INFO][5147] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-232' Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:12.578 [INFO][5147] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" host="ip-172-31-21-232" Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:12.612 [INFO][5147] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-21-232" Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:12.739 [INFO][5147] ipam/ipam.go 526: Trying affinity for 192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:12.796 [INFO][5147] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:12.811 [INFO][5147] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:12.817 [INFO][5147] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" host="ip-172-31-21-232" Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:12.824 [INFO][5147] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7 Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:12.854 [INFO][5147] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" host="ip-172-31-21-232" Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 
00:56:12.879 [INFO][5147] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.5/26] block=192.168.75.0/26 handle="k8s-pod-network.20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" host="ip-172-31-21-232" Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:12.879 [INFO][5147] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.5/26] handle="k8s-pod-network.20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" host="ip-172-31-21-232" Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:12.879 [INFO][5147] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:13.090263 containerd[2132]: 2026-03-07 00:56:12.879 [INFO][5147] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.5/26] IPv6=[] ContainerID="20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" HandleID="k8s-pod-network.20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" Workload="ip--172--31--21--232-k8s-csi--node--driver--tdrbz-eth0" Mar 7 00:56:13.092846 containerd[2132]: 2026-03-07 00:56:12.950 [INFO][5037] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" Namespace="calico-system" Pod="csi-node-driver-tdrbz" WorkloadEndpoint="ip--172--31--21--232-k8s-csi--node--driver--tdrbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-csi--node--driver--tdrbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5241805d-3644-4de6-80b4-779148c6e9c9", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"", Pod:"csi-node-driver-tdrbz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali55d577c6ce6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:13.092846 containerd[2132]: 2026-03-07 00:56:12.950 [INFO][5037] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.5/32] ContainerID="20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" Namespace="calico-system" Pod="csi-node-driver-tdrbz" WorkloadEndpoint="ip--172--31--21--232-k8s-csi--node--driver--tdrbz-eth0" Mar 7 00:56:13.092846 containerd[2132]: 2026-03-07 00:56:12.955 [INFO][5037] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55d577c6ce6 ContainerID="20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" Namespace="calico-system" Pod="csi-node-driver-tdrbz" WorkloadEndpoint="ip--172--31--21--232-k8s-csi--node--driver--tdrbz-eth0" Mar 7 00:56:13.092846 containerd[2132]: 2026-03-07 00:56:13.022 [INFO][5037] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" Namespace="calico-system" Pod="csi-node-driver-tdrbz" WorkloadEndpoint="ip--172--31--21--232-k8s-csi--node--driver--tdrbz-eth0" Mar 7 00:56:13.092846 containerd[2132]: 2026-03-07 00:56:13.026 
[INFO][5037] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" Namespace="calico-system" Pod="csi-node-driver-tdrbz" WorkloadEndpoint="ip--172--31--21--232-k8s-csi--node--driver--tdrbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-csi--node--driver--tdrbz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5241805d-3644-4de6-80b4-779148c6e9c9", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7", Pod:"csi-node-driver-tdrbz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali55d577c6ce6", MAC:"46:de:9d:7c:3c:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:13.092846 containerd[2132]: 2026-03-07 00:56:13.058 [INFO][5037] cni-plugin/k8s.go 532: Wrote updated 
endpoint to datastore ContainerID="20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7" Namespace="calico-system" Pod="csi-node-driver-tdrbz" WorkloadEndpoint="ip--172--31--21--232-k8s-csi--node--driver--tdrbz-eth0" Mar 7 00:56:13.130037 containerd[2132]: time="2026-03-07T00:56:13.129800203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-sndg8,Uid:42e4545a-e486-4f54-bd6f-2806121371ca,Namespace:kube-system,Attempt:1,} returns sandbox id \"fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603\"" Mar 7 00:56:13.146421 containerd[2132]: time="2026-03-07T00:56:13.145844373Z" level=info msg="CreateContainer within sandbox \"fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 7 00:56:13.148085 containerd[2132]: time="2026-03-07T00:56:13.147797773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:56:13.159891 containerd[2132]: time="2026-03-07T00:56:13.149660708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:56:13.159891 containerd[2132]: time="2026-03-07T00:56:13.159353800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:13.159891 containerd[2132]: time="2026-03-07T00:56:13.159577004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:13.227053 systemd-networkd[1695]: calibf3ce04cd78: Link UP Mar 7 00:56:13.230590 systemd-networkd[1695]: calibf3ce04cd78: Gained carrier Mar 7 00:56:13.395044 kubelet[3603]: E0307 00:56:13.394447 3603 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod055c9d7f-4150-48cb-a2a9-4df82e634570/6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d\": RecentStats: unable to find data in memory cache]" Mar 7 00:56:13.424002 containerd[2132]: time="2026-03-07T00:56:13.420921873Z" level=info msg="CreateContainer within sandbox \"fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"01a955452e6e4be7b58ac7da2cca20ae480021f9c18683b712724a61e482fc35\"" Mar 7 00:56:13.430003 containerd[2132]: time="2026-03-07T00:56:13.429088798Z" level=info msg="StartContainer for \"01a955452e6e4be7b58ac7da2cca20ae480021f9c18683b712724a61e482fc35\"" Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:10.578 [ERROR][4972] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:11.004 [INFO][4972] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0 goldmane-5b85766d88- calico-system 45a627cb-a427-4b7a-bf60-da0e9b3da1b5 996 0 2026-03-07 00:55:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-21-232 goldmane-5b85766d88-svm6z eth0 goldmane [] [] [kns.calico-system 
ksa.calico-system.goldmane] calibf3ce04cd78 [] [] }} ContainerID="3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" Namespace="calico-system" Pod="goldmane-5b85766d88-svm6z" WorkloadEndpoint="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-" Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:11.009 [INFO][4972] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" Namespace="calico-system" Pod="goldmane-5b85766d88-svm6z" WorkloadEndpoint="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:12.465 [INFO][5092] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" HandleID="k8s-pod-network.3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" Workload="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:12.565 [INFO][5092] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" HandleID="k8s-pod-network.3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" Workload="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ce710), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-232", "pod":"goldmane-5b85766d88-svm6z", "timestamp":"2026-03-07 00:56:12.465574791 +0000 UTC"}, Hostname:"ip-172-31-21-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003b6000)} Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:12.565 [INFO][5092] ipam/ipam_plugin.go 438: About to acquire 
host-wide IPAM lock. Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:12.879 [INFO][5092] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:12.881 [INFO][5092] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-232' Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:12.896 [INFO][5092] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" host="ip-172-31-21-232" Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:12.926 [INFO][5092] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-21-232" Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:12.989 [INFO][5092] ipam/ipam.go 526: Trying affinity for 192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:13.000 [INFO][5092] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:13.019 [INFO][5092] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:13.019 [INFO][5092] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" host="ip-172-31-21-232" Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:13.042 [INFO][5092] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:13.075 [INFO][5092] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" host="ip-172-31-21-232" Mar 7 00:56:13.460427 
containerd[2132]: 2026-03-07 00:56:13.102 [INFO][5092] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.6/26] block=192.168.75.0/26 handle="k8s-pod-network.3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" host="ip-172-31-21-232" Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:13.102 [INFO][5092] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.6/26] handle="k8s-pod-network.3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" host="ip-172-31-21-232" Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:13.103 [INFO][5092] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:13.460427 containerd[2132]: 2026-03-07 00:56:13.103 [INFO][5092] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.6/26] IPv6=[] ContainerID="3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" HandleID="k8s-pod-network.3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" Workload="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" Mar 7 00:56:13.464253 containerd[2132]: 2026-03-07 00:56:13.156 [INFO][4972] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" Namespace="calico-system" Pod="goldmane-5b85766d88-svm6z" WorkloadEndpoint="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"45a627cb-a427-4b7a-bf60-da0e9b3da1b5", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"", Pod:"goldmane-5b85766d88-svm6z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibf3ce04cd78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:13.464253 containerd[2132]: 2026-03-07 00:56:13.159 [INFO][4972] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.6/32] ContainerID="3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" Namespace="calico-system" Pod="goldmane-5b85766d88-svm6z" WorkloadEndpoint="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" Mar 7 00:56:13.464253 containerd[2132]: 2026-03-07 00:56:13.163 [INFO][4972] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf3ce04cd78 ContainerID="3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" Namespace="calico-system" Pod="goldmane-5b85766d88-svm6z" WorkloadEndpoint="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" Mar 7 00:56:13.464253 containerd[2132]: 2026-03-07 00:56:13.234 [INFO][4972] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" Namespace="calico-system" Pod="goldmane-5b85766d88-svm6z" WorkloadEndpoint="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" Mar 7 00:56:13.464253 containerd[2132]: 2026-03-07 00:56:13.242 [INFO][4972] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" Namespace="calico-system" Pod="goldmane-5b85766d88-svm6z" WorkloadEndpoint="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"45a627cb-a427-4b7a-bf60-da0e9b3da1b5", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b", Pod:"goldmane-5b85766d88-svm6z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibf3ce04cd78", MAC:"e2:15:e3:db:d6:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:13.464253 containerd[2132]: 2026-03-07 00:56:13.308 [INFO][4972] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b" Namespace="calico-system" Pod="goldmane-5b85766d88-svm6z" WorkloadEndpoint="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" Mar 7 00:56:13.602491 systemd-networkd[1695]: cali579a5141479: Gained IPv6LL Mar 7 00:56:13.652007 kernel: calico-node[5157]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 7 00:56:13.650330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1158415351.mount: Deactivated successfully. Mar 7 00:56:13.774812 systemd-networkd[1695]: calibdcf285ae42: Link UP Mar 7 00:56:13.792532 systemd-networkd[1695]: calibdcf285ae42: Gained carrier Mar 7 00:56:13.926536 containerd[2132]: time="2026-03-07T00:56:13.926101172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67454779cb-jkkb4,Uid:055c9d7f-4150-48cb-a2a9-4df82e634570,Namespace:calico-system,Attempt:1,} returns sandbox id \"6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d\"" Mar 7 00:56:13.946414 containerd[2132]: time="2026-03-07T00:56:13.942534768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:56:13.946414 containerd[2132]: time="2026-03-07T00:56:13.942694688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:56:13.946414 containerd[2132]: time="2026-03-07T00:56:13.942744441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:13.946414 containerd[2132]: time="2026-03-07T00:56:13.945535104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:10.351 [ERROR][4969] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:10.880 [INFO][4969] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0 calico-apiserver-67454779cb- calico-system 466a569a-796d-4554-bfdc-84553d49d7a8 988 0 2026-03-07 00:55:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67454779cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-232 calico-apiserver-67454779cb-lj22q eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calibdcf285ae42 [] [] }} ContainerID="5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" Namespace="calico-system" Pod="calico-apiserver-67454779cb-lj22q" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-" Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:10.880 [INFO][4969] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" Namespace="calico-system" Pod="calico-apiserver-67454779cb-lj22q" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0" Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:12.509 [INFO][5091] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" 
HandleID="k8s-pod-network.5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0" Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:12.604 [INFO][5091] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" HandleID="k8s-pod-network.5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001118c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-232", "pod":"calico-apiserver-67454779cb-lj22q", "timestamp":"2026-03-07 00:56:12.509538939 +0000 UTC"}, Hostname:"ip-172-31-21-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000f89a0)} Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:12.604 [INFO][5091] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:13.119 [INFO][5091] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:13.119 [INFO][5091] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-232' Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:13.132 [INFO][5091] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" host="ip-172-31-21-232" Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:13.186 [INFO][5091] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-21-232" Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:13.236 [INFO][5091] ipam/ipam.go 526: Trying affinity for 192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:13.252 [INFO][5091] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:13.304 [INFO][5091] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:13.307 [INFO][5091] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" host="ip-172-31-21-232" Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:13.374 [INFO][5091] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466 Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:13.409 [INFO][5091] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" host="ip-172-31-21-232" Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:13.444 [INFO][5091] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.7/26] block=192.168.75.0/26 
handle="k8s-pod-network.5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" host="ip-172-31-21-232" Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:13.444 [INFO][5091] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.7/26] handle="k8s-pod-network.5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" host="ip-172-31-21-232" Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:13.444 [INFO][5091] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:13.970228 containerd[2132]: 2026-03-07 00:56:13.444 [INFO][5091] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.7/26] IPv6=[] ContainerID="5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" HandleID="k8s-pod-network.5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0" Mar 7 00:56:13.971755 containerd[2132]: 2026-03-07 00:56:13.536 [INFO][4969] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" Namespace="calico-system" Pod="calico-apiserver-67454779cb-lj22q" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0", GenerateName:"calico-apiserver-67454779cb-", Namespace:"calico-system", SelfLink:"", UID:"466a569a-796d-4554-bfdc-84553d49d7a8", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67454779cb", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"", Pod:"calico-apiserver-67454779cb-lj22q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibdcf285ae42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:13.971755 containerd[2132]: 2026-03-07 00:56:13.536 [INFO][4969] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.7/32] ContainerID="5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" Namespace="calico-system" Pod="calico-apiserver-67454779cb-lj22q" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0" Mar 7 00:56:13.971755 containerd[2132]: 2026-03-07 00:56:13.536 [INFO][4969] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibdcf285ae42 ContainerID="5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" Namespace="calico-system" Pod="calico-apiserver-67454779cb-lj22q" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0" Mar 7 00:56:13.971755 containerd[2132]: 2026-03-07 00:56:13.814 [INFO][4969] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" Namespace="calico-system" Pod="calico-apiserver-67454779cb-lj22q" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0" Mar 7 00:56:13.971755 containerd[2132]: 2026-03-07 00:56:13.820 [INFO][4969] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" Namespace="calico-system" Pod="calico-apiserver-67454779cb-lj22q" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0", GenerateName:"calico-apiserver-67454779cb-", Namespace:"calico-system", SelfLink:"", UID:"466a569a-796d-4554-bfdc-84553d49d7a8", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67454779cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466", Pod:"calico-apiserver-67454779cb-lj22q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibdcf285ae42", MAC:"1e:65:19:1d:98:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:13.971755 containerd[2132]: 2026-03-07 00:56:13.897 [INFO][4969] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466" Namespace="calico-system" Pod="calico-apiserver-67454779cb-lj22q" WorkloadEndpoint="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0" Mar 7 00:56:14.021180 containerd[2132]: time="2026-03-07T00:56:14.018860573Z" level=info msg="StartContainer for \"1e2ffd95cc909dd9d36085bdb37a43f2308a61fc65d274a30bfbfdc8cceef75d\" returns successfully" Mar 7 00:56:14.082564 containerd[2132]: time="2026-03-07T00:56:14.081411005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 00:56:14.264495 systemd[1]: run-containerd-runc-k8s.io-20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7-runc.6njXqq.mount: Deactivated successfully. Mar 7 00:56:14.288078 containerd[2132]: time="2026-03-07T00:56:14.287125196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:56:14.292708 containerd[2132]: time="2026-03-07T00:56:14.288184545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:56:14.292708 containerd[2132]: time="2026-03-07T00:56:14.288650594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:14.292708 containerd[2132]: time="2026-03-07T00:56:14.289083291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:14.342265 systemd-networkd[1695]: cali2b12ce616f3: Gained IPv6LL Mar 7 00:56:14.412822 containerd[2132]: time="2026-03-07T00:56:14.412713215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-578b9ccf58-j8drf,Uid:6210dbc0-cd47-4e52-8ece-fe359619300c,Namespace:calico-system,Attempt:1,} returns sandbox id \"05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe\"" Mar 7 00:56:14.514483 containerd[2132]: time="2026-03-07T00:56:14.512625155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:56:14.516702 containerd[2132]: time="2026-03-07T00:56:14.514893449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:56:14.518513 containerd[2132]: time="2026-03-07T00:56:14.515302193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:14.540376 containerd[2132]: time="2026-03-07T00:56:14.536448265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:14.589256 containerd[2132]: 2026-03-07 00:56:12.968 [WARNING][5257] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" WorkloadEndpoint="ip--172--31--21--232-k8s-whisker--69447f8b7b--6p8bz-eth0" Mar 7 00:56:14.589256 containerd[2132]: 2026-03-07 00:56:12.973 [INFO][5257] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Mar 7 00:56:14.589256 containerd[2132]: 2026-03-07 00:56:12.973 [INFO][5257] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" iface="eth0" netns="" Mar 7 00:56:14.589256 containerd[2132]: 2026-03-07 00:56:12.973 [INFO][5257] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Mar 7 00:56:14.589256 containerd[2132]: 2026-03-07 00:56:12.973 [INFO][5257] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Mar 7 00:56:14.589256 containerd[2132]: 2026-03-07 00:56:14.308 [INFO][5378] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" HandleID="k8s-pod-network.63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Workload="ip--172--31--21--232-k8s-whisker--69447f8b7b--6p8bz-eth0" Mar 7 00:56:14.589256 containerd[2132]: 2026-03-07 00:56:14.331 [INFO][5378] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:14.589256 containerd[2132]: 2026-03-07 00:56:14.332 [INFO][5378] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:56:14.589256 containerd[2132]: 2026-03-07 00:56:14.397 [WARNING][5378] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" HandleID="k8s-pod-network.63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Workload="ip--172--31--21--232-k8s-whisker--69447f8b7b--6p8bz-eth0" Mar 7 00:56:14.589256 containerd[2132]: 2026-03-07 00:56:14.398 [INFO][5378] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" HandleID="k8s-pod-network.63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Workload="ip--172--31--21--232-k8s-whisker--69447f8b7b--6p8bz-eth0" Mar 7 00:56:14.589256 containerd[2132]: 2026-03-07 00:56:14.411 [INFO][5378] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:14.589256 containerd[2132]: 2026-03-07 00:56:14.469 [INFO][5257] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Mar 7 00:56:14.589256 containerd[2132]: time="2026-03-07T00:56:14.584180676Z" level=info msg="TearDown network for sandbox \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\" successfully" Mar 7 00:56:14.589256 containerd[2132]: time="2026-03-07T00:56:14.584218855Z" level=info msg="StopPodSandbox for \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\" returns successfully" Mar 7 00:56:14.614983 containerd[2132]: time="2026-03-07T00:56:14.614074187Z" level=info msg="StartContainer for \"01a955452e6e4be7b58ac7da2cca20ae480021f9c18683b712724a61e482fc35\" returns successfully" Mar 7 00:56:14.626983 containerd[2132]: time="2026-03-07T00:56:14.623325819Z" level=info msg="RemovePodSandbox for \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\"" Mar 7 00:56:14.685815 containerd[2132]: time="2026-03-07T00:56:14.685268399Z" level=info msg="Forcibly stopping sandbox \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\"" Mar 7 00:56:14.727624 systemd-networkd[1695]: calibf3ce04cd78: Gained IPv6LL Mar 7 00:56:14.796515 systemd-networkd[1695]: cali0449fb06b85: Link UP Mar 7 00:56:14.807268 systemd-networkd[1695]: cali0449fb06b85: Gained carrier Mar 7 00:56:14.821003 containerd[2132]: time="2026-03-07T00:56:14.819448320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tdrbz,Uid:5241805d-3644-4de6-80b4-779148c6e9c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7\"" Mar 7 00:56:14.864078 systemd-networkd[1695]: cali55d577c6ce6: Gained IPv6LL Mar 7 00:56:14.980238 systemd-networkd[1695]: calibdcf285ae42: Gained IPv6LL Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:13.016 [INFO][5174] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ip--172--31--21--232-k8s-whisker--7f85b996cc--22brp-eth0 whisker-7f85b996cc- calico-system efc204ae-0a76-49f1-a8ed-4de31999589d 1037 0 2026-03-07 00:56:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f85b996cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-21-232 whisker-7f85b996cc-22brp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0449fb06b85 [] [] }} ContainerID="60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" Namespace="calico-system" Pod="whisker-7f85b996cc-22brp" WorkloadEndpoint="ip--172--31--21--232-k8s-whisker--7f85b996cc--22brp-" Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:13.020 [INFO][5174] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" Namespace="calico-system" Pod="whisker-7f85b996cc-22brp" WorkloadEndpoint="ip--172--31--21--232-k8s-whisker--7f85b996cc--22brp-eth0" Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.187 [INFO][5393] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" HandleID="k8s-pod-network.60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" Workload="ip--172--31--21--232-k8s-whisker--7f85b996cc--22brp-eth0" Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.331 [INFO][5393] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" HandleID="k8s-pod-network.60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" Workload="ip--172--31--21--232-k8s-whisker--7f85b996cc--22brp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318300), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-232", 
"pod":"whisker-7f85b996cc-22brp", "timestamp":"2026-03-07 00:56:14.187510597 +0000 UTC"}, Hostname:"ip-172-31-21-232", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003d8000)} Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.355 [INFO][5393] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.414 [INFO][5393] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.414 [INFO][5393] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-232' Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.439 [INFO][5393] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" host="ip-172-31-21-232" Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.460 [INFO][5393] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-21-232" Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.502 [INFO][5393] ipam/ipam.go 526: Trying affinity for 192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.509 [INFO][5393] ipam/ipam.go 160: Attempting to load block cidr=192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.604 [INFO][5393] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ip-172-31-21-232" Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.604 [INFO][5393] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" host="ip-172-31-21-232" Mar 7 
00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.613 [INFO][5393] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.662 [INFO][5393] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" host="ip-172-31-21-232" Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.738 [INFO][5393] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.75.8/26] block=192.168.75.0/26 handle="k8s-pod-network.60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" host="ip-172-31-21-232" Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.738 [INFO][5393] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.75.8/26] handle="k8s-pod-network.60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" host="ip-172-31-21-232" Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.738 [INFO][5393] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 7 00:56:15.020191 containerd[2132]: 2026-03-07 00:56:14.738 [INFO][5393] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.75.8/26] IPv6=[] ContainerID="60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" HandleID="k8s-pod-network.60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" Workload="ip--172--31--21--232-k8s-whisker--7f85b996cc--22brp-eth0" Mar 7 00:56:15.022661 containerd[2132]: 2026-03-07 00:56:14.772 [INFO][5174] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" Namespace="calico-system" Pod="whisker-7f85b996cc-22brp" WorkloadEndpoint="ip--172--31--21--232-k8s-whisker--7f85b996cc--22brp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-whisker--7f85b996cc--22brp-eth0", GenerateName:"whisker-7f85b996cc-", Namespace:"calico-system", SelfLink:"", UID:"efc204ae-0a76-49f1-a8ed-4de31999589d", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f85b996cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"", Pod:"whisker-7f85b996cc-22brp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.75.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"cali0449fb06b85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:15.022661 containerd[2132]: 2026-03-07 00:56:14.772 [INFO][5174] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.75.8/32] ContainerID="60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" Namespace="calico-system" Pod="whisker-7f85b996cc-22brp" WorkloadEndpoint="ip--172--31--21--232-k8s-whisker--7f85b996cc--22brp-eth0" Mar 7 00:56:15.022661 containerd[2132]: 2026-03-07 00:56:14.773 [INFO][5174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0449fb06b85 ContainerID="60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" Namespace="calico-system" Pod="whisker-7f85b996cc-22brp" WorkloadEndpoint="ip--172--31--21--232-k8s-whisker--7f85b996cc--22brp-eth0" Mar 7 00:56:15.022661 containerd[2132]: 2026-03-07 00:56:14.860 [INFO][5174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" Namespace="calico-system" Pod="whisker-7f85b996cc-22brp" WorkloadEndpoint="ip--172--31--21--232-k8s-whisker--7f85b996cc--22brp-eth0" Mar 7 00:56:15.022661 containerd[2132]: 2026-03-07 00:56:14.886 [INFO][5174] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" Namespace="calico-system" Pod="whisker-7f85b996cc-22brp" WorkloadEndpoint="ip--172--31--21--232-k8s-whisker--7f85b996cc--22brp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-whisker--7f85b996cc--22brp-eth0", GenerateName:"whisker-7f85b996cc-", Namespace:"calico-system", SelfLink:"", UID:"efc204ae-0a76-49f1-a8ed-4de31999589d", ResourceVersion:"1037", Generation:0, 
CreationTimestamp:time.Date(2026, time.March, 7, 0, 56, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f85b996cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e", Pod:"whisker-7f85b996cc-22brp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.75.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0449fb06b85", MAC:"c6:1f:db:1d:1e:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:56:15.022661 containerd[2132]: 2026-03-07 00:56:14.956 [INFO][5174] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e" Namespace="calico-system" Pod="whisker-7f85b996cc-22brp" WorkloadEndpoint="ip--172--31--21--232-k8s-whisker--7f85b996cc--22brp-eth0" Mar 7 00:56:15.162762 kubelet[3603]: I0307 00:56:15.162598 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-sndg8" podStartSLOduration=60.162559885 podStartE2EDuration="1m0.162559885s" podCreationTimestamp="2026-03-07 00:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:56:15.157062073 +0000 UTC m=+63.599732129" watchObservedRunningTime="2026-03-07 
00:56:15.162559885 +0000 UTC m=+63.605229929" Mar 7 00:56:15.410007 containerd[2132]: time="2026-03-07T00:56:15.407035987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67454779cb-lj22q,Uid:466a569a-796d-4554-bfdc-84553d49d7a8,Namespace:calico-system,Attempt:1,} returns sandbox id \"5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466\"" Mar 7 00:56:15.427146 containerd[2132]: time="2026-03-07T00:56:15.424862100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-svm6z,Uid:45a627cb-a427-4b7a-bf60-da0e9b3da1b5,Namespace:calico-system,Attempt:1,} returns sandbox id \"3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b\"" Mar 7 00:56:15.468332 containerd[2132]: time="2026-03-07T00:56:15.466592291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 7 00:56:15.468332 containerd[2132]: time="2026-03-07T00:56:15.466721932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 7 00:56:15.468332 containerd[2132]: time="2026-03-07T00:56:15.466770136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:15.471563 containerd[2132]: time="2026-03-07T00:56:15.471336570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 7 00:56:15.493170 systemd-resolved[2027]: Under memory pressure, flushing caches. Mar 7 00:56:15.500432 systemd-journald[1614]: Under memory pressure, flushing caches. Mar 7 00:56:15.493274 systemd-resolved[2027]: Flushed all caches. 
Mar 7 00:56:15.586254 containerd[2132]: 2026-03-07 00:56:15.360 [WARNING][5685] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" WorkloadEndpoint="ip--172--31--21--232-k8s-whisker--69447f8b7b--6p8bz-eth0" Mar 7 00:56:15.586254 containerd[2132]: 2026-03-07 00:56:15.360 [INFO][5685] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Mar 7 00:56:15.586254 containerd[2132]: 2026-03-07 00:56:15.360 [INFO][5685] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" iface="eth0" netns="" Mar 7 00:56:15.586254 containerd[2132]: 2026-03-07 00:56:15.361 [INFO][5685] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Mar 7 00:56:15.586254 containerd[2132]: 2026-03-07 00:56:15.361 [INFO][5685] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Mar 7 00:56:15.586254 containerd[2132]: 2026-03-07 00:56:15.525 [INFO][5728] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" HandleID="k8s-pod-network.63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Workload="ip--172--31--21--232-k8s-whisker--69447f8b7b--6p8bz-eth0" Mar 7 00:56:15.586254 containerd[2132]: 2026-03-07 00:56:15.526 [INFO][5728] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:56:15.586254 containerd[2132]: 2026-03-07 00:56:15.526 [INFO][5728] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 7 00:56:15.586254 containerd[2132]: 2026-03-07 00:56:15.555 [WARNING][5728] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" HandleID="k8s-pod-network.63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Workload="ip--172--31--21--232-k8s-whisker--69447f8b7b--6p8bz-eth0" Mar 7 00:56:15.586254 containerd[2132]: 2026-03-07 00:56:15.555 [INFO][5728] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" HandleID="k8s-pod-network.63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Workload="ip--172--31--21--232-k8s-whisker--69447f8b7b--6p8bz-eth0" Mar 7 00:56:15.586254 containerd[2132]: 2026-03-07 00:56:15.567 [INFO][5728] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:56:15.586254 containerd[2132]: 2026-03-07 00:56:15.582 [INFO][5685] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a" Mar 7 00:56:15.587214 containerd[2132]: time="2026-03-07T00:56:15.586308260Z" level=info msg="TearDown network for sandbox \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\" successfully" Mar 7 00:56:15.604889 containerd[2132]: time="2026-03-07T00:56:15.602290875Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:56:15.604889 containerd[2132]: time="2026-03-07T00:56:15.603784242Z" level=info msg="RemovePodSandbox \"63373eaefbe8a70ea3a90d94f2262ed16e0762c87bf8310664e2719ec294071a\" returns successfully" Mar 7 00:56:15.736379 containerd[2132]: time="2026-03-07T00:56:15.735241766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f85b996cc-22brp,Uid:efc204ae-0a76-49f1-a8ed-4de31999589d,Namespace:calico-system,Attempt:0,} returns sandbox id \"60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e\"" Mar 7 00:56:15.792970 systemd-networkd[1695]: vxlan.calico: Link UP Mar 7 00:56:15.792990 systemd-networkd[1695]: vxlan.calico: Gained carrier Mar 7 00:56:16.176977 kubelet[3603]: I0307 00:56:16.172316 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-z42qv" podStartSLOduration=61.172287327 podStartE2EDuration="1m1.172287327s" podCreationTimestamp="2026-03-07 00:55:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-07 00:56:15.199674981 +0000 UTC m=+63.642345073" watchObservedRunningTime="2026-03-07 00:56:16.172287327 +0000 UTC m=+64.614957383" Mar 7 00:56:16.248537 systemd[1]: Started sshd@8-172.31.21.232:22-20.161.92.111:59080.service - OpenSSH per-connection server daemon (20.161.92.111:59080). Mar 7 00:56:16.517816 systemd-networkd[1695]: cali0449fb06b85: Gained IPv6LL Mar 7 00:56:16.830246 sshd[5805]: Accepted publickey for core from 20.161.92.111 port 59080 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:16.835030 sshd[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:16.851893 systemd-logind[2105]: New session 9 of user core. Mar 7 00:56:16.856829 systemd[1]: Started session-9.scope - Session 9 of User core. 
Mar 7 00:56:17.162269 systemd-networkd[1695]: vxlan.calico: Gained IPv6LL Mar 7 00:56:17.500080 sshd[5805]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:17.510095 systemd[1]: sshd@8-172.31.21.232:22-20.161.92.111:59080.service: Deactivated successfully. Mar 7 00:56:17.514325 systemd-logind[2105]: Session 9 logged out. Waiting for processes to exit. Mar 7 00:56:17.523198 systemd[1]: session-9.scope: Deactivated successfully. Mar 7 00:56:17.532895 systemd-logind[2105]: Removed session 9. Mar 7 00:56:18.554407 containerd[2132]: time="2026-03-07T00:56:18.553705958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:18.556889 containerd[2132]: time="2026-03-07T00:56:18.556828538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 7 00:56:18.558829 containerd[2132]: time="2026-03-07T00:56:18.558760508Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:18.564707 containerd[2132]: time="2026-03-07T00:56:18.564635620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:18.566823 containerd[2132]: time="2026-03-07T00:56:18.566728530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 4.485223446s" Mar 7 00:56:18.566823 containerd[2132]: time="2026-03-07T00:56:18.566814613Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 7 00:56:18.570826 containerd[2132]: time="2026-03-07T00:56:18.570731066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 7 00:56:18.579366 containerd[2132]: time="2026-03-07T00:56:18.579100977Z" level=info msg="CreateContainer within sandbox \"6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 00:56:18.617142 containerd[2132]: time="2026-03-07T00:56:18.616411579Z" level=info msg="CreateContainer within sandbox \"6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e99b85465d2cd754a4d9fa9ff1484e3ebfcbe6f3b0c79c709df255aa43f8a7fb\"" Mar 7 00:56:18.623035 containerd[2132]: time="2026-03-07T00:56:18.622110780Z" level=info msg="StartContainer for \"e99b85465d2cd754a4d9fa9ff1484e3ebfcbe6f3b0c79c709df255aa43f8a7fb\"" Mar 7 00:56:18.626105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1228353793.mount: Deactivated successfully. 
Mar 7 00:56:18.815495 containerd[2132]: time="2026-03-07T00:56:18.815034619Z" level=info msg="StartContainer for \"e99b85465d2cd754a4d9fa9ff1484e3ebfcbe6f3b0c79c709df255aa43f8a7fb\" returns successfully" Mar 7 00:56:19.187129 kubelet[3603]: I0307 00:56:19.185639 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-67454779cb-jkkb4" podStartSLOduration=35.689769556 podStartE2EDuration="40.185580806s" podCreationTimestamp="2026-03-07 00:55:39 +0000 UTC" firstStartedPulling="2026-03-07 00:56:14.074354298 +0000 UTC m=+62.517024306" lastFinishedPulling="2026-03-07 00:56:18.570165464 +0000 UTC m=+67.012835556" observedRunningTime="2026-03-07 00:56:19.184548722 +0000 UTC m=+67.627218838" watchObservedRunningTime="2026-03-07 00:56:19.185580806 +0000 UTC m=+67.628250814" Mar 7 00:56:19.832260 ntpd[2093]: Listen normally on 6 vxlan.calico 192.168.75.0:123 Mar 7 00:56:19.833633 ntpd[2093]: 7 Mar 00:56:19 ntpd[2093]: Listen normally on 6 vxlan.calico 192.168.75.0:123 Mar 7 00:56:19.833633 ntpd[2093]: 7 Mar 00:56:19 ntpd[2093]: Listen normally on 7 califafedb475a6 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 7 00:56:19.833633 ntpd[2093]: 7 Mar 00:56:19 ntpd[2093]: Listen normally on 8 calief7407eacb5 [fe80::ecee:eeff:feee:eeee%5]:123 Mar 7 00:56:19.833633 ntpd[2093]: 7 Mar 00:56:19 ntpd[2093]: Listen normally on 9 cali579a5141479 [fe80::ecee:eeff:feee:eeee%6]:123 Mar 7 00:56:19.833633 ntpd[2093]: 7 Mar 00:56:19 ntpd[2093]: Listen normally on 10 cali2b12ce616f3 [fe80::ecee:eeff:feee:eeee%7]:123 Mar 7 00:56:19.833633 ntpd[2093]: 7 Mar 00:56:19 ntpd[2093]: Listen normally on 11 cali55d577c6ce6 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 7 00:56:19.833633 ntpd[2093]: 7 Mar 00:56:19 ntpd[2093]: Listen normally on 12 calibf3ce04cd78 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 7 00:56:19.833633 ntpd[2093]: 7 Mar 00:56:19 ntpd[2093]: Listen normally on 13 calibdcf285ae42 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 7 00:56:19.833633 ntpd[2093]: 7 Mar 
00:56:19 ntpd[2093]: Listen normally on 14 cali0449fb06b85 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 7 00:56:19.833633 ntpd[2093]: 7 Mar 00:56:19 ntpd[2093]: Listen normally on 15 vxlan.calico [fe80::64aa:26ff:fe25:1dab%12]:123 Mar 7 00:56:19.832411 ntpd[2093]: Listen normally on 7 califafedb475a6 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 7 00:56:19.832503 ntpd[2093]: Listen normally on 8 calief7407eacb5 [fe80::ecee:eeff:feee:eeee%5]:123 Mar 7 00:56:19.832582 ntpd[2093]: Listen normally on 9 cali579a5141479 [fe80::ecee:eeff:feee:eeee%6]:123 Mar 7 00:56:19.832654 ntpd[2093]: Listen normally on 10 cali2b12ce616f3 [fe80::ecee:eeff:feee:eeee%7]:123 Mar 7 00:56:19.832726 ntpd[2093]: Listen normally on 11 cali55d577c6ce6 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 7 00:56:19.832807 ntpd[2093]: Listen normally on 12 calibf3ce04cd78 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 7 00:56:19.832881 ntpd[2093]: Listen normally on 13 calibdcf285ae42 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 7 00:56:19.833011 ntpd[2093]: Listen normally on 14 cali0449fb06b85 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 7 00:56:19.833105 ntpd[2093]: Listen normally on 15 vxlan.calico [fe80::64aa:26ff:fe25:1dab%12]:123 Mar 7 00:56:20.165770 kubelet[3603]: I0307 00:56:20.164242 3603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:56:22.594800 systemd[1]: Started sshd@9-172.31.21.232:22-20.161.92.111:51654.service - OpenSSH per-connection server daemon (20.161.92.111:51654). Mar 7 00:56:23.149836 sshd[5941]: Accepted publickey for core from 20.161.92.111 port 51654 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:23.154603 sshd[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:23.168108 systemd-logind[2105]: New session 10 of user core. Mar 7 00:56:23.174701 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 7 00:56:23.294668 containerd[2132]: time="2026-03-07T00:56:23.294589024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:23.297607 containerd[2132]: time="2026-03-07T00:56:23.297526928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 7 00:56:23.300485 containerd[2132]: time="2026-03-07T00:56:23.300409485Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:23.306681 containerd[2132]: time="2026-03-07T00:56:23.306591170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:23.308989 containerd[2132]: time="2026-03-07T00:56:23.308192086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 4.737374157s" Mar 7 00:56:23.308989 containerd[2132]: time="2026-03-07T00:56:23.308256210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 7 00:56:23.311597 containerd[2132]: time="2026-03-07T00:56:23.311312614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 7 00:56:23.347540 containerd[2132]: time="2026-03-07T00:56:23.347321141Z" level=info msg="CreateContainer within sandbox 
\"05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 7 00:56:23.378067 containerd[2132]: time="2026-03-07T00:56:23.378001850Z" level=info msg="CreateContainer within sandbox \"05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1a1a4102c89f528a7110c3c1a32bd9d62d5b4de3c0251fa54787775f9742073e\"" Mar 7 00:56:23.380672 containerd[2132]: time="2026-03-07T00:56:23.380454424Z" level=info msg="StartContainer for \"1a1a4102c89f528a7110c3c1a32bd9d62d5b4de3c0251fa54787775f9742073e\"" Mar 7 00:56:23.569641 containerd[2132]: time="2026-03-07T00:56:23.569554988Z" level=info msg="StartContainer for \"1a1a4102c89f528a7110c3c1a32bd9d62d5b4de3c0251fa54787775f9742073e\" returns successfully" Mar 7 00:56:23.752736 sshd[5941]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:23.777013 systemd[1]: sshd@9-172.31.21.232:22-20.161.92.111:51654.service: Deactivated successfully. Mar 7 00:56:23.800635 systemd[1]: session-10.scope: Deactivated successfully. Mar 7 00:56:23.802075 systemd-logind[2105]: Session 10 logged out. Waiting for processes to exit. Mar 7 00:56:23.814198 systemd-logind[2105]: Removed session 10. 
Mar 7 00:56:24.226570 kubelet[3603]: I0307 00:56:24.226455 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-578b9ccf58-j8drf" podStartSLOduration=32.342010746 podStartE2EDuration="41.226430926s" podCreationTimestamp="2026-03-07 00:55:43 +0000 UTC" firstStartedPulling="2026-03-07 00:56:14.426062819 +0000 UTC m=+62.868732827" lastFinishedPulling="2026-03-07 00:56:23.310483011 +0000 UTC m=+71.753153007" observedRunningTime="2026-03-07 00:56:24.221642621 +0000 UTC m=+72.664312737" watchObservedRunningTime="2026-03-07 00:56:24.226430926 +0000 UTC m=+72.669100934" Mar 7 00:56:24.677486 containerd[2132]: time="2026-03-07T00:56:24.677385294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:24.679551 containerd[2132]: time="2026-03-07T00:56:24.679113606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 7 00:56:24.681166 containerd[2132]: time="2026-03-07T00:56:24.681056837Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:24.686663 containerd[2132]: time="2026-03-07T00:56:24.686552836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:24.688014 containerd[2132]: time="2026-03-07T00:56:24.687511815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 
1.376141813s" Mar 7 00:56:24.688014 containerd[2132]: time="2026-03-07T00:56:24.687569660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 7 00:56:24.690247 containerd[2132]: time="2026-03-07T00:56:24.690177555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 7 00:56:24.699073 containerd[2132]: time="2026-03-07T00:56:24.698862407Z" level=info msg="CreateContainer within sandbox \"20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 7 00:56:24.727986 containerd[2132]: time="2026-03-07T00:56:24.727370631Z" level=info msg="CreateContainer within sandbox \"20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8b8b4a6be19e3bdb08a16030f5adaf3f7b8fb75baa80e3911df588033e31686a\"" Mar 7 00:56:24.735674 containerd[2132]: time="2026-03-07T00:56:24.733099005Z" level=info msg="StartContainer for \"8b8b4a6be19e3bdb08a16030f5adaf3f7b8fb75baa80e3911df588033e31686a\"" Mar 7 00:56:24.859576 containerd[2132]: time="2026-03-07T00:56:24.859501680Z" level=info msg="StartContainer for \"8b8b4a6be19e3bdb08a16030f5adaf3f7b8fb75baa80e3911df588033e31686a\" returns successfully" Mar 7 00:56:25.203220 containerd[2132]: time="2026-03-07T00:56:25.202691283Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:25.205972 containerd[2132]: time="2026-03-07T00:56:25.204598244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 7 00:56:25.213255 containerd[2132]: time="2026-03-07T00:56:25.213161284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id 
\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 522.916843ms" Mar 7 00:56:25.213255 containerd[2132]: time="2026-03-07T00:56:25.213252277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 7 00:56:25.217666 containerd[2132]: time="2026-03-07T00:56:25.217491608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 7 00:56:25.229611 containerd[2132]: time="2026-03-07T00:56:25.229548838Z" level=info msg="CreateContainer within sandbox \"5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 7 00:56:25.257111 containerd[2132]: time="2026-03-07T00:56:25.256882694Z" level=info msg="CreateContainer within sandbox \"5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"58f30c696ef04809c4ec17a3f235d5c7387e5f9065c3ab6bbe4f6a638d4b2761\"" Mar 7 00:56:25.262144 containerd[2132]: time="2026-03-07T00:56:25.258322934Z" level=info msg="StartContainer for \"58f30c696ef04809c4ec17a3f235d5c7387e5f9065c3ab6bbe4f6a638d4b2761\"" Mar 7 00:56:25.391332 containerd[2132]: time="2026-03-07T00:56:25.391259310Z" level=info msg="StartContainer for \"58f30c696ef04809c4ec17a3f235d5c7387e5f9065c3ab6bbe4f6a638d4b2761\" returns successfully" Mar 7 00:56:26.274734 kubelet[3603]: I0307 00:56:26.274006 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-67454779cb-lj22q" podStartSLOduration=37.478866398 podStartE2EDuration="47.273824626s" podCreationTimestamp="2026-03-07 00:55:39 +0000 UTC" 
firstStartedPulling="2026-03-07 00:56:15.420699812 +0000 UTC m=+63.863369820" lastFinishedPulling="2026-03-07 00:56:25.215657968 +0000 UTC m=+73.658328048" observedRunningTime="2026-03-07 00:56:26.273046097 +0000 UTC m=+74.715716225" watchObservedRunningTime="2026-03-07 00:56:26.273824626 +0000 UTC m=+74.716494646" Mar 7 00:56:27.233965 kubelet[3603]: I0307 00:56:27.233520 3603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:56:27.733883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028813335.mount: Deactivated successfully. Mar 7 00:56:28.703433 containerd[2132]: time="2026-03-07T00:56:28.701139914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:28.705261 containerd[2132]: time="2026-03-07T00:56:28.705168839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 7 00:56:28.711154 containerd[2132]: time="2026-03-07T00:56:28.709985707Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:28.722074 containerd[2132]: time="2026-03-07T00:56:28.721998082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:28.724576 containerd[2132]: time="2026-03-07T00:56:28.724410267Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.506827342s" Mar 7 
00:56:28.724812 containerd[2132]: time="2026-03-07T00:56:28.724775238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 7 00:56:28.730461 containerd[2132]: time="2026-03-07T00:56:28.730394514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 7 00:56:28.738348 containerd[2132]: time="2026-03-07T00:56:28.738027905Z" level=info msg="CreateContainer within sandbox \"3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 7 00:56:28.766976 containerd[2132]: time="2026-03-07T00:56:28.766348390Z" level=info msg="CreateContainer within sandbox \"3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2180a64f22d4595dc24bc774a0bf6ccb2f4c8694fcc86217384d12ff558a5fc9\"" Mar 7 00:56:28.772472 containerd[2132]: time="2026-03-07T00:56:28.771741089Z" level=info msg="StartContainer for \"2180a64f22d4595dc24bc774a0bf6ccb2f4c8694fcc86217384d12ff558a5fc9\"" Mar 7 00:56:28.862209 systemd[1]: Started sshd@10-172.31.21.232:22-20.161.92.111:51666.service - OpenSSH per-connection server daemon (20.161.92.111:51666). Mar 7 00:56:29.118114 containerd[2132]: time="2026-03-07T00:56:29.117889487Z" level=info msg="StartContainer for \"2180a64f22d4595dc24bc774a0bf6ccb2f4c8694fcc86217384d12ff558a5fc9\" returns successfully" Mar 7 00:56:29.505186 sshd[6138]: Accepted publickey for core from 20.161.92.111 port 51666 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:29.514057 sshd[6138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:29.537898 systemd-logind[2105]: New session 11 of user core. Mar 7 00:56:29.547585 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 7 00:56:30.145303 sshd[6138]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:30.158028 systemd[1]: sshd@10-172.31.21.232:22-20.161.92.111:51666.service: Deactivated successfully. Mar 7 00:56:30.158827 systemd-logind[2105]: Session 11 logged out. Waiting for processes to exit. Mar 7 00:56:30.175646 systemd[1]: session-11.scope: Deactivated successfully. Mar 7 00:56:30.182594 systemd-logind[2105]: Removed session 11. Mar 7 00:56:30.236605 systemd[1]: Started sshd@11-172.31.21.232:22-20.161.92.111:56808.service - OpenSSH per-connection server daemon (20.161.92.111:56808). Mar 7 00:56:30.628469 kubelet[3603]: I0307 00:56:30.628188 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-svm6z" podStartSLOduration=36.336452811 podStartE2EDuration="49.62816399s" podCreationTimestamp="2026-03-07 00:55:41 +0000 UTC" firstStartedPulling="2026-03-07 00:56:15.437045657 +0000 UTC m=+63.879715665" lastFinishedPulling="2026-03-07 00:56:28.728756836 +0000 UTC m=+77.171426844" observedRunningTime="2026-03-07 00:56:29.301333161 +0000 UTC m=+77.744003181" watchObservedRunningTime="2026-03-07 00:56:30.62816399 +0000 UTC m=+79.070833998" Mar 7 00:56:30.797756 sshd[6203]: Accepted publickey for core from 20.161.92.111 port 56808 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:30.808246 sshd[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:30.844067 systemd-logind[2105]: New session 12 of user core. Mar 7 00:56:30.853607 systemd[1]: Started session-12.scope - Session 12 of User core. 
Mar 7 00:56:30.901625 containerd[2132]: time="2026-03-07T00:56:30.900048506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:30.904976 containerd[2132]: time="2026-03-07T00:56:30.904523550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 7 00:56:30.907374 containerd[2132]: time="2026-03-07T00:56:30.907310803Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:30.914803 containerd[2132]: time="2026-03-07T00:56:30.914749445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:30.917564 containerd[2132]: time="2026-03-07T00:56:30.916350601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 2.185100119s" Mar 7 00:56:30.917564 containerd[2132]: time="2026-03-07T00:56:30.916418531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 7 00:56:30.918657 containerd[2132]: time="2026-03-07T00:56:30.918549380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 7 00:56:30.927923 containerd[2132]: time="2026-03-07T00:56:30.927866348Z" level=info msg="CreateContainer within sandbox 
\"60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 7 00:56:30.961062 containerd[2132]: time="2026-03-07T00:56:30.960976987Z" level=info msg="CreateContainer within sandbox \"60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4dacb8b4f924edd0187121d38aa2b639e1f99dd2e9bc1bfe4d49af4c226427f0\"" Mar 7 00:56:30.965830 containerd[2132]: time="2026-03-07T00:56:30.963359002Z" level=info msg="StartContainer for \"4dacb8b4f924edd0187121d38aa2b639e1f99dd2e9bc1bfe4d49af4c226427f0\"" Mar 7 00:56:31.161277 containerd[2132]: time="2026-03-07T00:56:31.159356977Z" level=info msg="StartContainer for \"4dacb8b4f924edd0187121d38aa2b639e1f99dd2e9bc1bfe4d49af4c226427f0\" returns successfully" Mar 7 00:56:31.501303 sshd[6203]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:31.517927 systemd[1]: sshd@11-172.31.21.232:22-20.161.92.111:56808.service: Deactivated successfully. Mar 7 00:56:31.538635 systemd[1]: session-12.scope: Deactivated successfully. Mar 7 00:56:31.539269 systemd-logind[2105]: Session 12 logged out. Waiting for processes to exit. Mar 7 00:56:31.547768 systemd-logind[2105]: Removed session 12. Mar 7 00:56:31.606448 systemd[1]: Started sshd@12-172.31.21.232:22-20.161.92.111:56824.service - OpenSSH per-connection server daemon (20.161.92.111:56824). Mar 7 00:56:32.184371 sshd[6295]: Accepted publickey for core from 20.161.92.111 port 56824 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:32.187415 sshd[6295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:32.203575 systemd-logind[2105]: New session 13 of user core. Mar 7 00:56:32.210686 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 7 00:56:32.727197 sshd[6295]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:32.735924 systemd[1]: sshd@12-172.31.21.232:22-20.161.92.111:56824.service: Deactivated successfully. Mar 7 00:56:32.746704 systemd[1]: session-13.scope: Deactivated successfully. Mar 7 00:56:32.750163 systemd-logind[2105]: Session 13 logged out. Waiting for processes to exit. Mar 7 00:56:32.754011 systemd-logind[2105]: Removed session 13. Mar 7 00:56:33.483053 containerd[2132]: time="2026-03-07T00:56:33.482411104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:33.485516 containerd[2132]: time="2026-03-07T00:56:33.485431297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291" Mar 7 00:56:33.488370 containerd[2132]: time="2026-03-07T00:56:33.488242394Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:33.498335 containerd[2132]: time="2026-03-07T00:56:33.497187020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:33.502072 containerd[2132]: time="2026-03-07T00:56:33.501789965Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.583158487s" Mar 7 00:56:33.502072 containerd[2132]: 
time="2026-03-07T00:56:33.501861424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\"" Mar 7 00:56:33.509095 containerd[2132]: time="2026-03-07T00:56:33.508623300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 7 00:56:33.516033 containerd[2132]: time="2026-03-07T00:56:33.515918061Z" level=info msg="CreateContainer within sandbox \"20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 7 00:56:33.553755 containerd[2132]: time="2026-03-07T00:56:33.553652295Z" level=info msg="CreateContainer within sandbox \"20dbbd752f9649fd799be457519b68de2bf54ab3f79340cadb7dfc76c040ffd7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"93fa1be6e414dd3cb6c0a8bfa0c5050452c17e45ce0c28536a343329490c8922\"" Mar 7 00:56:33.559479 containerd[2132]: time="2026-03-07T00:56:33.558877703Z" level=info msg="StartContainer for \"93fa1be6e414dd3cb6c0a8bfa0c5050452c17e45ce0c28536a343329490c8922\"" Mar 7 00:56:33.718170 containerd[2132]: time="2026-03-07T00:56:33.718082474Z" level=info msg="StartContainer for \"93fa1be6e414dd3cb6c0a8bfa0c5050452c17e45ce0c28536a343329490c8922\" returns successfully" Mar 7 00:56:34.075606 kubelet[3603]: I0307 00:56:34.075469 3603 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 7 00:56:34.075606 kubelet[3603]: I0307 00:56:34.075546 3603 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 7 00:56:34.332589 kubelet[3603]: I0307 00:56:34.331863 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/csi-node-driver-tdrbz" podStartSLOduration=32.783812591 podStartE2EDuration="51.331840436s" podCreationTimestamp="2026-03-07 00:55:43 +0000 UTC" firstStartedPulling="2026-03-07 00:56:14.957535798 +0000 UTC m=+63.400205806" lastFinishedPulling="2026-03-07 00:56:33.505563643 +0000 UTC m=+81.948233651" observedRunningTime="2026-03-07 00:56:34.330854384 +0000 UTC m=+82.773524404" watchObservedRunningTime="2026-03-07 00:56:34.331840436 +0000 UTC m=+82.774510432" Mar 7 00:56:35.785222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2768625205.mount: Deactivated successfully. Mar 7 00:56:35.824612 containerd[2132]: time="2026-03-07T00:56:35.824520554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:35.827293 containerd[2132]: time="2026-03-07T00:56:35.827212671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594" Mar 7 00:56:35.830483 containerd[2132]: time="2026-03-07T00:56:35.830237655Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:35.838301 containerd[2132]: time="2026-03-07T00:56:35.838161135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 7 00:56:35.840534 containerd[2132]: time="2026-03-07T00:56:35.840207810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.331499748s" Mar 7 00:56:35.840534 containerd[2132]: time="2026-03-07T00:56:35.840283003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\"" Mar 7 00:56:35.850851 containerd[2132]: time="2026-03-07T00:56:35.850739689Z" level=info msg="CreateContainer within sandbox \"60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 7 00:56:35.884561 containerd[2132]: time="2026-03-07T00:56:35.884383947Z" level=info msg="CreateContainer within sandbox \"60e4a5438cd9f19835950ab6cb3f9099dfb17c8fcbd1d319d4aca2b9635d141e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"2db8a109c0dd4081b7abc401991ea629464f7eb0a7a077af6b22e7e28f105348\"" Mar 7 00:56:35.891105 containerd[2132]: time="2026-03-07T00:56:35.890455117Z" level=info msg="StartContainer for \"2db8a109c0dd4081b7abc401991ea629464f7eb0a7a077af6b22e7e28f105348\"" Mar 7 00:56:36.053899 containerd[2132]: time="2026-03-07T00:56:36.053675716Z" level=info msg="StartContainer for \"2db8a109c0dd4081b7abc401991ea629464f7eb0a7a077af6b22e7e28f105348\" returns successfully" Mar 7 00:56:36.512034 kubelet[3603]: I0307 00:56:36.511415 3603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 7 00:56:36.559818 kubelet[3603]: I0307 00:56:36.559543 3603 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7f85b996cc-22brp" podStartSLOduration=6.46220285 podStartE2EDuration="26.559520495s" podCreationTimestamp="2026-03-07 00:56:10 +0000 UTC" firstStartedPulling="2026-03-07 00:56:15.745138372 +0000 UTC m=+64.187808368" lastFinishedPulling="2026-03-07 00:56:35.842456017 +0000 UTC m=+84.285126013" 
observedRunningTime="2026-03-07 00:56:36.347796557 +0000 UTC m=+84.790466589" watchObservedRunningTime="2026-03-07 00:56:36.559520495 +0000 UTC m=+85.002190503" Mar 7 00:56:37.814488 systemd[1]: Started sshd@13-172.31.21.232:22-20.161.92.111:56828.service - OpenSSH per-connection server daemon (20.161.92.111:56828). Mar 7 00:56:38.354336 sshd[6410]: Accepted publickey for core from 20.161.92.111 port 56828 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:38.358762 sshd[6410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:38.370656 systemd-logind[2105]: New session 14 of user core. Mar 7 00:56:38.378613 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 7 00:56:38.868189 sshd[6410]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:38.877874 systemd[1]: sshd@13-172.31.21.232:22-20.161.92.111:56828.service: Deactivated successfully. Mar 7 00:56:38.887379 systemd[1]: session-14.scope: Deactivated successfully. Mar 7 00:56:38.892444 systemd-logind[2105]: Session 14 logged out. Waiting for processes to exit. Mar 7 00:56:38.895472 systemd-logind[2105]: Removed session 14. Mar 7 00:56:43.974665 systemd[1]: Started sshd@14-172.31.21.232:22-20.161.92.111:37620.service - OpenSSH per-connection server daemon (20.161.92.111:37620). Mar 7 00:56:44.537970 sshd[6447]: Accepted publickey for core from 20.161.92.111 port 37620 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:44.537568 sshd[6447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:44.552286 systemd-logind[2105]: New session 15 of user core. Mar 7 00:56:44.559825 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 7 00:56:45.102739 sshd[6447]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:45.113423 systemd-logind[2105]: Session 15 logged out. Waiting for processes to exit. 
Mar 7 00:56:45.114018 systemd[1]: sshd@14-172.31.21.232:22-20.161.92.111:37620.service: Deactivated successfully. Mar 7 00:56:45.122786 systemd[1]: session-15.scope: Deactivated successfully. Mar 7 00:56:45.126997 systemd-logind[2105]: Removed session 15. Mar 7 00:56:45.189473 systemd[1]: Started sshd@15-172.31.21.232:22-20.161.92.111:37622.service - OpenSSH per-connection server daemon (20.161.92.111:37622). Mar 7 00:56:45.718928 sshd[6466]: Accepted publickey for core from 20.161.92.111 port 37622 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:45.723367 sshd[6466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:45.755074 systemd-logind[2105]: New session 16 of user core. Mar 7 00:56:45.760542 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 7 00:56:46.618792 sshd[6466]: pam_unix(sshd:session): session closed for user core Mar 7 00:56:46.627918 systemd-logind[2105]: Session 16 logged out. Waiting for processes to exit. Mar 7 00:56:46.629482 systemd[1]: sshd@15-172.31.21.232:22-20.161.92.111:37622.service: Deactivated successfully. Mar 7 00:56:46.638213 systemd[1]: session-16.scope: Deactivated successfully. Mar 7 00:56:46.640565 systemd-logind[2105]: Removed session 16. Mar 7 00:56:46.712183 systemd[1]: Started sshd@16-172.31.21.232:22-20.161.92.111:37636.service - OpenSSH per-connection server daemon (20.161.92.111:37636). Mar 7 00:56:47.227864 sshd[6481]: Accepted publickey for core from 20.161.92.111 port 37636 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:56:47.231485 sshd[6481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:56:47.239377 systemd-logind[2105]: New session 17 of user core. Mar 7 00:56:47.243529 systemd[1]: Started session-17.scope - Session 17 of User core. 
Mar 7 00:56:48.763575 sshd[6481]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:48.775104 systemd[1]: sshd@16-172.31.21.232:22-20.161.92.111:37636.service: Deactivated successfully.
Mar 7 00:56:48.784232 systemd[1]: session-17.scope: Deactivated successfully.
Mar 7 00:56:48.787514 systemd-logind[2105]: Session 17 logged out. Waiting for processes to exit.
Mar 7 00:56:48.790320 systemd-logind[2105]: Removed session 17.
Mar 7 00:56:48.846617 systemd[1]: Started sshd@17-172.31.21.232:22-20.161.92.111:37642.service - OpenSSH per-connection server daemon (20.161.92.111:37642).
Mar 7 00:56:49.372103 sshd[6508]: Accepted publickey for core from 20.161.92.111 port 37642 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:49.375242 sshd[6508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:49.387372 systemd-logind[2105]: New session 18 of user core.
Mar 7 00:56:49.400809 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 7 00:56:50.157670 sshd[6508]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:50.166700 systemd[1]: sshd@17-172.31.21.232:22-20.161.92.111:37642.service: Deactivated successfully.
Mar 7 00:56:50.177522 systemd[1]: session-18.scope: Deactivated successfully.
Mar 7 00:56:50.180352 systemd-logind[2105]: Session 18 logged out. Waiting for processes to exit.
Mar 7 00:56:50.183706 systemd-logind[2105]: Removed session 18.
Mar 7 00:56:50.249434 systemd[1]: Started sshd@18-172.31.21.232:22-20.161.92.111:39512.service - OpenSSH per-connection server daemon (20.161.92.111:39512).
Mar 7 00:56:50.752776 sshd[6520]: Accepted publickey for core from 20.161.92.111 port 39512 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:50.755891 sshd[6520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:50.765519 systemd-logind[2105]: New session 19 of user core.
Mar 7 00:56:50.776559 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 7 00:56:51.243270 sshd[6520]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:51.252199 systemd[1]: sshd@18-172.31.21.232:22-20.161.92.111:39512.service: Deactivated successfully.
Mar 7 00:56:51.264263 systemd[1]: session-19.scope: Deactivated successfully.
Mar 7 00:56:51.267701 systemd-logind[2105]: Session 19 logged out. Waiting for processes to exit.
Mar 7 00:56:51.271244 systemd-logind[2105]: Removed session 19.
Mar 7 00:56:56.334437 systemd[1]: Started sshd@19-172.31.21.232:22-20.161.92.111:39514.service - OpenSSH per-connection server daemon (20.161.92.111:39514).
Mar 7 00:56:56.861901 sshd[6554]: Accepted publickey for core from 20.161.92.111 port 39514 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:56:56.865517 sshd[6554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:56:56.873665 systemd-logind[2105]: New session 20 of user core.
Mar 7 00:56:56.883439 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 7 00:56:57.347177 sshd[6554]: pam_unix(sshd:session): session closed for user core
Mar 7 00:56:57.355535 systemd[1]: sshd@19-172.31.21.232:22-20.161.92.111:39514.service: Deactivated successfully.
Mar 7 00:56:57.363525 systemd[1]: session-20.scope: Deactivated successfully.
Mar 7 00:56:57.365866 systemd-logind[2105]: Session 20 logged out. Waiting for processes to exit.
Mar 7 00:56:57.369434 systemd-logind[2105]: Removed session 20.
Mar 7 00:57:02.437434 systemd[1]: Started sshd@20-172.31.21.232:22-20.161.92.111:34004.service - OpenSSH per-connection server daemon (20.161.92.111:34004).
Mar 7 00:57:02.945993 sshd[6617]: Accepted publickey for core from 20.161.92.111 port 34004 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:57:02.948569 sshd[6617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:02.958055 systemd-logind[2105]: New session 21 of user core.
Mar 7 00:57:02.963532 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 7 00:57:03.434569 sshd[6617]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:03.452705 systemd[1]: sshd@20-172.31.21.232:22-20.161.92.111:34004.service: Deactivated successfully.
Mar 7 00:57:03.465665 systemd-logind[2105]: Session 21 logged out. Waiting for processes to exit.
Mar 7 00:57:03.467228 systemd[1]: session-21.scope: Deactivated successfully.
Mar 7 00:57:03.474285 systemd-logind[2105]: Removed session 21.
Mar 7 00:57:04.855533 kubelet[3603]: I0307 00:57:04.854773 3603 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 7 00:57:08.533190 systemd[1]: Started sshd@21-172.31.21.232:22-20.161.92.111:34012.service - OpenSSH per-connection server daemon (20.161.92.111:34012).
Mar 7 00:57:09.095784 sshd[6653]: Accepted publickey for core from 20.161.92.111 port 34012 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:57:09.100744 sshd[6653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:09.112054 systemd-logind[2105]: New session 22 of user core.
Mar 7 00:57:09.122563 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 7 00:57:09.599299 sshd[6653]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:09.612601 systemd[1]: sshd@21-172.31.21.232:22-20.161.92.111:34012.service: Deactivated successfully.
Mar 7 00:57:09.620653 systemd[1]: session-22.scope: Deactivated successfully.
Mar 7 00:57:09.623325 systemd-logind[2105]: Session 22 logged out. Waiting for processes to exit.
Mar 7 00:57:09.625818 systemd-logind[2105]: Removed session 22.
Mar 7 00:57:14.685478 systemd[1]: Started sshd@22-172.31.21.232:22-20.161.92.111:43408.service - OpenSSH per-connection server daemon (20.161.92.111:43408).
Mar 7 00:57:15.215116 sshd[6689]: Accepted publickey for core from 20.161.92.111 port 43408 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw
Mar 7 00:57:15.219306 sshd[6689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 7 00:57:15.228998 systemd-logind[2105]: New session 23 of user core.
Mar 7 00:57:15.239533 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 7 00:57:15.614768 containerd[2132]: time="2026-03-07T00:57:15.614693931Z" level=info msg="StopPodSandbox for \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\""
Mar 7 00:57:15.724232 sshd[6689]: pam_unix(sshd:session): session closed for user core
Mar 7 00:57:15.735358 systemd-logind[2105]: Session 23 logged out. Waiting for processes to exit.
Mar 7 00:57:15.738564 systemd[1]: sshd@22-172.31.21.232:22-20.161.92.111:43408.service: Deactivated successfully.
Mar 7 00:57:15.751135 systemd[1]: session-23.scope: Deactivated successfully.
Mar 7 00:57:15.755249 systemd-logind[2105]: Removed session 23.
Mar 7 00:57:15.792591 containerd[2132]: 2026-03-07 00:57:15.705 [WARNING][6708] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"42e4545a-e486-4f54-bd6f-2806121371ca", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603", Pod:"coredns-674b8bbfcf-sndg8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali579a5141479", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 00:57:15.792591 containerd[2132]: 2026-03-07 00:57:15.706
[INFO][6708] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569"
Mar 7 00:57:15.792591 containerd[2132]: 2026-03-07 00:57:15.706 [INFO][6708] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" iface="eth0" netns=""
Mar 7 00:57:15.792591 containerd[2132]: 2026-03-07 00:57:15.706 [INFO][6708] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569"
Mar 7 00:57:15.792591 containerd[2132]: 2026-03-07 00:57:15.706 [INFO][6708] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569"
Mar 7 00:57:15.792591 containerd[2132]: 2026-03-07 00:57:15.766 [INFO][6715] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" HandleID="k8s-pod-network.b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0"
Mar 7 00:57:15.792591 containerd[2132]: 2026-03-07 00:57:15.766 [INFO][6715] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 00:57:15.792591 containerd[2132]: 2026-03-07 00:57:15.766 [INFO][6715] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 00:57:15.792591 containerd[2132]: 2026-03-07 00:57:15.781 [WARNING][6715] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" HandleID="k8s-pod-network.b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0"
Mar 7 00:57:15.792591 containerd[2132]: 2026-03-07 00:57:15.781 [INFO][6715] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" HandleID="k8s-pod-network.b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0"
Mar 7 00:57:15.792591 containerd[2132]: 2026-03-07 00:57:15.784 [INFO][6715] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 00:57:15.792591 containerd[2132]: 2026-03-07 00:57:15.787 [INFO][6708] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569"
Mar 7 00:57:15.794026 containerd[2132]: time="2026-03-07T00:57:15.792668091Z" level=info msg="TearDown network for sandbox \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\" successfully"
Mar 7 00:57:15.794026 containerd[2132]: time="2026-03-07T00:57:15.792707674Z" level=info msg="StopPodSandbox for \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\" returns successfully"
Mar 7 00:57:15.794026 containerd[2132]: time="2026-03-07T00:57:15.793559716Z" level=info msg="RemovePodSandbox for \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\""
Mar 7 00:57:15.794026 containerd[2132]: time="2026-03-07T00:57:15.793613863Z" level=info msg="Forcibly stopping sandbox \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\""
Mar 7 00:57:16.010701 containerd[2132]: 2026-03-07 00:57:15.885 [WARNING][6733] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"42e4545a-e486-4f54-bd6f-2806121371ca", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"fe9929dfaa427b97d6f40f60d9f515f87e6cc6f1d01902eed8e5d5870002c603", Pod:"coredns-674b8bbfcf-sndg8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali579a5141479", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 00:57:16.010701 containerd[2132]: 2026-03-07 00:57:15.886
[INFO][6733] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569"
Mar 7 00:57:16.010701 containerd[2132]: 2026-03-07 00:57:15.886 [INFO][6733] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" iface="eth0" netns=""
Mar 7 00:57:16.010701 containerd[2132]: 2026-03-07 00:57:15.886 [INFO][6733] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569"
Mar 7 00:57:16.010701 containerd[2132]: 2026-03-07 00:57:15.887 [INFO][6733] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569"
Mar 7 00:57:16.010701 containerd[2132]: 2026-03-07 00:57:15.952 [INFO][6740] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" HandleID="k8s-pod-network.b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0"
Mar 7 00:57:16.010701 containerd[2132]: 2026-03-07 00:57:15.953 [INFO][6740] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 00:57:16.010701 containerd[2132]: 2026-03-07 00:57:15.953 [INFO][6740] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 00:57:16.010701 containerd[2132]: 2026-03-07 00:57:15.985 [WARNING][6740] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" HandleID="k8s-pod-network.b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0"
Mar 7 00:57:16.010701 containerd[2132]: 2026-03-07 00:57:15.986 [INFO][6740] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" HandleID="k8s-pod-network.b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--sndg8-eth0"
Mar 7 00:57:16.010701 containerd[2132]: 2026-03-07 00:57:15.997 [INFO][6740] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 00:57:16.010701 containerd[2132]: 2026-03-07 00:57:16.004 [INFO][6733] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569"
Mar 7 00:57:16.012356 containerd[2132]: time="2026-03-07T00:57:16.010776208Z" level=info msg="TearDown network for sandbox \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\" successfully"
Mar 7 00:57:16.020857 containerd[2132]: time="2026-03-07T00:57:16.020786150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 00:57:16.021031 containerd[2132]: time="2026-03-07T00:57:16.020908095Z" level=info msg="RemovePodSandbox \"b42203f3028d9b7ee0b4109c3ce3bc9ab4a9c299ea12d650124bbeaf5dd70569\" returns successfully"
Mar 7 00:57:16.021815 containerd[2132]: time="2026-03-07T00:57:16.021751337Z" level=info msg="StopPodSandbox for \"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\""
Mar 7 00:57:16.170091 containerd[2132]: 2026-03-07 00:57:16.096 [WARNING][6754] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0", GenerateName:"calico-apiserver-67454779cb-", Namespace:"calico-system", SelfLink:"", UID:"466a569a-796d-4554-bfdc-84553d49d7a8", ResourceVersion:"1414", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67454779cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466", Pod:"calico-apiserver-67454779cb-lj22q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"",
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibdcf285ae42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 00:57:16.170091 containerd[2132]: 2026-03-07 00:57:16.097 [INFO][6754] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd"
Mar 7 00:57:16.170091 containerd[2132]: 2026-03-07 00:57:16.097 [INFO][6754] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" iface="eth0" netns=""
Mar 7 00:57:16.170091 containerd[2132]: 2026-03-07 00:57:16.097 [INFO][6754] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd"
Mar 7 00:57:16.170091 containerd[2132]: 2026-03-07 00:57:16.097 [INFO][6754] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd"
Mar 7 00:57:16.170091 containerd[2132]: 2026-03-07 00:57:16.142 [INFO][6762] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" HandleID="k8s-pod-network.1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0"
Mar 7 00:57:16.170091 containerd[2132]: 2026-03-07 00:57:16.143 [INFO][6762] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 00:57:16.170091 containerd[2132]: 2026-03-07 00:57:16.143 [INFO][6762] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 00:57:16.170091 containerd[2132]: 2026-03-07 00:57:16.158 [WARNING][6762] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" HandleID="k8s-pod-network.1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0"
Mar 7 00:57:16.170091 containerd[2132]: 2026-03-07 00:57:16.158 [INFO][6762] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" HandleID="k8s-pod-network.1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0"
Mar 7 00:57:16.170091 containerd[2132]: 2026-03-07 00:57:16.162 [INFO][6762] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 00:57:16.170091 containerd[2132]: 2026-03-07 00:57:16.166 [INFO][6754] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd"
Mar 7 00:57:16.171570 containerd[2132]: time="2026-03-07T00:57:16.170149484Z" level=info msg="TearDown network for sandbox \"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\" successfully"
Mar 7 00:57:16.171570 containerd[2132]: time="2026-03-07T00:57:16.170190844Z" level=info msg="StopPodSandbox for \"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\" returns successfully"
Mar 7 00:57:16.171570 containerd[2132]: time="2026-03-07T00:57:16.171142584Z" level=info msg="RemovePodSandbox for \"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\""
Mar 7 00:57:16.171570 containerd[2132]: time="2026-03-07T00:57:16.171190692Z" level=info msg="Forcibly stopping sandbox \"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\""
Mar 7 00:57:16.329547 containerd[2132]: 2026-03-07 00:57:16.254 [WARNING][6776] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0", GenerateName:"calico-apiserver-67454779cb-", Namespace:"calico-system", SelfLink:"", UID:"466a569a-796d-4554-bfdc-84553d49d7a8", ResourceVersion:"1414", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67454779cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"5f558316f92b9d205fd60e730412d855a593fb9bcaa9c2d0847fcb2f61849466", Pod:"calico-apiserver-67454779cb-lj22q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calibdcf285ae42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 00:57:16.329547 containerd[2132]: 2026-03-07 00:57:16.254 [INFO][6776] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd"
Mar 7 00:57:16.329547 containerd[2132]: 2026-03-07 00:57:16.254 [INFO][6776] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name,
ignoring. ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" iface="eth0" netns=""
Mar 7 00:57:16.329547 containerd[2132]: 2026-03-07 00:57:16.254 [INFO][6776] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd"
Mar 7 00:57:16.329547 containerd[2132]: 2026-03-07 00:57:16.254 [INFO][6776] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd"
Mar 7 00:57:16.329547 containerd[2132]: 2026-03-07 00:57:16.302 [INFO][6783] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" HandleID="k8s-pod-network.1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0"
Mar 7 00:57:16.329547 containerd[2132]: 2026-03-07 00:57:16.303 [INFO][6783] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 00:57:16.329547 containerd[2132]: 2026-03-07 00:57:16.303 [INFO][6783] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 00:57:16.329547 containerd[2132]: 2026-03-07 00:57:16.318 [WARNING][6783] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" HandleID="k8s-pod-network.1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0"
Mar 7 00:57:16.329547 containerd[2132]: 2026-03-07 00:57:16.318 [INFO][6783] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" HandleID="k8s-pod-network.1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--lj22q-eth0"
Mar 7 00:57:16.329547 containerd[2132]: 2026-03-07 00:57:16.321 [INFO][6783] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 7 00:57:16.329547 containerd[2132]: 2026-03-07 00:57:16.325 [INFO][6776] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd"
Mar 7 00:57:16.329547 containerd[2132]: time="2026-03-07T00:57:16.329341385Z" level=info msg="TearDown network for sandbox \"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\" successfully"
Mar 7 00:57:16.337972 containerd[2132]: time="2026-03-07T00:57:16.337864000Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 7 00:57:16.338112 containerd[2132]: time="2026-03-07T00:57:16.338033645Z" level=info msg="RemovePodSandbox \"1ec85ca13d92625edd370be0038a8d6e6dbfa3cd10623edc0fc7a6c49efda6bd\" returns successfully"
Mar 7 00:57:16.339479 containerd[2132]: time="2026-03-07T00:57:16.339001593Z" level=info msg="StopPodSandbox for \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\""
Mar 7 00:57:16.491119 containerd[2132]: 2026-03-07 00:57:16.415 [WARNING][6797] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"45a627cb-a427-4b7a-bf60-da0e9b3da1b5", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b", Pod:"goldmane-5b85766d88-svm6z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"},
InterfaceName:"calibf3ce04cd78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 7 00:57:16.491119 containerd[2132]: 2026-03-07 00:57:16.415 [INFO][6797] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b"
Mar 7 00:57:16.491119 containerd[2132]: 2026-03-07 00:57:16.415 [INFO][6797] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" iface="eth0" netns=""
Mar 7 00:57:16.491119 containerd[2132]: 2026-03-07 00:57:16.415 [INFO][6797] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b"
Mar 7 00:57:16.491119 containerd[2132]: 2026-03-07 00:57:16.415 [INFO][6797] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b"
Mar 7 00:57:16.491119 containerd[2132]: 2026-03-07 00:57:16.464 [INFO][6806] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" HandleID="k8s-pod-network.89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Workload="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0"
Mar 7 00:57:16.491119 containerd[2132]: 2026-03-07 00:57:16.464 [INFO][6806] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 7 00:57:16.491119 containerd[2132]: 2026-03-07 00:57:16.464 [INFO][6806] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 7 00:57:16.491119 containerd[2132]: 2026-03-07 00:57:16.479 [WARNING][6806] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" HandleID="k8s-pod-network.89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Workload="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" Mar 7 00:57:16.491119 containerd[2132]: 2026-03-07 00:57:16.479 [INFO][6806] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" HandleID="k8s-pod-network.89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Workload="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" Mar 7 00:57:16.491119 containerd[2132]: 2026-03-07 00:57:16.484 [INFO][6806] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:57:16.491119 containerd[2132]: 2026-03-07 00:57:16.487 [INFO][6797] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Mar 7 00:57:16.491905 containerd[2132]: time="2026-03-07T00:57:16.491158482Z" level=info msg="TearDown network for sandbox \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\" successfully" Mar 7 00:57:16.491905 containerd[2132]: time="2026-03-07T00:57:16.491198834Z" level=info msg="StopPodSandbox for \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\" returns successfully" Mar 7 00:57:16.492893 containerd[2132]: time="2026-03-07T00:57:16.492829717Z" level=info msg="RemovePodSandbox for \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\"" Mar 7 00:57:16.492893 containerd[2132]: time="2026-03-07T00:57:16.492887034Z" level=info msg="Forcibly stopping sandbox \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\"" Mar 7 00:57:16.636840 containerd[2132]: 2026-03-07 00:57:16.567 [WARNING][6820] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"45a627cb-a427-4b7a-bf60-da0e9b3da1b5", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"3037bd8d5c85fda3bf49e61e6feb4f0542256c2fe51a206f85cdd1033e49e58b", Pod:"goldmane-5b85766d88-svm6z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibf3ce04cd78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:57:16.636840 containerd[2132]: 2026-03-07 00:57:16.567 [INFO][6820] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Mar 7 00:57:16.636840 containerd[2132]: 2026-03-07 00:57:16.567 [INFO][6820] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" iface="eth0" netns="" Mar 7 00:57:16.636840 containerd[2132]: 2026-03-07 00:57:16.567 [INFO][6820] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Mar 7 00:57:16.636840 containerd[2132]: 2026-03-07 00:57:16.567 [INFO][6820] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Mar 7 00:57:16.636840 containerd[2132]: 2026-03-07 00:57:16.613 [INFO][6827] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" HandleID="k8s-pod-network.89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Workload="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" Mar 7 00:57:16.636840 containerd[2132]: 2026-03-07 00:57:16.613 [INFO][6827] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:57:16.636840 containerd[2132]: 2026-03-07 00:57:16.613 [INFO][6827] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:57:16.636840 containerd[2132]: 2026-03-07 00:57:16.626 [WARNING][6827] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" HandleID="k8s-pod-network.89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Workload="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" Mar 7 00:57:16.636840 containerd[2132]: 2026-03-07 00:57:16.627 [INFO][6827] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" HandleID="k8s-pod-network.89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Workload="ip--172--31--21--232-k8s-goldmane--5b85766d88--svm6z-eth0" Mar 7 00:57:16.636840 containerd[2132]: 2026-03-07 00:57:16.629 [INFO][6827] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:57:16.636840 containerd[2132]: 2026-03-07 00:57:16.632 [INFO][6820] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b" Mar 7 00:57:16.636840 containerd[2132]: time="2026-03-07T00:57:16.636758139Z" level=info msg="TearDown network for sandbox \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\" successfully" Mar 7 00:57:16.645382 containerd[2132]: time="2026-03-07T00:57:16.645267427Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:57:16.646731 containerd[2132]: time="2026-03-07T00:57:16.645391953Z" level=info msg="RemovePodSandbox \"89c7116a1a658ce2f541839e5839c8f67de5909442b89a855be20c020a147b1b\" returns successfully" Mar 7 00:57:16.646731 containerd[2132]: time="2026-03-07T00:57:16.646120958Z" level=info msg="StopPodSandbox for \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\"" Mar 7 00:57:16.787469 containerd[2132]: 2026-03-07 00:57:16.718 [WARNING][6841] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0", GenerateName:"calico-apiserver-67454779cb-", Namespace:"calico-system", SelfLink:"", UID:"055c9d7f-4150-48cb-a2a9-4df82e634570", ResourceVersion:"1277", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67454779cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d", Pod:"calico-apiserver-67454779cb-jkkb4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calief7407eacb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:57:16.787469 containerd[2132]: 2026-03-07 00:57:16.719 [INFO][6841] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Mar 7 00:57:16.787469 containerd[2132]: 2026-03-07 00:57:16.719 [INFO][6841] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" iface="eth0" netns="" Mar 7 00:57:16.787469 containerd[2132]: 2026-03-07 00:57:16.719 [INFO][6841] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Mar 7 00:57:16.787469 containerd[2132]: 2026-03-07 00:57:16.719 [INFO][6841] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Mar 7 00:57:16.787469 containerd[2132]: 2026-03-07 00:57:16.762 [INFO][6848] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" HandleID="k8s-pod-network.5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" Mar 7 00:57:16.787469 containerd[2132]: 2026-03-07 00:57:16.763 [INFO][6848] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:57:16.787469 containerd[2132]: 2026-03-07 00:57:16.763 [INFO][6848] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:57:16.787469 containerd[2132]: 2026-03-07 00:57:16.776 [WARNING][6848] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" HandleID="k8s-pod-network.5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" Mar 7 00:57:16.787469 containerd[2132]: 2026-03-07 00:57:16.776 [INFO][6848] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" HandleID="k8s-pod-network.5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" Mar 7 00:57:16.787469 containerd[2132]: 2026-03-07 00:57:16.779 [INFO][6848] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:57:16.787469 containerd[2132]: 2026-03-07 00:57:16.783 [INFO][6841] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Mar 7 00:57:16.788889 containerd[2132]: time="2026-03-07T00:57:16.787525646Z" level=info msg="TearDown network for sandbox \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\" successfully" Mar 7 00:57:16.788889 containerd[2132]: time="2026-03-07T00:57:16.787567775Z" level=info msg="StopPodSandbox for \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\" returns successfully" Mar 7 00:57:16.788889 containerd[2132]: time="2026-03-07T00:57:16.788560791Z" level=info msg="RemovePodSandbox for \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\"" Mar 7 00:57:16.788889 containerd[2132]: time="2026-03-07T00:57:16.788609043Z" level=info msg="Forcibly stopping sandbox \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\"" Mar 7 00:57:16.934522 containerd[2132]: 2026-03-07 00:57:16.859 [WARNING][6862] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0", GenerateName:"calico-apiserver-67454779cb-", Namespace:"calico-system", SelfLink:"", UID:"055c9d7f-4150-48cb-a2a9-4df82e634570", ResourceVersion:"1277", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67454779cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"6b29454282ff1a28949476cae35bffc946025279d36c3264def7546ae878c70d", Pod:"calico-apiserver-67454779cb-jkkb4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calief7407eacb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:57:16.934522 containerd[2132]: 2026-03-07 00:57:16.860 [INFO][6862] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Mar 7 00:57:16.934522 containerd[2132]: 2026-03-07 00:57:16.860 [INFO][6862] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" iface="eth0" netns="" Mar 7 00:57:16.934522 containerd[2132]: 2026-03-07 00:57:16.860 [INFO][6862] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Mar 7 00:57:16.934522 containerd[2132]: 2026-03-07 00:57:16.860 [INFO][6862] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Mar 7 00:57:16.934522 containerd[2132]: 2026-03-07 00:57:16.907 [INFO][6869] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" HandleID="k8s-pod-network.5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" Mar 7 00:57:16.934522 containerd[2132]: 2026-03-07 00:57:16.907 [INFO][6869] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:57:16.934522 containerd[2132]: 2026-03-07 00:57:16.908 [INFO][6869] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:57:16.934522 containerd[2132]: 2026-03-07 00:57:16.923 [WARNING][6869] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" HandleID="k8s-pod-network.5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" Mar 7 00:57:16.934522 containerd[2132]: 2026-03-07 00:57:16.924 [INFO][6869] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" HandleID="k8s-pod-network.5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Workload="ip--172--31--21--232-k8s-calico--apiserver--67454779cb--jkkb4-eth0" Mar 7 00:57:16.934522 containerd[2132]: 2026-03-07 00:57:16.927 [INFO][6869] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:57:16.934522 containerd[2132]: 2026-03-07 00:57:16.930 [INFO][6862] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610" Mar 7 00:57:16.937854 containerd[2132]: time="2026-03-07T00:57:16.934750398Z" level=info msg="TearDown network for sandbox \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\" successfully" Mar 7 00:57:16.945020 containerd[2132]: time="2026-03-07T00:57:16.944655048Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:57:16.945020 containerd[2132]: time="2026-03-07T00:57:16.944858658Z" level=info msg="RemovePodSandbox \"5f9655d5e23957fc339b93bfca38af1edb28fb0c9e4653a2e749e255956c4610\" returns successfully" Mar 7 00:57:16.945957 containerd[2132]: time="2026-03-07T00:57:16.945869947Z" level=info msg="StopPodSandbox for \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\"" Mar 7 00:57:17.131191 containerd[2132]: 2026-03-07 00:57:17.036 [WARNING][6883] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0", GenerateName:"calico-kube-controllers-578b9ccf58-", Namespace:"calico-system", SelfLink:"", UID:"6210dbc0-cd47-4e52-8ece-fe359619300c", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"578b9ccf58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe", Pod:"calico-kube-controllers-578b9ccf58-j8drf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b12ce616f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:57:17.131191 containerd[2132]: 2026-03-07 00:57:17.037 [INFO][6883] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Mar 7 00:57:17.131191 containerd[2132]: 2026-03-07 00:57:17.037 [INFO][6883] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" iface="eth0" netns="" Mar 7 00:57:17.131191 containerd[2132]: 2026-03-07 00:57:17.037 [INFO][6883] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Mar 7 00:57:17.131191 containerd[2132]: 2026-03-07 00:57:17.037 [INFO][6883] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Mar 7 00:57:17.131191 containerd[2132]: 2026-03-07 00:57:17.091 [INFO][6890] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" HandleID="k8s-pod-network.68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Workload="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" Mar 7 00:57:17.131191 containerd[2132]: 2026-03-07 00:57:17.092 [INFO][6890] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:57:17.131191 containerd[2132]: 2026-03-07 00:57:17.092 [INFO][6890] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:57:17.131191 containerd[2132]: 2026-03-07 00:57:17.115 [WARNING][6890] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" HandleID="k8s-pod-network.68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Workload="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" Mar 7 00:57:17.131191 containerd[2132]: 2026-03-07 00:57:17.115 [INFO][6890] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" HandleID="k8s-pod-network.68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Workload="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" Mar 7 00:57:17.131191 containerd[2132]: 2026-03-07 00:57:17.122 [INFO][6890] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:57:17.131191 containerd[2132]: 2026-03-07 00:57:17.127 [INFO][6883] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Mar 7 00:57:17.133814 containerd[2132]: time="2026-03-07T00:57:17.131253442Z" level=info msg="TearDown network for sandbox \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\" successfully" Mar 7 00:57:17.133814 containerd[2132]: time="2026-03-07T00:57:17.131294599Z" level=info msg="StopPodSandbox for \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\" returns successfully" Mar 7 00:57:17.133814 containerd[2132]: time="2026-03-07T00:57:17.132835401Z" level=info msg="RemovePodSandbox for \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\"" Mar 7 00:57:17.133814 containerd[2132]: time="2026-03-07T00:57:17.132926190Z" level=info msg="Forcibly stopping sandbox \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\"" Mar 7 00:57:17.297763 containerd[2132]: 2026-03-07 00:57:17.221 [WARNING][6904] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0", GenerateName:"calico-kube-controllers-578b9ccf58-", Namespace:"calico-system", SelfLink:"", UID:"6210dbc0-cd47-4e52-8ece-fe359619300c", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"578b9ccf58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"05d61fbc36566aeda39595187fe7ce217ec13be2b963cc9ba92740d7a439d6fe", Pod:"calico-kube-controllers-578b9ccf58-j8drf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b12ce616f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:57:17.297763 containerd[2132]: 2026-03-07 00:57:17.222 [INFO][6904] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Mar 7 00:57:17.297763 containerd[2132]: 2026-03-07 00:57:17.222 [INFO][6904] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" iface="eth0" netns="" Mar 7 00:57:17.297763 containerd[2132]: 2026-03-07 00:57:17.222 [INFO][6904] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Mar 7 00:57:17.297763 containerd[2132]: 2026-03-07 00:57:17.222 [INFO][6904] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Mar 7 00:57:17.297763 containerd[2132]: 2026-03-07 00:57:17.272 [INFO][6911] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" HandleID="k8s-pod-network.68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Workload="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" Mar 7 00:57:17.297763 containerd[2132]: 2026-03-07 00:57:17.273 [INFO][6911] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:57:17.297763 containerd[2132]: 2026-03-07 00:57:17.273 [INFO][6911] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:57:17.297763 containerd[2132]: 2026-03-07 00:57:17.286 [WARNING][6911] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" HandleID="k8s-pod-network.68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Workload="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" Mar 7 00:57:17.297763 containerd[2132]: 2026-03-07 00:57:17.287 [INFO][6911] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" HandleID="k8s-pod-network.68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Workload="ip--172--31--21--232-k8s-calico--kube--controllers--578b9ccf58--j8drf-eth0" Mar 7 00:57:17.297763 containerd[2132]: 2026-03-07 00:57:17.290 [INFO][6911] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:57:17.297763 containerd[2132]: 2026-03-07 00:57:17.293 [INFO][6904] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d" Mar 7 00:57:17.299913 containerd[2132]: time="2026-03-07T00:57:17.298917761Z" level=info msg="TearDown network for sandbox \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\" successfully" Mar 7 00:57:17.307232 containerd[2132]: time="2026-03-07T00:57:17.307131222Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 7 00:57:17.307435 containerd[2132]: time="2026-03-07T00:57:17.307247092Z" level=info msg="RemovePodSandbox \"68e4feabb696550da0aa3e4a0fcc1ef6277098c3e1ea4411cecb93f81ff72e4d\" returns successfully" Mar 7 00:57:17.308514 containerd[2132]: time="2026-03-07T00:57:17.308444174Z" level=info msg="StopPodSandbox for \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\"" Mar 7 00:57:17.452400 containerd[2132]: 2026-03-07 00:57:17.383 [WARNING][6926] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"78cfc558-a091-4954-aac1-f01bb0fadc54", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2", Pod:"coredns-674b8bbfcf-z42qv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califafedb475a6", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:57:17.452400 containerd[2132]: 2026-03-07 00:57:17.384 [INFO][6926] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Mar 7 00:57:17.452400 containerd[2132]: 2026-03-07 00:57:17.384 [INFO][6926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" iface="eth0" netns="" Mar 7 00:57:17.452400 containerd[2132]: 2026-03-07 00:57:17.384 [INFO][6926] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Mar 7 00:57:17.452400 containerd[2132]: 2026-03-07 00:57:17.384 [INFO][6926] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Mar 7 00:57:17.452400 containerd[2132]: 2026-03-07 00:57:17.428 [INFO][6933] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" HandleID="k8s-pod-network.2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" Mar 7 00:57:17.452400 containerd[2132]: 2026-03-07 00:57:17.428 [INFO][6933] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 7 00:57:17.452400 containerd[2132]: 2026-03-07 00:57:17.428 [INFO][6933] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:57:17.452400 containerd[2132]: 2026-03-07 00:57:17.442 [WARNING][6933] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" HandleID="k8s-pod-network.2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" Mar 7 00:57:17.452400 containerd[2132]: 2026-03-07 00:57:17.442 [INFO][6933] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" HandleID="k8s-pod-network.2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" Mar 7 00:57:17.452400 containerd[2132]: 2026-03-07 00:57:17.445 [INFO][6933] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:57:17.452400 containerd[2132]: 2026-03-07 00:57:17.448 [INFO][6926] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Mar 7 00:57:17.453893 containerd[2132]: time="2026-03-07T00:57:17.452457478Z" level=info msg="TearDown network for sandbox \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\" successfully" Mar 7 00:57:17.453893 containerd[2132]: time="2026-03-07T00:57:17.452495657Z" level=info msg="StopPodSandbox for \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\" returns successfully" Mar 7 00:57:17.453893 containerd[2132]: time="2026-03-07T00:57:17.453505206Z" level=info msg="RemovePodSandbox for \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\"" Mar 7 00:57:17.453893 containerd[2132]: time="2026-03-07T00:57:17.453563723Z" level=info msg="Forcibly stopping sandbox \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\"" Mar 7 00:57:17.615237 containerd[2132]: 2026-03-07 00:57:17.543 [WARNING][6947] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"78cfc558-a091-4954-aac1-f01bb0fadc54", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.March, 7, 0, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-232", ContainerID:"ab8bdf5f38934d12a9cbe8ceac22a35ff0b2cb8db69e8b7f14c628f13e4c69a2", Pod:"coredns-674b8bbfcf-z42qv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califafedb475a6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 7 00:57:17.615237 containerd[2132]: 2026-03-07 00:57:17.544 
[INFO][6947] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Mar 7 00:57:17.615237 containerd[2132]: 2026-03-07 00:57:17.544 [INFO][6947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" iface="eth0" netns="" Mar 7 00:57:17.615237 containerd[2132]: 2026-03-07 00:57:17.544 [INFO][6947] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Mar 7 00:57:17.615237 containerd[2132]: 2026-03-07 00:57:17.544 [INFO][6947] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Mar 7 00:57:17.615237 containerd[2132]: 2026-03-07 00:57:17.588 [INFO][6954] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" HandleID="k8s-pod-network.2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" Mar 7 00:57:17.615237 containerd[2132]: 2026-03-07 00:57:17.588 [INFO][6954] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 7 00:57:17.615237 containerd[2132]: 2026-03-07 00:57:17.589 [INFO][6954] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 7 00:57:17.615237 containerd[2132]: 2026-03-07 00:57:17.604 [WARNING][6954] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" HandleID="k8s-pod-network.2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" Mar 7 00:57:17.615237 containerd[2132]: 2026-03-07 00:57:17.604 [INFO][6954] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" HandleID="k8s-pod-network.2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Workload="ip--172--31--21--232-k8s-coredns--674b8bbfcf--z42qv-eth0" Mar 7 00:57:17.615237 containerd[2132]: 2026-03-07 00:57:17.607 [INFO][6954] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 7 00:57:17.615237 containerd[2132]: 2026-03-07 00:57:17.611 [INFO][6947] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1" Mar 7 00:57:17.616136 containerd[2132]: time="2026-03-07T00:57:17.615256870Z" level=info msg="TearDown network for sandbox \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\" successfully" Mar 7 00:57:17.622327 containerd[2132]: time="2026-03-07T00:57:17.622052807Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Mar 7 00:57:17.622327 containerd[2132]: time="2026-03-07T00:57:17.622155554Z" level=info msg="RemovePodSandbox \"2f35b645a980a57a94d8e73c69bc66a97446c3f3378b702fa889e615cd954ee1\" returns successfully" Mar 7 00:57:20.814453 systemd[1]: Started sshd@23-172.31.21.232:22-20.161.92.111:33766.service - OpenSSH per-connection server daemon (20.161.92.111:33766). 
Mar 7 00:57:21.340511 sshd[6961]: Accepted publickey for core from 20.161.92.111 port 33766 ssh2: RSA SHA256:CACtkjS64SwL0ouDnrWRH1vlyxIcwr6xT7re/CsaoWw Mar 7 00:57:21.344252 sshd[6961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 7 00:57:21.353043 systemd-logind[2105]: New session 24 of user core. Mar 7 00:57:21.360803 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 7 00:57:21.827324 sshd[6961]: pam_unix(sshd:session): session closed for user core Mar 7 00:57:21.833142 systemd[1]: sshd@23-172.31.21.232:22-20.161.92.111:33766.service: Deactivated successfully. Mar 7 00:57:21.842815 systemd[1]: session-24.scope: Deactivated successfully. Mar 7 00:57:21.845673 systemd-logind[2105]: Session 24 logged out. Waiting for processes to exit. Mar 7 00:57:21.848071 systemd-logind[2105]: Removed session 24. Mar 7 00:57:35.962776 containerd[2132]: time="2026-03-07T00:57:35.962635999Z" level=info msg="shim disconnected" id=27443c9cf4d44d66933d807315e566724ada3cac6bffaddb9024409722532062 namespace=k8s.io Mar 7 00:57:35.964236 containerd[2132]: time="2026-03-07T00:57:35.962735829Z" level=warning msg="cleaning up after shim disconnected" id=27443c9cf4d44d66933d807315e566724ada3cac6bffaddb9024409722532062 namespace=k8s.io Mar 7 00:57:35.964236 containerd[2132]: time="2026-03-07T00:57:35.963601509Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:57:35.978122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27443c9cf4d44d66933d807315e566724ada3cac6bffaddb9024409722532062-rootfs.mount: Deactivated successfully. 
Mar 7 00:57:35.990769 containerd[2132]: time="2026-03-07T00:57:35.990688354Z" level=warning msg="cleanup warnings time=\"2026-03-07T00:57:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 00:57:36.392511 containerd[2132]: time="2026-03-07T00:57:36.392343144Z" level=info msg="shim disconnected" id=24b88711913ed4234104103c7deed704d1e662acca7ef50598a9e25250e1d7c8 namespace=k8s.io Mar 7 00:57:36.392511 containerd[2132]: time="2026-03-07T00:57:36.392448424Z" level=warning msg="cleaning up after shim disconnected" id=24b88711913ed4234104103c7deed704d1e662acca7ef50598a9e25250e1d7c8 namespace=k8s.io Mar 7 00:57:36.394327 containerd[2132]: time="2026-03-07T00:57:36.392469603Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:57:36.401391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24b88711913ed4234104103c7deed704d1e662acca7ef50598a9e25250e1d7c8-rootfs.mount: Deactivated successfully. 
Mar 7 00:57:36.567065 kubelet[3603]: I0307 00:57:36.567006 3603 scope.go:117] "RemoveContainer" containerID="24b88711913ed4234104103c7deed704d1e662acca7ef50598a9e25250e1d7c8" Mar 7 00:57:36.572819 kubelet[3603]: I0307 00:57:36.572765 3603 scope.go:117] "RemoveContainer" containerID="27443c9cf4d44d66933d807315e566724ada3cac6bffaddb9024409722532062" Mar 7 00:57:36.574300 containerd[2132]: time="2026-03-07T00:57:36.574240756Z" level=info msg="CreateContainer within sandbox \"0ea7598bee3ac8c5cb6b183e9534c64840bbfcc24c0f81350c1b7eb0077467da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 7 00:57:36.576680 containerd[2132]: time="2026-03-07T00:57:36.576419809Z" level=info msg="CreateContainer within sandbox \"3e28f4d7044bda53ff991290ac458dfdf8eea429860ff884e1325c6c9ddfb27f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Mar 7 00:57:36.631741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount525404063.mount: Deactivated successfully. 
Mar 7 00:57:36.655130 containerd[2132]: time="2026-03-07T00:57:36.654303777Z" level=info msg="CreateContainer within sandbox \"0ea7598bee3ac8c5cb6b183e9534c64840bbfcc24c0f81350c1b7eb0077467da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ad7a2f423d79770f1722c3496076ccbfabf620cf56c88a6f5f5c0f447be5331a\"" Mar 7 00:57:36.659991 containerd[2132]: time="2026-03-07T00:57:36.656345433Z" level=info msg="StartContainer for \"ad7a2f423d79770f1722c3496076ccbfabf620cf56c88a6f5f5c0f447be5331a\"" Mar 7 00:57:36.678181 containerd[2132]: time="2026-03-07T00:57:36.675869530Z" level=info msg="CreateContainer within sandbox \"3e28f4d7044bda53ff991290ac458dfdf8eea429860ff884e1325c6c9ddfb27f\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"bc4195611fb2022227898a963ba539e81bd40f7575baed5761bf9b2fe9e56aa2\"" Mar 7 00:57:36.678659 containerd[2132]: time="2026-03-07T00:57:36.678613778Z" level=info msg="StartContainer for \"bc4195611fb2022227898a963ba539e81bd40f7575baed5761bf9b2fe9e56aa2\"" Mar 7 00:57:36.846337 containerd[2132]: time="2026-03-07T00:57:36.846251756Z" level=info msg="StartContainer for \"bc4195611fb2022227898a963ba539e81bd40f7575baed5761bf9b2fe9e56aa2\" returns successfully" Mar 7 00:57:36.862291 containerd[2132]: time="2026-03-07T00:57:36.862196853Z" level=info msg="StartContainer for \"ad7a2f423d79770f1722c3496076ccbfabf620cf56c88a6f5f5c0f447be5331a\" returns successfully" Mar 7 00:57:41.454033 containerd[2132]: time="2026-03-07T00:57:41.453601880Z" level=info msg="shim disconnected" id=c40fa14cb24556da1a3650ec7d3e382bda38908b03a6161924f1ccf55c27710b namespace=k8s.io Mar 7 00:57:41.454033 containerd[2132]: time="2026-03-07T00:57:41.453705035Z" level=warning msg="cleaning up after shim disconnected" id=c40fa14cb24556da1a3650ec7d3e382bda38908b03a6161924f1ccf55c27710b namespace=k8s.io Mar 7 00:57:41.454033 containerd[2132]: time="2026-03-07T00:57:41.453728219Z" level=info msg="cleaning up dead shim" 
namespace=k8s.io Mar 7 00:57:41.461344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c40fa14cb24556da1a3650ec7d3e382bda38908b03a6161924f1ccf55c27710b-rootfs.mount: Deactivated successfully. Mar 7 00:57:41.480093 containerd[2132]: time="2026-03-07T00:57:41.479872353Z" level=warning msg="cleanup warnings time=\"2026-03-07T00:57:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 7 00:57:41.600547 kubelet[3603]: I0307 00:57:41.599497 3603 scope.go:117] "RemoveContainer" containerID="c40fa14cb24556da1a3650ec7d3e382bda38908b03a6161924f1ccf55c27710b" Mar 7 00:57:41.618006 containerd[2132]: time="2026-03-07T00:57:41.616726502Z" level=info msg="CreateContainer within sandbox \"7f322f356d1a9dd55e9bfe8d32c65f912fe1a12d12c42be904b9c4ccc3558ac3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 7 00:57:41.646098 containerd[2132]: time="2026-03-07T00:57:41.646040172Z" level=info msg="CreateContainer within sandbox \"7f322f356d1a9dd55e9bfe8d32c65f912fe1a12d12c42be904b9c4ccc3558ac3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9700e2d1dabc2ecb24ced6dbb89b15a4f84f18f1a80bb1c7a03de7e983fb81a5\"" Mar 7 00:57:41.647553 containerd[2132]: time="2026-03-07T00:57:41.647500798Z" level=info msg="StartContainer for \"9700e2d1dabc2ecb24ced6dbb89b15a4f84f18f1a80bb1c7a03de7e983fb81a5\"" Mar 7 00:57:41.777418 containerd[2132]: time="2026-03-07T00:57:41.777317847Z" level=info msg="StartContainer for \"9700e2d1dabc2ecb24ced6dbb89b15a4f84f18f1a80bb1c7a03de7e983fb81a5\" returns successfully" Mar 7 00:57:42.455670 systemd[1]: run-containerd-runc-k8s.io-9700e2d1dabc2ecb24ced6dbb89b15a4f84f18f1a80bb1c7a03de7e983fb81a5-runc.85WhzM.mount: Deactivated successfully. 
Mar 7 00:57:45.060214 kubelet[3603]: E0307 00:57:45.060127 3603 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-232?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 7 00:57:48.403093 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc4195611fb2022227898a963ba539e81bd40f7575baed5761bf9b2fe9e56aa2-rootfs.mount: Deactivated successfully. Mar 7 00:57:48.413189 containerd[2132]: time="2026-03-07T00:57:48.413064180Z" level=info msg="shim disconnected" id=bc4195611fb2022227898a963ba539e81bd40f7575baed5761bf9b2fe9e56aa2 namespace=k8s.io Mar 7 00:57:48.413189 containerd[2132]: time="2026-03-07T00:57:48.413140995Z" level=warning msg="cleaning up after shim disconnected" id=bc4195611fb2022227898a963ba539e81bd40f7575baed5761bf9b2fe9e56aa2 namespace=k8s.io Mar 7 00:57:48.413189 containerd[2132]: time="2026-03-07T00:57:48.413163086Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 7 00:57:48.627036 kubelet[3603]: I0307 00:57:48.625661 3603 scope.go:117] "RemoveContainer" containerID="27443c9cf4d44d66933d807315e566724ada3cac6bffaddb9024409722532062" Mar 7 00:57:48.627036 kubelet[3603]: I0307 00:57:48.626284 3603 scope.go:117] "RemoveContainer" containerID="bc4195611fb2022227898a963ba539e81bd40f7575baed5761bf9b2fe9e56aa2" Mar 7 00:57:48.627036 kubelet[3603]: E0307 00:57:48.626504 3603 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6bf85f8dd-hsqdm_tigera-operator(2eec4f92-c7af-4304-9957-8515f51f1e00)\"" pod="tigera-operator/tigera-operator-6bf85f8dd-hsqdm" podUID="2eec4f92-c7af-4304-9957-8515f51f1e00" Mar 7 00:57:48.629210 containerd[2132]: time="2026-03-07T00:57:48.629158699Z" level=info msg="RemoveContainer for 
\"27443c9cf4d44d66933d807315e566724ada3cac6bffaddb9024409722532062\"" Mar 7 00:57:48.636912 containerd[2132]: time="2026-03-07T00:57:48.636832610Z" level=info msg="RemoveContainer for \"27443c9cf4d44d66933d807315e566724ada3cac6bffaddb9024409722532062\" returns successfully" Mar 7 00:57:55.061453 kubelet[3603]: E0307 00:57:55.061348 3603 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.232:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-232?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"