Mar 14 00:13:56.282398 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 14 00:13:56.282448 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 13 22:32:52 -00 2026
Mar 14 00:13:56.282475 kernel: KASLR disabled due to lack of seed
Mar 14 00:13:56.282494 kernel: efi: EFI v2.7 by EDK II
Mar 14 00:13:56.282511 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Mar 14 00:13:56.282528 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:13:56.282547 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 14 00:13:56.282564 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 14 00:13:56.282581 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 14 00:13:56.282598 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 14 00:13:56.282621 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 14 00:13:56.282637 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 14 00:13:56.282654 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 14 00:13:56.282670 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 14 00:13:56.282690 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 14 00:13:56.282713 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 14 00:13:56.282732 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 14 00:13:56.282783 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 14 00:13:56.282806 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 14 00:13:56.282826 kernel: printk: bootconsole [uart0] enabled
Mar 14 00:13:56.282844 kernel: NUMA: Failed to initialise from firmware
Mar 14 00:13:56.282863 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 14 00:13:56.282881 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Mar 14 00:13:56.282898 kernel: Zone ranges:
Mar 14 00:13:56.282916 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 14 00:13:56.282933 kernel: DMA32 empty
Mar 14 00:13:56.282959 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 14 00:13:56.282977 kernel: Movable zone start for each node
Mar 14 00:13:56.282994 kernel: Early memory node ranges
Mar 14 00:13:56.283032 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 14 00:13:56.283053 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 14 00:13:56.283070 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 14 00:13:56.283087 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 14 00:13:56.283106 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 14 00:13:56.283124 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 14 00:13:56.283142 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 14 00:13:56.283160 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 14 00:13:56.283178 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 14 00:13:56.283203 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 14 00:13:56.283222 kernel: psci: probing for conduit method from ACPI.
Mar 14 00:13:56.283247 kernel: psci: PSCIv1.0 detected in firmware.
Mar 14 00:13:56.283268 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 14 00:13:56.283286 kernel: psci: Trusted OS migration not required
Mar 14 00:13:56.283309 kernel: psci: SMC Calling Convention v1.1
Mar 14 00:13:56.283327 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Mar 14 00:13:56.283346 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 14 00:13:56.283364 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 14 00:13:56.283383 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 14 00:13:56.283405 kernel: Detected PIPT I-cache on CPU0
Mar 14 00:13:56.283423 kernel: CPU features: detected: GIC system register CPU interface
Mar 14 00:13:56.283441 kernel: CPU features: detected: Spectre-v2
Mar 14 00:13:56.283460 kernel: CPU features: detected: Spectre-v3a
Mar 14 00:13:56.283478 kernel: CPU features: detected: Spectre-BHB
Mar 14 00:13:56.283496 kernel: CPU features: detected: ARM erratum 1742098
Mar 14 00:13:56.283519 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 14 00:13:56.283538 kernel: alternatives: applying boot alternatives
Mar 14 00:13:56.283558 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:13:56.283578 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:13:56.283596 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:13:56.283617 kernel: Fallback order for Node 0: 0
Mar 14 00:13:56.283638 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Mar 14 00:13:56.283656 kernel: Policy zone: Normal
Mar 14 00:13:56.283675 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:13:56.283694 kernel: software IO TLB: area num 2.
Mar 14 00:13:56.283713 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 14 00:13:56.283740 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Mar 14 00:13:56.285861 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:13:56.285887 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:13:56.285910 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:13:56.285931 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:13:56.285951 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:13:56.285972 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:13:56.285993 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:13:56.286013 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:13:56.286033 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 14 00:13:56.286052 kernel: GICv3: 96 SPIs implemented
Mar 14 00:13:56.286088 kernel: GICv3: 0 Extended SPIs implemented
Mar 14 00:13:56.286108 kernel: Root IRQ handler: gic_handle_irq
Mar 14 00:13:56.286127 kernel: GICv3: GICv3 features: 16 PPIs
Mar 14 00:13:56.286148 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 14 00:13:56.286168 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 14 00:13:56.286190 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 14 00:13:56.286215 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Mar 14 00:13:56.286235 kernel: GICv3: using LPI property table @0x00000004000d0000
Mar 14 00:13:56.286256 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 14 00:13:56.286278 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Mar 14 00:13:56.286298 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:13:56.286318 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 14 00:13:56.286349 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 14 00:13:56.286368 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 14 00:13:56.286388 kernel: Console: colour dummy device 80x25
Mar 14 00:13:56.286409 kernel: printk: console [tty1] enabled
Mar 14 00:13:56.286427 kernel: ACPI: Core revision 20230628
Mar 14 00:13:56.286447 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 14 00:13:56.286466 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:13:56.286486 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:13:56.286505 kernel: landlock: Up and running.
Mar 14 00:13:56.286529 kernel: SELinux: Initializing.
Mar 14 00:13:56.286548 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:13:56.286567 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:13:56.286587 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:13:56.286608 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:13:56.286626 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:13:56.286645 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:13:56.286665 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 14 00:13:56.286684 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 14 00:13:56.286709 kernel: Remapping and enabling EFI services.
Mar 14 00:13:56.286728 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:13:56.286778 kernel: Detected PIPT I-cache on CPU1
Mar 14 00:13:56.287857 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 14 00:13:56.287880 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Mar 14 00:13:56.287899 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 14 00:13:56.287919 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:13:56.287938 kernel: SMP: Total of 2 processors activated.
Mar 14 00:13:56.287957 kernel: CPU features: detected: 32-bit EL0 Support
Mar 14 00:13:56.287988 kernel: CPU features: detected: 32-bit EL1 Support
Mar 14 00:13:56.288008 kernel: CPU features: detected: CRC32 instructions
Mar 14 00:13:56.288027 kernel: CPU: All CPU(s) started at EL1
Mar 14 00:13:56.288058 kernel: alternatives: applying system-wide alternatives
Mar 14 00:13:56.288082 kernel: devtmpfs: initialized
Mar 14 00:13:56.288101 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:13:56.288121 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:13:56.288140 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:13:56.288160 kernel: SMBIOS 3.0.0 present.
Mar 14 00:13:56.288184 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 14 00:13:56.288204 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:13:56.288223 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 14 00:13:56.288243 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 14 00:13:56.288264 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 14 00:13:56.288285 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:13:56.288304 kernel: audit: type=2000 audit(0.302:1): state=initialized audit_enabled=0 res=1
Mar 14 00:13:56.288324 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:13:56.288350 kernel: cpuidle: using governor menu
Mar 14 00:13:56.288370 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 14 00:13:56.288390 kernel: ASID allocator initialised with 65536 entries
Mar 14 00:13:56.288409 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:13:56.288428 kernel: Serial: AMBA PL011 UART driver
Mar 14 00:13:56.288447 kernel: Modules: 17488 pages in range for non-PLT usage
Mar 14 00:13:56.288466 kernel: Modules: 509008 pages in range for PLT usage
Mar 14 00:13:56.288485 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:13:56.288504 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:13:56.288528 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 14 00:13:56.288549 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 14 00:13:56.288569 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:13:56.288588 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:13:56.288607 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 14 00:13:56.288626 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 14 00:13:56.288645 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:13:56.288664 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:13:56.288684 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:13:56.288708 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:13:56.288728 kernel: ACPI: Interpreter enabled
Mar 14 00:13:56.289811 kernel: ACPI: Using GIC for interrupt routing
Mar 14 00:13:56.289862 kernel: ACPI: MCFG table detected, 1 entries
Mar 14 00:13:56.289885 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Mar 14 00:13:56.290248 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:13:56.290503 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 14 00:13:56.290733 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 14 00:13:56.293208 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Mar 14 00:13:56.293461 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Mar 14 00:13:56.293495 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 14 00:13:56.293516 kernel: acpiphp: Slot [1] registered
Mar 14 00:13:56.293536 kernel: acpiphp: Slot [2] registered
Mar 14 00:13:56.293556 kernel: acpiphp: Slot [3] registered
Mar 14 00:13:56.293576 kernel: acpiphp: Slot [4] registered
Mar 14 00:13:56.293594 kernel: acpiphp: Slot [5] registered
Mar 14 00:13:56.293627 kernel: acpiphp: Slot [6] registered
Mar 14 00:13:56.293647 kernel: acpiphp: Slot [7] registered
Mar 14 00:13:56.293666 kernel: acpiphp: Slot [8] registered
Mar 14 00:13:56.293686 kernel: acpiphp: Slot [9] registered
Mar 14 00:13:56.293705 kernel: acpiphp: Slot [10] registered
Mar 14 00:13:56.293724 kernel: acpiphp: Slot [11] registered
Mar 14 00:13:56.293743 kernel: acpiphp: Slot [12] registered
Mar 14 00:13:56.294270 kernel: acpiphp: Slot [13] registered
Mar 14 00:13:56.294292 kernel: acpiphp: Slot [14] registered
Mar 14 00:13:56.294311 kernel: acpiphp: Slot [15] registered
Mar 14 00:13:56.294341 kernel: acpiphp: Slot [16] registered
Mar 14 00:13:56.294360 kernel: acpiphp: Slot [17] registered
Mar 14 00:13:56.294379 kernel: acpiphp: Slot [18] registered
Mar 14 00:13:56.294399 kernel: acpiphp: Slot [19] registered
Mar 14 00:13:56.294418 kernel: acpiphp: Slot [20] registered
Mar 14 00:13:56.294437 kernel: acpiphp: Slot [21] registered
Mar 14 00:13:56.294457 kernel: acpiphp: Slot [22] registered
Mar 14 00:13:56.294475 kernel: acpiphp: Slot [23] registered
Mar 14 00:13:56.294494 kernel: acpiphp: Slot [24] registered
Mar 14 00:13:56.294519 kernel: acpiphp: Slot [25] registered
Mar 14 00:13:56.294538 kernel: acpiphp: Slot [26] registered
Mar 14 00:13:56.294558 kernel: acpiphp: Slot [27] registered
Mar 14 00:13:56.294577 kernel: acpiphp: Slot [28] registered
Mar 14 00:13:56.294595 kernel: acpiphp: Slot [29] registered
Mar 14 00:13:56.294614 kernel: acpiphp: Slot [30] registered
Mar 14 00:13:56.294633 kernel: acpiphp: Slot [31] registered
Mar 14 00:13:56.294652 kernel: PCI host bridge to bus 0000:00
Mar 14 00:13:56.294986 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 14 00:13:56.295263 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 14 00:13:56.295471 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 14 00:13:56.295672 kernel: pci_bus 0000:00: root bus resource [bus 00]
Mar 14 00:13:56.297091 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 14 00:13:56.297377 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 14 00:13:56.297614 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 14 00:13:56.297922 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 14 00:13:56.298169 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 14 00:13:56.298401 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 14 00:13:56.298669 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 14 00:13:56.303591 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 14 00:13:56.303917 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 14 00:13:56.304164 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 14 00:13:56.304412 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 14 00:13:56.304641 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 14 00:13:56.304954 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 14 00:13:56.305160 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 14 00:13:56.305189 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 14 00:13:56.305211 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 14 00:13:56.305232 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 14 00:13:56.305252 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 14 00:13:56.305283 kernel: iommu: Default domain type: Translated
Mar 14 00:13:56.305304 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 14 00:13:56.305324 kernel: efivars: Registered efivars operations
Mar 14 00:13:56.305344 kernel: vgaarb: loaded
Mar 14 00:13:56.305364 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 14 00:13:56.305384 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:13:56.305403 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:13:56.305423 kernel: pnp: PnP ACPI init
Mar 14 00:13:56.305685 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 14 00:13:56.305732 kernel: pnp: PnP ACPI: found 1 devices
Mar 14 00:13:56.305793 kernel: NET: Registered PF_INET protocol family
Mar 14 00:13:56.305818 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:13:56.305839 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:13:56.305860 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:13:56.305880 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:13:56.305900 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 14 00:13:56.305921 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 14 00:13:56.305949 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:13:56.305971 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:13:56.305991 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:13:56.306011 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:13:56.306030 kernel: kvm [1]: HYP mode not available
Mar 14 00:13:56.306050 kernel: Initialise system trusted keyrings
Mar 14 00:13:56.306070 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 14 00:13:56.306090 kernel: Key type asymmetric registered
Mar 14 00:13:56.306110 kernel: Asymmetric key parser 'x509' registered
Mar 14 00:13:56.306135 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 14 00:13:56.306155 kernel: io scheduler mq-deadline registered
Mar 14 00:13:56.306175 kernel: io scheduler kyber registered
Mar 14 00:13:56.306194 kernel: io scheduler bfq registered
Mar 14 00:13:56.306464 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 14 00:13:56.306497 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 14 00:13:56.306517 kernel: ACPI: button: Power Button [PWRB]
Mar 14 00:13:56.306538 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 14 00:13:56.306557 kernel: ACPI: button: Sleep Button [SLPB]
Mar 14 00:13:56.306583 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:13:56.306603 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 14 00:13:56.308977 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 14 00:13:56.309026 kernel: printk: console [ttyS0] disabled
Mar 14 00:13:56.309047 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 14 00:13:56.309067 kernel: printk: console [ttyS0] enabled
Mar 14 00:13:56.309089 kernel: printk: bootconsole [uart0] disabled
Mar 14 00:13:56.309110 kernel: thunder_xcv, ver 1.0
Mar 14 00:13:56.309130 kernel: thunder_bgx, ver 1.0
Mar 14 00:13:56.309160 kernel: nicpf, ver 1.0
Mar 14 00:13:56.309182 kernel: nicvf, ver 1.0
Mar 14 00:13:56.309455 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 14 00:13:56.309684 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-14T00:13:55 UTC (1773447235)
Mar 14 00:13:56.309713 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 14 00:13:56.309733 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 14 00:13:56.309810 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 14 00:13:56.309860 kernel: watchdog: Hard watchdog permanently disabled
Mar 14 00:13:56.309896 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:13:56.309917 kernel: Segment Routing with IPv6
Mar 14 00:13:56.309937 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:13:56.309957 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:13:56.309976 kernel: Key type dns_resolver registered
Mar 14 00:13:56.309996 kernel: registered taskstats version 1
Mar 14 00:13:56.310016 kernel: Loading compiled-in X.509 certificates
Mar 14 00:13:56.310036 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 16e13a4d63c54048487d2b18c824fa4694264505'
Mar 14 00:13:56.310055 kernel: Key type .fscrypt registered
Mar 14 00:13:56.310080 kernel: Key type fscrypt-provisioning registered
Mar 14 00:13:56.310100 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:13:56.310119 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:13:56.310138 kernel: ima: No architecture policies found
Mar 14 00:13:56.310158 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 14 00:13:56.310177 kernel: clk: Disabling unused clocks
Mar 14 00:13:56.310197 kernel: Freeing unused kernel memory: 39424K
Mar 14 00:13:56.310217 kernel: Run /init as init process
Mar 14 00:13:56.310236 kernel: with arguments:
Mar 14 00:13:56.310261 kernel: /init
Mar 14 00:13:56.310282 kernel: with environment:
Mar 14 00:13:56.310301 kernel: HOME=/
Mar 14 00:13:56.310321 kernel: TERM=linux
Mar 14 00:13:56.310347 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:13:56.310373 systemd[1]: Detected virtualization amazon.
Mar 14 00:13:56.310396 systemd[1]: Detected architecture arm64.
Mar 14 00:13:56.310417 systemd[1]: Running in initrd.
Mar 14 00:13:56.310444 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:13:56.310465 systemd[1]: Hostname set to .
Mar 14 00:13:56.310487 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:13:56.310508 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:13:56.310529 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:13:56.310550 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:13:56.310574 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:13:56.310596 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:13:56.310623 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:13:56.310644 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:13:56.310669 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:13:56.310692 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:13:56.310714 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:13:56.310736 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:13:56.314241 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:13:56.314277 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:13:56.314300 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:13:56.314322 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:13:56.314344 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:13:56.314365 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:13:56.314386 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:13:56.314408 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:13:56.314429 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:13:56.314458 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:13:56.314480 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:13:56.314501 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:13:56.314522 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:13:56.314544 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:13:56.314565 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:13:56.314587 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:13:56.314608 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:13:56.314631 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:13:56.314659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:56.314681 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:13:56.314703 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:13:56.314726 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:13:56.314843 systemd-journald[252]: Collecting audit messages is disabled.
Mar 14 00:13:56.314906 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:13:56.314930 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:13:56.314952 kernel: Bridge firewalling registered
Mar 14 00:13:56.314981 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:56.315022 systemd-journald[252]: Journal started
Mar 14 00:13:56.315067 systemd-journald[252]: Runtime Journal (/run/log/journal/ec22b15d3135b214081462ed4681642a) is 8.0M, max 75.3M, 67.3M free.
Mar 14 00:13:56.248772 systemd-modules-load[253]: Inserted module 'overlay'
Mar 14 00:13:56.327978 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:13:56.317880 systemd-modules-load[253]: Inserted module 'br_netfilter'
Mar 14 00:13:56.334847 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:13:56.339808 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:13:56.362225 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:56.370090 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:13:56.382152 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:13:56.398471 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:13:56.425180 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:56.435877 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:13:56.439056 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:13:56.456140 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:13:56.462259 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:13:56.478088 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:13:56.507187 dracut-cmdline[290]: dracut-dracut-053
Mar 14 00:13:56.519694 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:13:56.564205 systemd-resolved[292]: Positive Trust Anchors:
Mar 14 00:13:56.564239 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:13:56.564303 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:13:56.695816 kernel: SCSI subsystem initialized
Mar 14 00:13:56.703875 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:13:56.716797 kernel: iscsi: registered transport (tcp)
Mar 14 00:13:56.739253 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:13:56.739327 kernel: QLogic iSCSI HBA Driver
Mar 14 00:13:56.818797 kernel: random: crng init done
Mar 14 00:13:56.817164 systemd-resolved[292]: Defaulting to hostname 'linux'.
Mar 14 00:13:56.821087 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:13:56.826460 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:13:56.852675 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:13:56.866044 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:13:56.910235 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:13:56.910315 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:13:56.912170 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:13:56.981808 kernel: raid6: neonx8 gen() 6807 MB/s
Mar 14 00:13:56.998797 kernel: raid6: neonx4 gen() 6622 MB/s
Mar 14 00:13:57.015789 kernel: raid6: neonx2 gen() 5503 MB/s
Mar 14 00:13:57.032783 kernel: raid6: neonx1 gen() 3975 MB/s
Mar 14 00:13:57.049782 kernel: raid6: int64x8 gen() 3829 MB/s
Mar 14 00:13:57.066782 kernel: raid6: int64x4 gen() 3726 MB/s
Mar 14 00:13:57.083782 kernel: raid6: int64x2 gen() 3609 MB/s
Mar 14 00:13:57.101834 kernel: raid6: int64x1 gen() 2765 MB/s
Mar 14 00:13:57.101878 kernel: raid6: using algorithm neonx8 gen() 6807 MB/s
Mar 14 00:13:57.120782 kernel: raid6: .... xor() 4745 MB/s, rmw enabled
Mar 14 00:13:57.120820 kernel: raid6: using neon recovery algorithm
Mar 14 00:13:57.128788 kernel: xor: measuring software checksum speed
Mar 14 00:13:57.131208 kernel: 8regs : 10054 MB/sec
Mar 14 00:13:57.131241 kernel: 32regs : 11913 MB/sec
Mar 14 00:13:57.132527 kernel: arm64_neon : 9386 MB/sec
Mar 14 00:13:57.132559 kernel: xor: using function: 32regs (11913 MB/sec)
Mar 14 00:13:57.217801 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:13:57.236876 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:13:57.248071 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:13:57.288566 systemd-udevd[474]: Using default interface naming scheme 'v255'.
Mar 14 00:13:57.296873 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:13:57.315408 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:13:57.352451 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation
Mar 14 00:13:57.408234 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:13:57.419073 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:13:57.533513 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:13:57.545121 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:13:57.589165 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:13:57.592530 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:13:57.595423 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:13:57.598121 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:13:57.615135 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:13:57.655826 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:13:57.735939 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 14 00:13:57.736004 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 14 00:13:57.743042 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 14 00:13:57.751952 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 14 00:13:57.742085 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:13:57.742323 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:57.746397 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:57.749039 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:13:57.749301 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:57.752047 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:57.778956 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:57.798069 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 14 00:13:57.798124 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 14 00:13:57.802800 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:39:a2:30:3a:ef
Mar 14 00:13:57.809783 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 14 00:13:57.814706 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:57.825247 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:13:57.825288 kernel: GPT:9289727 != 33554431
Mar 14 00:13:57.825323 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:13:57.827831 kernel: GPT:9289727 != 33554431
Mar 14 00:13:57.827896 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:13:57.828780 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:57.828879 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:57.843021 (udev-worker)[528]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:13:57.868674 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:57.919783 kernel: BTRFS: device fsid df62721e-ebc0-40bc-8956-1227b067a773 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (544)
Mar 14 00:13:57.949854 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (545)
Mar 14 00:13:58.052045 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 14 00:13:58.070164 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 14 00:13:58.084279 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 14 00:13:58.090977 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 14 00:13:58.106331 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 14 00:13:58.126104 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:13:58.143802 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:58.144138 disk-uuid[666]: Primary Header is updated.
Mar 14 00:13:58.144138 disk-uuid[666]: Secondary Entries is updated.
Mar 14 00:13:58.144138 disk-uuid[666]: Secondary Header is updated.
Mar 14 00:13:58.172777 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:59.186327 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:59.187553 disk-uuid[667]: The operation has completed successfully.
Mar 14 00:13:59.363822 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:13:59.365123 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:13:59.428050 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:13:59.446686 sh[925]: Success
Mar 14 00:13:59.474165 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 14 00:13:59.600221 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:13:59.606101 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:13:59.612934 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:13:59.654102 kernel: BTRFS info (device dm-0): first mount of filesystem df62721e-ebc0-40bc-8956-1227b067a773
Mar 14 00:13:59.654165 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:59.656186 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:13:59.657641 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:13:59.658888 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:13:59.688813 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 14 00:13:59.691253 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:13:59.695744 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:13:59.708014 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:13:59.716048 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:13:59.750323 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:59.750395 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:59.752307 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:13:59.772802 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:13:59.794103 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:13:59.798405 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:59.808304 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:13:59.827170 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:13:59.917477 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:13:59.931065 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:13:59.994307 systemd-networkd[1118]: lo: Link UP
Mar 14 00:13:59.994329 systemd-networkd[1118]: lo: Gained carrier
Mar 14 00:13:59.998314 systemd-networkd[1118]: Enumeration completed
Mar 14 00:13:59.998455 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:13:59.999955 systemd-networkd[1118]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:59.999963 systemd-networkd[1118]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:14:00.004928 systemd[1]: Reached target network.target - Network.
Mar 14 00:14:00.025099 systemd-networkd[1118]: eth0: Link UP
Mar 14 00:14:00.025108 systemd-networkd[1118]: eth0: Gained carrier
Mar 14 00:14:00.025127 systemd-networkd[1118]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:14:00.050910 systemd-networkd[1118]: eth0: DHCPv4 address 172.31.26.130/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 14 00:14:00.079891 ignition[1044]: Ignition 2.19.0
Mar 14 00:14:00.080403 ignition[1044]: Stage: fetch-offline
Mar 14 00:14:00.082002 ignition[1044]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:14:00.082026 ignition[1044]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:14:00.083570 ignition[1044]: Ignition finished successfully
Mar 14 00:14:00.092178 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:14:00.119132 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:14:00.143519 ignition[1126]: Ignition 2.19.0
Mar 14 00:14:00.144070 ignition[1126]: Stage: fetch
Mar 14 00:14:00.144718 ignition[1126]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:14:00.144743 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:14:00.144939 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:14:00.165940 ignition[1126]: PUT result: OK
Mar 14 00:14:00.169714 ignition[1126]: parsed url from cmdline: ""
Mar 14 00:14:00.169737 ignition[1126]: no config URL provided
Mar 14 00:14:00.169783 ignition[1126]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:14:00.169813 ignition[1126]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:14:00.169846 ignition[1126]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:14:00.173993 ignition[1126]: PUT result: OK
Mar 14 00:14:00.174070 ignition[1126]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 14 00:14:00.179350 ignition[1126]: GET result: OK
Mar 14 00:14:00.180091 ignition[1126]: parsing config with SHA512: 06a296bcb041fdad90de39df4e87f672066a112ab9c431bdee13ba36f7ffeadc9adcad4d13feabf67baf41c0c7942c0e5c83aef7433e0ef6d0cc33dfd4be80c5
Mar 14 00:14:00.193790 unknown[1126]: fetched base config from "system"
Mar 14 00:14:00.193821 unknown[1126]: fetched base config from "system"
Mar 14 00:14:00.193846 unknown[1126]: fetched user config from "aws"
Mar 14 00:14:00.199017 ignition[1126]: fetch: fetch complete
Mar 14 00:14:00.199031 ignition[1126]: fetch: fetch passed
Mar 14 00:14:00.199136 ignition[1126]: Ignition finished successfully
Mar 14 00:14:00.210852 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:14:00.222056 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:14:00.249349 ignition[1132]: Ignition 2.19.0
Mar 14 00:14:00.249918 ignition[1132]: Stage: kargs
Mar 14 00:14:00.250549 ignition[1132]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:14:00.250585 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:14:00.250803 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:14:00.260416 ignition[1132]: PUT result: OK
Mar 14 00:14:00.266247 ignition[1132]: kargs: kargs passed
Mar 14 00:14:00.266398 ignition[1132]: Ignition finished successfully
Mar 14 00:14:00.271597 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:14:00.286703 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:14:00.311134 ignition[1138]: Ignition 2.19.0
Mar 14 00:14:00.311156 ignition[1138]: Stage: disks
Mar 14 00:14:00.312571 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:14:00.312600 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:14:00.312820 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:14:00.322465 ignition[1138]: PUT result: OK
Mar 14 00:14:00.327112 ignition[1138]: disks: disks passed
Mar 14 00:14:00.327264 ignition[1138]: Ignition finished successfully
Mar 14 00:14:00.329371 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:14:00.332932 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:14:00.336560 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:14:00.348435 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:14:00.348707 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:14:00.357418 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:14:00.370149 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:14:00.416556 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 14 00:14:00.423441 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:14:00.435979 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:14:00.529790 kernel: EXT4-fs (nvme0n1p9): mounted filesystem af566013-4e57-4e7f-9689-a2e15898536d r/w with ordered data mode. Quota mode: none.
Mar 14 00:14:00.530712 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:14:00.535194 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:14:00.556184 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:14:00.564542 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:14:00.567445 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:14:00.567526 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:14:00.567572 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:14:00.591795 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1165)
Mar 14 00:14:00.596294 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:14:00.596339 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:14:00.596380 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:14:00.607271 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:14:00.614887 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:14:00.618026 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:14:00.622481 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:14:00.728302 initrd-setup-root[1190]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:14:00.738498 initrd-setup-root[1197]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:14:00.747690 initrd-setup-root[1204]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:14:00.756787 initrd-setup-root[1211]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:14:00.908920 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:14:00.928641 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:14:00.938292 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:14:00.952539 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:14:00.958943 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:14:01.002841 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:14:01.008506 ignition[1278]: INFO : Ignition 2.19.0
Mar 14 00:14:01.008506 ignition[1278]: INFO : Stage: mount
Mar 14 00:14:01.012273 ignition[1278]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:14:01.012273 ignition[1278]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:14:01.017186 ignition[1278]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:14:01.020442 ignition[1278]: INFO : PUT result: OK
Mar 14 00:14:01.024671 ignition[1278]: INFO : mount: mount passed
Mar 14 00:14:01.026910 ignition[1278]: INFO : Ignition finished successfully
Mar 14 00:14:01.028016 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:14:01.039101 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:14:01.057053 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:14:01.084791 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1290)
Mar 14 00:14:01.089668 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:14:01.089731 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:14:01.091144 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:14:01.096792 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:14:01.100609 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:14:01.135704 ignition[1307]: INFO : Ignition 2.19.0
Mar 14 00:14:01.135704 ignition[1307]: INFO : Stage: files
Mar 14 00:14:01.140143 ignition[1307]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:14:01.140143 ignition[1307]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:14:01.140143 ignition[1307]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:14:01.148164 systemd-networkd[1118]: eth0: Gained IPv6LL
Mar 14 00:14:01.151478 ignition[1307]: INFO : PUT result: OK
Mar 14 00:14:01.156240 ignition[1307]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:14:01.159801 ignition[1307]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:14:01.159801 ignition[1307]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:14:01.167804 ignition[1307]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:14:01.171237 ignition[1307]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:14:01.174259 unknown[1307]: wrote ssh authorized keys file for user: core
Mar 14 00:14:01.177464 ignition[1307]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:14:01.183011 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 14 00:14:01.183011 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 14 00:14:01.274917 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:14:01.423522 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 14 00:14:01.430375 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:14:01.430375 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:14:01.430375 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:14:01.430375 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:14:01.430375 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:14:01.430375 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:14:01.430375 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:14:01.430375 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:14:01.430375 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:14:01.430375 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:14:01.430375 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:14:01.430375 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:14:01.430375 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:14:01.430375 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-arm64.raw: attempt #1
Mar 14 00:14:01.898405 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 14 00:14:02.355986 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-arm64.raw"
Mar 14 00:14:02.355986 ignition[1307]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 14 00:14:02.363782 ignition[1307]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:14:02.363782 ignition[1307]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:14:02.363782 ignition[1307]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 14 00:14:02.363782 ignition[1307]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:14:02.363782 ignition[1307]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:14:02.363782 ignition[1307]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:14:02.363782 ignition[1307]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:14:02.363782 ignition[1307]: INFO : files: files passed
Mar 14 00:14:02.363782 ignition[1307]: INFO : Ignition finished successfully
Mar 14 00:14:02.394105 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:14:02.405025 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:14:02.412078 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:14:02.419681 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:14:02.419888 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:14:02.457436 initrd-setup-root-after-ignition[1336]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:14:02.457436 initrd-setup-root-after-ignition[1336]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:14:02.466484 initrd-setup-root-after-ignition[1340]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:14:02.474823 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:14:02.477985 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:14:02.492995 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:14:02.543933 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:14:02.544317 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:14:02.552965 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:14:02.555341 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:14:02.557733 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:14:02.570214 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:14:02.599933 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:14:02.608100 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:14:02.635235 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:14:02.640811 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:14:02.643777 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:14:02.644132 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:14:02.644361 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:14:02.645139 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:14:02.645540 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:14:02.645943 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:14:02.646300 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:14:02.646676 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:14:02.651580 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:14:02.652529 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:14:02.653332 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:14:02.653720 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:14:02.654128 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:14:02.654448 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:14:02.654657 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:14:02.657199 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:14:02.660553 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:14:02.660924 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:14:02.678014 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:14:02.678230 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:14:02.678445 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:14:02.685986 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:14:02.686252 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:14:02.691257 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:14:02.691468 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:14:02.707407 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:14:02.712026 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:14:02.712370 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:14:02.739036 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:14:02.751875 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:14:02.752270 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:14:02.756593 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:14:02.756895 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:14:02.803313 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:14:02.804682 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:14:02.814031 ignition[1360]: INFO : Ignition 2.19.0
Mar 14 00:14:02.817629 ignition[1360]: INFO : Stage: umount
Mar 14 00:14:02.817629 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:14:02.817629 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:14:02.817629 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:14:02.833962 ignition[1360]: INFO : PUT result: OK
Mar 14 00:14:02.841799 ignition[1360]: INFO : umount: umount passed
Mar 14 00:14:02.843944 ignition[1360]: INFO : Ignition finished successfully
Mar 14 00:14:02.844708 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:14:02.846139 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:14:02.848937 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:14:02.852591 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:14:02.852782 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:14:02.856736 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:14:02.856937 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:14:02.860518 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 14 00:14:02.860606 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 14 00:14:02.863199 systemd[1]: Stopped target network.target - Network.
Mar 14 00:14:02.867874 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:14:02.867983 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:14:02.872497 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:14:02.876572 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:14:02.880641 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:14:02.885436 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:14:02.887504 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:14:02.890338 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:14:02.890427 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:14:02.894021 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:14:02.894101 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:14:02.897146 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:14:02.897250 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:14:02.903289 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:14:02.903790 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:14:02.909060 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:14:02.910557 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:14:02.917833 systemd-networkd[1118]: eth0: DHCPv6 lease lost
Mar 14 00:14:02.922740 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:14:02.923433 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:14:02.928293 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:14:02.944176 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:14:02.962374 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:14:02.962478 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:14:02.986091 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:14:02.991399 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:14:02.991546 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:14:02.994492 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:14:02.994621 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:14:02.997304 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:14:02.997397 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:14:03.002471 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:14:03.002568 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:14:03.027469 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:14:03.051636 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:14:03.053948 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:14:03.057849 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:14:03.058023 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:14:03.075643 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:14:03.076572 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:14:03.082516 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:14:03.082663 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:14:03.086535 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:14:03.086634 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:14:03.089192 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:14:03.089406 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:14:03.101090 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:14:03.101194 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:14:03.103700 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:14:03.103809 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:14:03.120151 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:14:03.128644 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:14:03.128771 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:14:03.131816 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:14:03.131902 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:14:03.135291 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:14:03.135980 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:14:03.158601 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:14:03.160804 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:14:03.167488 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:14:03.179025 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:14:03.198457 systemd[1]: Switching root.
Mar 14 00:14:03.235801 systemd-journald[252]: Journal stopped
Mar 14 00:14:05.124956 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:14:05.125087 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:14:05.125132 kernel: SELinux: policy capability open_perms=1
Mar 14 00:14:05.125169 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:14:05.125200 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:14:05.125232 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:14:05.125264 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:14:05.125302 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:14:05.125333 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:14:05.125366 kernel: audit: type=1403 audit(1773447243.471:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:14:05.125407 systemd[1]: Successfully loaded SELinux policy in 50.788ms.
Mar 14 00:14:05.125446 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.479ms.
Mar 14 00:14:05.125484 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:14:05.125517 systemd[1]: Detected virtualization amazon.
Mar 14 00:14:05.125550 systemd[1]: Detected architecture arm64.
Mar 14 00:14:05.125581 systemd[1]: Detected first boot.
Mar 14 00:14:05.125613 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:14:05.125644 zram_generator::config[1404]: No configuration found.
Mar 14 00:14:05.125680 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:14:05.125711 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:14:05.125776 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:14:05.125816 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:14:05.125850 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:14:05.125883 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:14:05.125915 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:14:05.125949 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:14:05.125980 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:14:05.126052 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:14:05.126093 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:14:05.126131 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:14:05.126166 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:14:05.126198 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:14:05.126713 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:14:05.126840 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:14:05.126878 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:14:05.126913 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:14:05.126966 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:14:05.127004 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:14:05.127042 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:14:05.127072 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:14:05.127103 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:14:05.127135 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:14:05.127167 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:14:05.127200 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:14:05.127232 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:14:05.127264 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:14:05.127298 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:14:05.127332 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:14:05.127374 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:14:05.127404 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:14:05.127437 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:14:05.127469 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:14:05.127502 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:14:05.127535 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:14:05.127566 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:14:05.127602 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:14:05.127632 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:14:05.127663 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:14:05.127698 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:14:05.127728 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:14:05.127818 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:14:05.127855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:14:05.127885 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:14:05.127921 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:14:05.127952 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:14:05.127984 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:14:05.128017 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:14:05.128048 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:14:05.128077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:14:05.128110 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:14:05.128140 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:14:05.128176 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:14:05.128208 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:14:05.128238 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:14:05.128268 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:14:05.128298 kernel: ACPI: bus type drm_connector registered
Mar 14 00:14:05.128329 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:14:05.128360 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:14:05.128393 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:14:05.128426 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:14:05.128458 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 00:14:05.128492 systemd[1]: Stopped verity-setup.service.
Mar 14 00:14:05.128522 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:14:05.128554 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:14:05.128585 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:14:05.128614 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:14:05.128647 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:14:05.128678 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:14:05.128713 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:14:05.128743 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:14:05.128823 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:14:05.128857 kernel: fuse: init (API version 7.39)
Mar 14 00:14:05.128886 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:14:05.128919 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:14:05.128956 kernel: loop: module loaded
Mar 14 00:14:05.128986 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:14:05.129016 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:14:05.129046 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:14:05.129076 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:14:05.129106 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:14:05.129136 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:14:05.129167 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:14:05.129854 systemd-journald[1489]: Collecting audit messages is disabled.
Mar 14 00:14:05.129934 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:14:05.129966 systemd-journald[1489]: Journal started
Mar 14 00:14:05.130018 systemd-journald[1489]: Runtime Journal (/run/log/journal/ec22b15d3135b214081462ed4681642a) is 8.0M, max 75.3M, 67.3M free.
Mar 14 00:14:04.509887 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:14:04.533635 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 14 00:14:04.534488 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 00:14:05.137596 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:14:05.139408 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:14:05.143853 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:14:05.148174 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:14:05.182157 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:14:05.190346 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:14:05.198969 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:14:05.211032 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:14:05.213860 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:14:05.214065 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:14:05.218578 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:14:05.233015 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:14:05.244055 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:14:05.246598 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:14:05.257069 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:14:05.262083 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:14:05.264802 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:14:05.274326 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:14:05.277057 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:14:05.286149 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:14:05.293113 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:14:05.301081 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:14:05.308204 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:14:05.311348 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:14:05.314540 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:14:05.384921 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:14:05.387820 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:14:05.401064 systemd-journald[1489]: Time spent on flushing to /var/log/journal/ec22b15d3135b214081462ed4681642a is 143.881ms for 900 entries.
Mar 14 00:14:05.401064 systemd-journald[1489]: System Journal (/var/log/journal/ec22b15d3135b214081462ed4681642a) is 8.0M, max 195.6M, 187.6M free.
Mar 14 00:14:05.574292 systemd-journald[1489]: Received client request to flush runtime journal.
Mar 14 00:14:05.574391 kernel: loop0: detected capacity change from 0 to 209336
Mar 14 00:14:05.574456 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:14:05.403430 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:14:05.485478 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:14:05.509026 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:14:05.520193 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:14:05.526695 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:14:05.531564 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:14:05.550904 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:14:05.566321 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:14:05.586916 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:14:05.592493 udevadm[1545]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 14 00:14:05.614356 kernel: loop1: detected capacity change from 0 to 52536
Mar 14 00:14:05.651658 systemd-tmpfiles[1549]: ACLs are not supported, ignoring.
Mar 14 00:14:05.651699 systemd-tmpfiles[1549]: ACLs are not supported, ignoring.
Mar 14 00:14:05.667297 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:14:05.722804 kernel: loop2: detected capacity change from 0 to 114328
Mar 14 00:14:05.790859 kernel: loop3: detected capacity change from 0 to 114432
Mar 14 00:14:05.828799 kernel: loop4: detected capacity change from 0 to 209336
Mar 14 00:14:05.867870 kernel: loop5: detected capacity change from 0 to 52536
Mar 14 00:14:05.897874 kernel: loop6: detected capacity change from 0 to 114328
Mar 14 00:14:05.925804 kernel: loop7: detected capacity change from 0 to 114432
Mar 14 00:14:05.958988 (sd-merge)[1558]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 14 00:14:05.961195 (sd-merge)[1558]: Merged extensions into '/usr'.
Mar 14 00:14:05.975013 systemd[1]: Reloading requested from client PID 1533 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:14:05.975045 systemd[1]: Reloading...
Mar 14 00:14:06.179826 zram_generator::config[1584]: No configuration found.
Mar 14 00:14:06.265846 ldconfig[1528]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 00:14:06.494294 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:14:06.614700 systemd[1]: Reloading finished in 638 ms.
Mar 14 00:14:06.655472 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 14 00:14:06.659434 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:14:06.663802 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:14:06.680119 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:14:06.685952 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:14:06.696053 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:14:06.726913 systemd[1]: Reloading requested from client PID 1637 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:14:06.726978 systemd[1]: Reloading...
Mar 14 00:14:06.746345 systemd-tmpfiles[1638]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 00:14:06.748027 systemd-tmpfiles[1638]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 00:14:06.752262 systemd-tmpfiles[1638]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 14 00:14:06.752936 systemd-tmpfiles[1638]: ACLs are not supported, ignoring.
Mar 14 00:14:06.753094 systemd-tmpfiles[1638]: ACLs are not supported, ignoring.
Mar 14 00:14:06.765120 systemd-tmpfiles[1638]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:14:06.765145 systemd-tmpfiles[1638]: Skipping /boot
Mar 14 00:14:06.800592 systemd-tmpfiles[1638]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:14:06.800629 systemd-tmpfiles[1638]: Skipping /boot
Mar 14 00:14:06.819955 systemd-udevd[1639]: Using default interface naming scheme 'v255'.
Mar 14 00:14:06.957788 zram_generator::config[1665]: No configuration found.
Mar 14 00:14:07.057888 (udev-worker)[1684]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:14:07.246808 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1675)
Mar 14 00:14:07.378566 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:14:07.533597 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 14 00:14:07.534164 systemd[1]: Reloading finished in 806 ms.
Mar 14 00:14:07.566851 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:14:07.574852 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:14:07.630622 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 14 00:14:07.650812 systemd[1]: Finished ensure-sysext.service.
Mar 14 00:14:07.685425 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 14 00:14:07.699028 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:14:07.712077 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 14 00:14:07.717189 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:14:07.722087 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 14 00:14:07.732103 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:14:07.743106 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:14:07.753094 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:14:07.759049 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:14:07.764898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:14:07.772604 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 14 00:14:07.779161 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 14 00:14:07.796033 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:14:07.804850 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:14:07.815216 systemd[1]: Reached target time-set.target - System Time Set.
Mar 14 00:14:07.820063 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 14 00:14:07.835124 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:14:07.848509 lvm[1840]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:14:07.855701 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:14:07.856088 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:14:07.876164 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:14:07.877878 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:14:07.894302 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 14 00:14:07.914551 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:14:07.915060 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:14:07.918467 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:14:07.919930 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:14:07.924627 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:14:07.924813 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:14:07.945592 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 14 00:14:07.959661 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 14 00:14:07.970882 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 14 00:14:07.985204 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 14 00:14:08.009367 augenrules[1872]: No rules
Mar 14 00:14:08.016393 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:14:08.026269 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 14 00:14:08.029400 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:14:08.039426 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 14 00:14:08.067489 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 14 00:14:08.084897 lvm[1880]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:14:08.093181 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 14 00:14:08.096649 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 14 00:14:08.104971 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 14 00:14:08.138285 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 14 00:14:08.209396 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:14:08.252779 systemd-resolved[1852]: Positive Trust Anchors:
Mar 14 00:14:08.252808 systemd-resolved[1852]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:14:08.252874 systemd-resolved[1852]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:14:08.255055 systemd-networkd[1851]: lo: Link UP
Mar 14 00:14:08.255069 systemd-networkd[1851]: lo: Gained carrier
Mar 14 00:14:08.258388 systemd-networkd[1851]: Enumeration completed
Mar 14 00:14:08.258725 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:14:08.261904 systemd-networkd[1851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:14:08.261912 systemd-networkd[1851]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:14:08.264467 systemd-networkd[1851]: eth0: Link UP
Mar 14 00:14:08.265024 systemd-networkd[1851]: eth0: Gained carrier
Mar 14 00:14:08.265204 systemd-networkd[1851]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:14:08.271027 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 14 00:14:08.278863 systemd-networkd[1851]: eth0: DHCPv4 address 172.31.26.130/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 14 00:14:08.282079 systemd-resolved[1852]: Defaulting to hostname 'linux'.
Mar 14 00:14:08.287559 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:14:08.290329 systemd[1]: Reached target network.target - Network.
Mar 14 00:14:08.295155 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:14:08.299037 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:14:08.301557 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 14 00:14:08.304377 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 14 00:14:08.307498 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 14 00:14:08.310136 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 14 00:14:08.313020 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 14 00:14:08.315833 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 14 00:14:08.315888 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:14:08.317888 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:14:08.321998 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 14 00:14:08.326976 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 14 00:14:08.341227 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 14 00:14:08.344648 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 14 00:14:08.347123 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:14:08.349365 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:14:08.352071 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:14:08.352232 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:14:08.354469 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 14 00:14:08.372968 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 14 00:14:08.381304 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 14 00:14:08.402842 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 14 00:14:08.413104 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 14 00:14:08.416992 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 14 00:14:08.422252 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 14 00:14:08.452562 jq[1900]: false
Mar 14 00:14:08.445067 systemd[1]: Started ntpd.service - Network Time Service.
Mar 14 00:14:08.455025 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 14 00:14:08.475557 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 14 00:14:08.488086 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 14 00:14:08.496255 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 14 00:14:08.508772 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 14 00:14:08.512833 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 14 00:14:08.513719 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 14 00:14:08.516596 systemd[1]: Starting update-engine.service - Update Engine...
Mar 14 00:14:08.523966 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 14 00:14:08.533503 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 14 00:14:08.533930 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 14 00:14:08.575417 dbus-daemon[1899]: [system] SELinux support is enabled
Mar 14 00:14:08.577568 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 14 00:14:08.580003 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 14 00:14:08.587919 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:57:55 UTC 2026 (1): Starting
Mar 14 00:14:08.587919 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 14 00:14:08.587919 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: ----------------------------------------------------
Mar 14 00:14:08.587919 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: ntp-4 is maintained by Network Time Foundation,
Mar 14 00:14:08.587919 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 14 00:14:08.587919 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: corporation. Support and training for ntp-4 are
Mar 14 00:14:08.587919 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: available at https://www.nwtime.org/support
Mar 14 00:14:08.587919 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: ----------------------------------------------------
Mar 14 00:14:08.585933 ntpd[1905]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:57:55 UTC 2026 (1): Starting
Mar 14 00:14:08.583984 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 14 00:14:08.585980 ntpd[1905]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 14 00:14:08.586001 ntpd[1905]: ----------------------------------------------------
Mar 14 00:14:08.586021 ntpd[1905]: ntp-4 is maintained by Network Time Foundation,
Mar 14 00:14:08.586040 ntpd[1905]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 14 00:14:08.586058 ntpd[1905]: corporation. Support and training for ntp-4 are
Mar 14 00:14:08.586078 ntpd[1905]: available at https://www.nwtime.org/support
Mar 14 00:14:08.586097 ntpd[1905]: ----------------------------------------------------
Mar 14 00:14:08.598098 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 14 00:14:08.598218 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 14 00:14:08.601413 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 14 00:14:08.620087 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: proto: precision = 0.108 usec (-23)
Mar 14 00:14:08.617999 ntpd[1905]: proto: precision = 0.108 usec (-23)
Mar 14 00:14:08.601469 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 14 00:14:08.625482 ntpd[1905]: basedate set to 2026-03-01
Mar 14 00:14:08.625534 ntpd[1905]: gps base set to 2026-03-01 (week 2408)
Mar 14 00:14:08.625697 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: basedate set to 2026-03-01
Mar 14 00:14:08.625697 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: gps base set to 2026-03-01 (week 2408)
Mar 14 00:14:08.629026 dbus-daemon[1899]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1851 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 14 00:14:08.640101 dbus-daemon[1899]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 14 00:14:08.648228 ntpd[1905]: Listen and drop on 0 v6wildcard [::]:123
Mar 14 00:14:08.648408 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 14 00:14:08.650258 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: Listen and drop on 0 v6wildcard [::]:123
Mar 14 00:14:08.650258 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 14 00:14:08.648315 ntpd[1905]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 14 00:14:08.663078 ntpd[1905]: Listen normally on 2 lo 127.0.0.1:123
Mar 14 00:14:08.666948 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: Listen normally on 2 lo 127.0.0.1:123
Mar 14 00:14:08.666948 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: Listen normally on 3 eth0 172.31.26.130:123
Mar 14 00:14:08.666948 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: Listen normally on 4 lo [::1]:123
Mar 14 00:14:08.666948 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: bind(21) AF_INET6 fe80::439:a2ff:fe30:3aef%2#123 flags 0x11 failed: Cannot assign requested address
Mar 14 00:14:08.666948 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: unable to create socket on eth0 (5) for fe80::439:a2ff:fe30:3aef%2#123
Mar 14 00:14:08.666948 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: failed to init interface for address fe80::439:a2ff:fe30:3aef%2
Mar 14 00:14:08.666948 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: Listening on routing socket on fd #21 for interface updates
Mar 14 00:14:08.663173 ntpd[1905]: Listen normally on 3 eth0 172.31.26.130:123
Mar 14 00:14:08.669481 extend-filesystems[1902]: Found loop4
Mar 14 00:14:08.669481 extend-filesystems[1902]: Found loop5
Mar 14 00:14:08.669481 extend-filesystems[1902]: Found loop6
Mar 14 00:14:08.669481 extend-filesystems[1902]: Found loop7
Mar 14 00:14:08.669481 extend-filesystems[1902]: Found nvme0n1
Mar 14 00:14:08.663250 ntpd[1905]: Listen normally on 4 lo [::1]:123
Mar 14 00:14:08.689020 extend-filesystems[1902]: Found nvme0n1p1
Mar 14 00:14:08.689020 extend-filesystems[1902]: Found nvme0n1p2
Mar 14 00:14:08.689020 extend-filesystems[1902]: Found nvme0n1p3
Mar 14 00:14:08.689020 extend-filesystems[1902]: Found usr
Mar 14 00:14:08.689020 extend-filesystems[1902]: Found nvme0n1p4
Mar 14 00:14:08.689020 extend-filesystems[1902]: Found nvme0n1p6
Mar 14 00:14:08.689020 extend-filesystems[1902]: Found nvme0n1p7
Mar 14 00:14:08.689020 extend-filesystems[1902]: Found nvme0n1p9
Mar 14 00:14:08.689020 extend-filesystems[1902]: Checking size of /dev/nvme0n1p9
Mar 14 00:14:08.663332 ntpd[1905]: bind(21) AF_INET6 fe80::439:a2ff:fe30:3aef%2#123 flags 0x11 failed: Cannot assign requested address
Mar 14 00:14:08.742521 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 14 00:14:08.742521 ntpd[1905]: 14 Mar 00:14:08 ntpd[1905]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 14 00:14:08.743009 jq[1913]: true
Mar 14 00:14:08.706372 systemd[1]: motdgen.service: Deactivated successfully.
Mar 14 00:14:08.743314 tar[1919]: linux-arm64/LICENSE
Mar 14 00:14:08.743314 tar[1919]: linux-arm64/helm
Mar 14 00:14:08.663372 ntpd[1905]: unable to create socket on eth0 (5) for fe80::439:a2ff:fe30:3aef%2#123
Mar 14 00:14:08.706822 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 14 00:14:08.663401 ntpd[1905]: failed to init interface for address fe80::439:a2ff:fe30:3aef%2
Mar 14 00:14:08.663460 ntpd[1905]: Listening on routing socket on fd #21 for interface updates
Mar 14 00:14:08.714824 ntpd[1905]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 14 00:14:08.714873 ntpd[1905]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 14 00:14:08.781860 extend-filesystems[1902]: Resized partition /dev/nvme0n1p9
Mar 14 00:14:08.791688 extend-filesystems[1947]: resize2fs 1.47.1 (20-May-2024)
Mar 14 00:14:08.806040 coreos-metadata[1898]: Mar 14 00:14:08.797 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 14 00:14:08.806040 coreos-metadata[1898]: Mar 14 00:14:08.801 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Mar 14 00:14:08.812538 coreos-metadata[1898]: Mar 14 00:14:08.806 INFO Fetch successful
Mar 14 00:14:08.812538 coreos-metadata[1898]: Mar 14 00:14:08.807 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Mar 14 00:14:08.812538 coreos-metadata[1898]: Mar 14 00:14:08.807 INFO Fetch successful
Mar 14 00:14:08.812538 coreos-metadata[1898]: Mar 14 00:14:08.807 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Mar 14 00:14:08.812538 coreos-metadata[1898]: Mar 14 00:14:08.810 INFO Fetch successful
Mar 14 00:14:08.812538 coreos-metadata[1898]: Mar 14 00:14:08.810 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Mar 14 00:14:08.824784 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Mar 14 00:14:08.818539 (ntainerd)[1937]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 14 00:14:08.825350 coreos-metadata[1898]: Mar 14 00:14:08.813 INFO Fetch successful
Mar 14 00:14:08.825350 coreos-metadata[1898]: Mar 14 00:14:08.817 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Mar 14 00:14:08.825350 coreos-metadata[1898]: Mar 14 00:14:08.817 INFO Fetch failed with 404: resource not found
Mar 14 00:14:08.825350 coreos-metadata[1898]: Mar 14 00:14:08.817 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Mar 14 00:14:08.825350 coreos-metadata[1898]: Mar 14 00:14:08.817 INFO Fetch successful
Mar 14 00:14:08.825350 coreos-metadata[1898]: Mar 14 00:14:08.817 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Mar 14 00:14:08.825350 coreos-metadata[1898]: Mar 14 00:14:08.824 INFO Fetch successful
Mar 14 00:14:08.825350 coreos-metadata[1898]: Mar 14 00:14:08.824 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Mar 14 00:14:08.825350 coreos-metadata[1898]: Mar 14 00:14:08.825 INFO Fetch successful
Mar 14 00:14:08.825350 coreos-metadata[1898]: Mar 14 00:14:08.825 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Mar 14 00:14:08.828838 coreos-metadata[1898]: Mar 14 00:14:08.827 INFO Fetch successful
Mar 14 00:14:08.828838 coreos-metadata[1898]: Mar 14 00:14:08.827 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Mar 14 00:14:08.837968 coreos-metadata[1898]: Mar 14 00:14:08.835 INFO Fetch successful
Mar 14 00:14:08.857555 jq[1941]: true
Mar 14 00:14:08.917825 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 14 00:14:08.933092 update_engine[1912]: I20260314 00:14:08.928627 1912 main.cc:92] Flatcar Update Engine starting
Mar 14 00:14:08.948854 update_engine[1912]: I20260314 00:14:08.946491 1912 update_check_scheduler.cc:74] Next update check in 9m43s
Mar 14 00:14:08.946680 systemd[1]: Started update-engine.service - Update Engine.
Mar 14 00:14:08.961123 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 14 00:14:08.965868 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 14 00:14:08.969977 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 14 00:14:09.105813 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Mar 14 00:14:09.105906 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1675)
Mar 14 00:14:09.115854 systemd-logind[1911]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 14 00:14:09.116475 systemd-logind[1911]: Watching system buttons on /dev/input/event1 (Sleep Button)
Mar 14 00:14:09.118422 systemd-logind[1911]: New seat seat0.
Mar 14 00:14:09.119489 extend-filesystems[1947]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Mar 14 00:14:09.119489 extend-filesystems[1947]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 14 00:14:09.119489 extend-filesystems[1947]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Mar 14 00:14:09.139880 extend-filesystems[1902]: Resized filesystem in /dev/nvme0n1p9
Mar 14 00:14:09.128709 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 14 00:14:09.130065 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 14 00:14:09.151244 bash[1980]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:14:09.157833 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 14 00:14:09.161074 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 14 00:14:09.211742 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 14 00:14:09.223245 systemd[1]: Starting sshkeys.service...
Mar 14 00:14:09.285227 dbus-daemon[1899]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 14 00:14:09.288339 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 14 00:14:09.303022 dbus-daemon[1899]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1929 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 14 00:14:09.336054 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 14 00:14:09.354472 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 14 00:14:09.362420 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 14 00:14:09.382070 polkitd[2016]: Started polkitd version 121
Mar 14 00:14:09.410516 polkitd[2016]: Loading rules from directory /etc/polkit-1/rules.d
Mar 14 00:14:09.410645 polkitd[2016]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 14 00:14:09.413445 polkitd[2016]: Finished loading, compiling and executing 2 rules
Mar 14 00:14:09.420232 dbus-daemon[1899]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 14 00:14:09.422123 systemd[1]: Started polkit.service - Authorization Manager.
Mar 14 00:14:09.425513 polkitd[2016]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 14 00:14:09.543966 systemd-hostnamed[1929]: Hostname set to (transient)
Mar 14 00:14:09.544122 systemd-resolved[1852]: System hostname changed to 'ip-172-31-26-130'.
Mar 14 00:14:09.552623 locksmithd[1964]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 14 00:14:09.587045 ntpd[1905]: bind(24) AF_INET6 fe80::439:a2ff:fe30:3aef%2#123 flags 0x11 failed: Cannot assign requested address
Mar 14 00:14:09.588354 ntpd[1905]: 14 Mar 00:14:09 ntpd[1905]: bind(24) AF_INET6 fe80::439:a2ff:fe30:3aef%2#123 flags 0x11 failed: Cannot assign requested address
Mar 14 00:14:09.588354 ntpd[1905]: 14 Mar 00:14:09 ntpd[1905]: unable to create socket on eth0 (6) for fe80::439:a2ff:fe30:3aef%2#123
Mar 14 00:14:09.588354 ntpd[1905]: 14 Mar 00:14:09 ntpd[1905]: failed to init interface for address fe80::439:a2ff:fe30:3aef%2
Mar 14 00:14:09.587114 ntpd[1905]: unable to create socket on eth0 (6) for fe80::439:a2ff:fe30:3aef%2#123
Mar 14 00:14:09.587144 ntpd[1905]: failed to init interface for address fe80::439:a2ff:fe30:3aef%2
Mar 14 00:14:09.720437 coreos-metadata[2023]: Mar 14 00:14:09.720 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Mar 14 00:14:09.723137 coreos-metadata[2023]: Mar 14 00:14:09.722 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Mar 14 00:14:09.724730 coreos-metadata[2023]: Mar 14 00:14:09.724 INFO Fetch successful
Mar 14 00:14:09.724730 coreos-metadata[2023]: Mar 14 00:14:09.724 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Mar 14 00:14:09.727216 coreos-metadata[2023]: Mar 14 00:14:09.726 INFO Fetch successful
Mar 14 00:14:09.728502 unknown[2023]: wrote ssh authorized keys file for user: core
Mar 14 00:14:09.766420 update-ssh-keys[2097]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:14:09.771499 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 14 00:14:09.791249 systemd[1]: Finished sshkeys.service.
Mar 14 00:14:09.880846 containerd[1937]: time="2026-03-14T00:14:09.880698395Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 14 00:14:09.976678 containerd[1937]: time="2026-03-14T00:14:09.976505411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:14:09.981041 containerd[1937]: time="2026-03-14T00:14:09.980975555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:14:09.982180 containerd[1937]: time="2026-03-14T00:14:09.982148327Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 14 00:14:09.982301 containerd[1937]: time="2026-03-14T00:14:09.982273631Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 14 00:14:09.982701 containerd[1937]: time="2026-03-14T00:14:09.982671359Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 14 00:14:09.983863 containerd[1937]: time="2026-03-14T00:14:09.982815335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 14 00:14:09.983863 containerd[1937]: time="2026-03-14T00:14:09.982982771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:14:09.983863 containerd[1937]: time="2026-03-14T00:14:09.983024303Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:14:09.983863 containerd[1937]: time="2026-03-14T00:14:09.983332739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:14:09.983863 containerd[1937]: time="2026-03-14T00:14:09.983369207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 14 00:14:09.983863 containerd[1937]: time="2026-03-14T00:14:09.983404667Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:14:09.983863 containerd[1937]: time="2026-03-14T00:14:09.983430335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 14 00:14:09.983863 containerd[1937]: time="2026-03-14T00:14:09.983586167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:14:09.984549 containerd[1937]: time="2026-03-14T00:14:09.984513731Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:14:09.987619 containerd[1937]: time="2026-03-14T00:14:09.987003611Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:14:09.987619 containerd[1937]: time="2026-03-14T00:14:09.987047651Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 14 00:14:09.987619 containerd[1937]: time="2026-03-14T00:14:09.987241715Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 14 00:14:09.987619 containerd[1937]: time="2026-03-14T00:14:09.987339599Z" level=info msg="metadata content store policy set" policy=shared
Mar 14 00:14:09.992614 containerd[1937]: time="2026-03-14T00:14:09.992563571Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 14 00:14:09.992845 containerd[1937]: time="2026-03-14T00:14:09.992816723Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 14 00:14:09.993024 containerd[1937]: time="2026-03-14T00:14:09.992995187Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 14 00:14:09.993140 containerd[1937]: time="2026-03-14T00:14:09.993113099Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 14 00:14:09.993357 containerd[1937]: time="2026-03-14T00:14:09.993327695Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 14 00:14:09.994167 containerd[1937]: time="2026-03-14T00:14:09.994135451Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 14 00:14:09.995236 containerd[1937]: time="2026-03-14T00:14:09.995201951Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 14 00:14:09.996159 containerd[1937]: time="2026-03-14T00:14:09.996127572Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 14 00:14:09.997487 containerd[1937]: time="2026-03-14T00:14:09.997455108Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 14 00:14:09.999776 containerd[1937]: time="2026-03-14T00:14:09.997610448Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 14 00:14:09.999776 containerd[1937]: time="2026-03-14T00:14:09.997652448Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 14 00:14:09.999776 containerd[1937]: time="2026-03-14T00:14:09.997683384Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 14 00:14:09.999776 containerd[1937]: time="2026-03-14T00:14:09.997713156Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 14 00:14:09.999776 containerd[1937]: time="2026-03-14T00:14:09.997744848Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 14 00:14:09.999776 containerd[1937]: time="2026-03-14T00:14:09.997803432Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 14 00:14:09.999776 containerd[1937]: time="2026-03-14T00:14:09.997835148Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 14 00:14:09.999776 containerd[1937]: time="2026-03-14T00:14:09.997864644Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 14 00:14:09.999776 containerd[1937]: time="2026-03-14T00:14:09.997894608Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 14 00:14:09.999776 containerd[1937]: time="2026-03-14T00:14:09.997935600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 14 00:14:09.999776 containerd[1937]: time="2026-03-14T00:14:09.997966680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 14 00:14:09.999776 containerd[1937]: time="2026-03-14T00:14:09.998013360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 14 00:14:09.999776 containerd[1937]: time="2026-03-14T00:14:09.998044452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 14 00:14:09.999776 containerd[1937]: time="2026-03-14T00:14:09.998076060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 14 00:14:10.000397 containerd[1937]: time="2026-03-14T00:14:09.998106276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 14 00:14:10.000397 containerd[1937]: time="2026-03-14T00:14:09.998134332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 14 00:14:10.000397 containerd[1937]: time="2026-03-14T00:14:09.998165772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 14 00:14:10.000397 containerd[1937]: time="2026-03-14T00:14:09.998197248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 14 00:14:10.000397 containerd[1937]: time="2026-03-14T00:14:09.998230908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 14 00:14:10.000397 containerd[1937]: time="2026-03-14T00:14:09.998260428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 14 00:14:10.000397 containerd[1937]: time="2026-03-14T00:14:09.998288244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 14 00:14:10.000397 containerd[1937]: time="2026-03-14T00:14:09.998316816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 14 00:14:10.000397 containerd[1937]: time="2026-03-14T00:14:09.998360328Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 14 00:14:10.000397 containerd[1937]: time="2026-03-14T00:14:09.998404620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 14 00:14:10.000397 containerd[1937]: time="2026-03-14T00:14:09.998451276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 14 00:14:10.000397 containerd[1937]: time="2026-03-14T00:14:09.998478804Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 14 00:14:10.000397 containerd[1937]: time="2026-03-14T00:14:09.998716044Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 14 00:14:10.000397 containerd[1937]: time="2026-03-14T00:14:09.998781372Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 14 00:14:10.000974 containerd[1937]: time="2026-03-14T00:14:09.998812980Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 14 00:14:10.000974 containerd[1937]: time="2026-03-14T00:14:09.998843640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 14 00:14:10.000974 containerd[1937]: time="2026-03-14T00:14:09.998869188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 14 00:14:10.000974 containerd[1937]: time="2026-03-14T00:14:09.998898048Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 14 00:14:10.000974 containerd[1937]: time="2026-03-14T00:14:09.998939880Z" level=info msg="NRI interface is disabled by configuration."
Mar 14 00:14:10.000974 containerd[1937]: time="2026-03-14T00:14:09.998967168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 14 00:14:10.001270 containerd[1937]: time="2026-03-14T00:14:09.999463188Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 14 00:14:10.001270 containerd[1937]: time="2026-03-14T00:14:09.999569592Z" level=info msg="Connect containerd service"
Mar 14 00:14:10.001270 containerd[1937]: time="2026-03-14T00:14:09.999631128Z" level=info msg="using legacy CRI server"
Mar 14 00:14:10.001270 containerd[1937]: time="2026-03-14T00:14:09.999648684Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 14 00:14:10.007784 containerd[1937]: time="2026-03-14T00:14:10.005390984Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 14 00:14:10.009140 containerd[1937]: time="2026-03-14T00:14:10.009068552Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:14:10.012064 containerd[1937]: time="2026-03-14T00:14:10.011975672Z" level=info msg="Start subscribing containerd event"
Mar 14 00:14:10.012174 containerd[1937]: time="2026-03-14T00:14:10.012090272Z" level=info msg="Start recovering state"
Mar 14 00:14:10.012274 containerd[1937]: time="2026-03-14T00:14:10.012233960Z" level=info msg="Start event monitor"
Mar 14 00:14:10.012332 containerd[1937]: time="2026-03-14T00:14:10.012270764Z" level=info msg="Start snapshots syncer"
Mar 14 00:14:10.012332 containerd[1937]: time="2026-03-14T00:14:10.012297368Z" level=info msg="Start cni network conf syncer for default"
Mar 14 00:14:10.012441 containerd[1937]: time="2026-03-14T00:14:10.012327740Z" level=info msg="Start streaming server"
Mar 14 00:14:10.013193 containerd[1937]: time="2026-03-14T00:14:10.013140644Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 14 00:14:10.017033 containerd[1937]: time="2026-03-14T00:14:10.016976564Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 14 00:14:10.017240 systemd[1]: Started containerd.service - containerd container runtime.
Mar 14 00:14:10.028094 containerd[1937]: time="2026-03-14T00:14:10.028029692Z" level=info msg="containerd successfully booted in 0.148785s"
Mar 14 00:14:10.294937 systemd-networkd[1851]: eth0: Gained IPv6LL
Mar 14 00:14:10.302853 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 14 00:14:10.306812 systemd[1]: Reached target network-online.target - Network is Online.
Mar 14 00:14:10.321370 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Mar 14 00:14:10.335277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:10.343659 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 14 00:14:10.434547 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 14 00:14:10.482374 amazon-ssm-agent[2105]: Initializing new seelog logger
Mar 14 00:14:10.483903 amazon-ssm-agent[2105]: New Seelog Logger Creation Complete
Mar 14 00:14:10.485149 amazon-ssm-agent[2105]: 2026/03/14 00:14:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:14:10.485149 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:14:10.485149 amazon-ssm-agent[2105]: 2026/03/14 00:14:10 processing appconfig overrides
Mar 14 00:14:10.487205 amazon-ssm-agent[2105]: 2026/03/14 00:14:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:14:10.487802 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:14:10.487802 amazon-ssm-agent[2105]: 2026/03/14 00:14:10 processing appconfig overrides
Mar 14 00:14:10.487802 amazon-ssm-agent[2105]: 2026/03/14 00:14:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:14:10.487802 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:14:10.488781 amazon-ssm-agent[2105]: 2026/03/14 00:14:10 processing appconfig overrides
Mar 14 00:14:10.490038 amazon-ssm-agent[2105]: 2026-03-14 00:14:10 INFO Proxy environment variables:
Mar 14 00:14:10.497789 amazon-ssm-agent[2105]: 2026/03/14 00:14:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:14:10.497789 amazon-ssm-agent[2105]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Mar 14 00:14:10.497789 amazon-ssm-agent[2105]: 2026/03/14 00:14:10 processing appconfig overrides
Mar 14 00:14:10.595576 amazon-ssm-agent[2105]: 2026-03-14 00:14:10 INFO http_proxy:
Mar 14 00:14:10.615131 tar[1919]: linux-arm64/README.md
Mar 14 00:14:10.657302 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 14 00:14:10.697769 amazon-ssm-agent[2105]: 2026-03-14 00:14:10 INFO no_proxy:
Mar 14 00:14:10.793114 amazon-ssm-agent[2105]: 2026-03-14 00:14:10 INFO https_proxy:
Mar 14 00:14:10.892855 amazon-ssm-agent[2105]: 2026-03-14 00:14:10 INFO Checking if agent identity type OnPrem can be assumed
Mar 14 00:14:10.991013 amazon-ssm-agent[2105]: 2026-03-14 00:14:10 INFO Checking if agent identity type EC2 can be assumed
Mar 14 00:14:11.090007 amazon-ssm-agent[2105]: 2026-03-14 00:14:10 INFO Agent will take identity from EC2
Mar 14 00:14:11.189340 amazon-ssm-agent[2105]: 2026-03-14 00:14:10 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 14 00:14:11.288284 amazon-ssm-agent[2105]: 2026-03-14 00:14:10 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 14 00:14:11.363580 amazon-ssm-agent[2105]: 2026-03-14 00:14:10 INFO [amazon-ssm-agent] using named pipe channel for IPC
Mar 14 00:14:11.365583 amazon-ssm-agent[2105]: 2026-03-14 00:14:10 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Mar 14 00:14:11.365583 amazon-ssm-agent[2105]: 2026-03-14 00:14:10 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Mar 14 00:14:11.365583 amazon-ssm-agent[2105]: 2026-03-14 00:14:10 INFO [amazon-ssm-agent] Starting Core Agent
Mar 14 00:14:11.365583 amazon-ssm-agent[2105]: 2026-03-14 00:14:10 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Mar 14 00:14:11.365583 amazon-ssm-agent[2105]: 2026-03-14 00:14:10 INFO [Registrar] Starting registrar module
Mar 14 00:14:11.365583 amazon-ssm-agent[2105]: 2026-03-14 00:14:10 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Mar 14 00:14:11.365583 amazon-ssm-agent[2105]: 2026-03-14 00:14:11 INFO [EC2Identity] EC2 registration was successful.
Mar 14 00:14:11.365583 amazon-ssm-agent[2105]: 2026-03-14 00:14:11 INFO [CredentialRefresher] credentialRefresher has started
Mar 14 00:14:11.365583 amazon-ssm-agent[2105]: 2026-03-14 00:14:11 INFO [CredentialRefresher] Starting credentials refresher loop
Mar 14 00:14:11.365583 amazon-ssm-agent[2105]: 2026-03-14 00:14:11 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Mar 14 00:14:11.387533 amazon-ssm-agent[2105]: 2026-03-14 00:14:11 INFO [CredentialRefresher] Next credential rotation will be in 31.966623563466666 minutes
Mar 14 00:14:11.470777 sshd_keygen[1943]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 14 00:14:11.513888 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 14 00:14:11.530037 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 14 00:14:11.540353 systemd[1]: Started sshd@0-172.31.26.130:22-68.220.241.50:54792.service - OpenSSH per-connection server daemon (68.220.241.50:54792).
Mar 14 00:14:11.550523 systemd[1]: issuegen.service: Deactivated successfully.
Mar 14 00:14:11.550949 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 14 00:14:11.569470 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 14 00:14:11.607400 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 14 00:14:11.624445 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 14 00:14:11.636556 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 14 00:14:11.642248 systemd[1]: Reached target getty.target - Login Prompts.
Mar 14 00:14:12.109783 sshd[2136]: Accepted publickey for core from 68.220.241.50 port 54792 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:12.111796 sshd[2136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:12.133578 systemd-logind[1911]: New session 1 of user core.
Mar 14 00:14:12.138989 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 14 00:14:12.149294 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 14 00:14:12.187841 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 14 00:14:12.203042 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 14 00:14:12.226090 (systemd)[2147]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 14 00:14:12.407904 amazon-ssm-agent[2105]: 2026-03-14 00:14:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Mar 14 00:14:12.469848 systemd[2147]: Queued start job for default target default.target.
Mar 14 00:14:12.478346 systemd[2147]: Created slice app.slice - User Application Slice.
Mar 14 00:14:12.478430 systemd[2147]: Reached target paths.target - Paths.
Mar 14 00:14:12.478465 systemd[2147]: Reached target timers.target - Timers.
Mar 14 00:14:12.482927 systemd[2147]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 14 00:14:12.509940 amazon-ssm-agent[2105]: 2026-03-14 00:14:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2154) started
Mar 14 00:14:12.521650 systemd[2147]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 14 00:14:12.522222 systemd[2147]: Reached target sockets.target - Sockets.
Mar 14 00:14:12.522263 systemd[2147]: Reached target basic.target - Basic System.
Mar 14 00:14:12.522363 systemd[2147]: Reached target default.target - Main User Target.
Mar 14 00:14:12.522432 systemd[2147]: Startup finished in 283ms.
Mar 14 00:14:12.523260 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 14 00:14:12.536743 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 14 00:14:12.587121 ntpd[1905]: Listen normally on 7 eth0 [fe80::439:a2ff:fe30:3aef%2]:123
Mar 14 00:14:12.587634 ntpd[1905]: 14 Mar 00:14:12 ntpd[1905]: Listen normally on 7 eth0 [fe80::439:a2ff:fe30:3aef%2]:123
Mar 14 00:14:12.610845 amazon-ssm-agent[2105]: 2026-03-14 00:14:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Mar 14 00:14:12.695672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:12.699898 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 14 00:14:12.706989 systemd[1]: Startup finished in 1.253s (kernel) + 7.647s (initrd) + 9.285s (userspace) = 18.187s.
Mar 14 00:14:12.725632 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:14:12.933475 systemd[1]: Started sshd@1-172.31.26.130:22-68.220.241.50:49286.service - OpenSSH per-connection server daemon (68.220.241.50:49286).
Mar 14 00:14:13.452942 sshd[2178]: Accepted publickey for core from 68.220.241.50 port 49286 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:13.457229 sshd[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:13.471180 systemd-logind[1911]: New session 2 of user core.
Mar 14 00:14:13.483064 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 14 00:14:13.724230 kubelet[2170]: E0314 00:14:13.724024 2170 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:14:13.729220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:14:13.729569 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:14:13.730809 systemd[1]: kubelet.service: Consumed 1.376s CPU time.
Mar 14 00:14:13.818740 sshd[2178]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:13.826113 systemd[1]: sshd@1-172.31.26.130:22-68.220.241.50:49286.service: Deactivated successfully.
Mar 14 00:14:13.826326 systemd-logind[1911]: Session 2 logged out. Waiting for processes to exit.
Mar 14 00:14:13.829983 systemd[1]: session-2.scope: Deactivated successfully.
Mar 14 00:14:13.834224 systemd-logind[1911]: Removed session 2.
Mar 14 00:14:13.930232 systemd[1]: Started sshd@2-172.31.26.130:22-68.220.241.50:49300.service - OpenSSH per-connection server daemon (68.220.241.50:49300).
Mar 14 00:14:14.472015 sshd[2191]: Accepted publickey for core from 68.220.241.50 port 49300 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:14.474679 sshd[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:14.483090 systemd-logind[1911]: New session 3 of user core.
Mar 14 00:14:14.493091 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 14 00:14:14.851149 sshd[2191]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:14.857132 systemd[1]: sshd@2-172.31.26.130:22-68.220.241.50:49300.service: Deactivated successfully.
Mar 14 00:14:14.861466 systemd[1]: session-3.scope: Deactivated successfully.
Mar 14 00:14:14.866085 systemd-logind[1911]: Session 3 logged out. Waiting for processes to exit.
Mar 14 00:14:14.868246 systemd-logind[1911]: Removed session 3.
Mar 14 00:14:14.944258 systemd[1]: Started sshd@3-172.31.26.130:22-68.220.241.50:49306.service - OpenSSH per-connection server daemon (68.220.241.50:49306).
Mar 14 00:14:15.441818 sshd[2199]: Accepted publickey for core from 68.220.241.50 port 49306 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:15.443694 sshd[2199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:15.451874 systemd-logind[1911]: New session 4 of user core.
Mar 14 00:14:15.462109 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 14 00:14:15.233414 systemd-resolved[1852]: Clock change detected. Flushing caches.
Mar 14 00:14:15.241480 systemd-journald[1489]: Time jumped backwards, rotating.
Mar 14 00:14:15.447730 sshd[2199]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:15.454708 systemd[1]: sshd@3-172.31.26.130:22-68.220.241.50:49306.service: Deactivated successfully.
Mar 14 00:14:15.456296 systemd-logind[1911]: Session 4 logged out. Waiting for processes to exit.
Mar 14 00:14:15.459410 systemd[1]: session-4.scope: Deactivated successfully.
Mar 14 00:14:15.463800 systemd-logind[1911]: Removed session 4.
Mar 14 00:14:15.541517 systemd[1]: Started sshd@4-172.31.26.130:22-68.220.241.50:49308.service - OpenSSH per-connection server daemon (68.220.241.50:49308).
Mar 14 00:14:16.060665 sshd[2207]: Accepted publickey for core from 68.220.241.50 port 49308 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:16.062575 sshd[2207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:16.072370 systemd-logind[1911]: New session 5 of user core.
Mar 14 00:14:16.079312 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 14 00:14:16.359916 sudo[2210]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 14 00:14:16.360829 sudo[2210]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:14:16.377025 sudo[2210]: pam_unix(sudo:session): session closed for user root
Mar 14 00:14:16.456693 sshd[2207]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:16.463796 systemd-logind[1911]: Session 5 logged out. Waiting for processes to exit.
Mar 14 00:14:16.464992 systemd[1]: sshd@4-172.31.26.130:22-68.220.241.50:49308.service: Deactivated successfully.
Mar 14 00:14:16.468391 systemd[1]: session-5.scope: Deactivated successfully.
Mar 14 00:14:16.473178 systemd-logind[1911]: Removed session 5.
Mar 14 00:14:16.553565 systemd[1]: Started sshd@5-172.31.26.130:22-68.220.241.50:49314.service - OpenSSH per-connection server daemon (68.220.241.50:49314).
Mar 14 00:14:17.070358 sshd[2215]: Accepted publickey for core from 68.220.241.50 port 49314 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:17.073201 sshd[2215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:17.083342 systemd-logind[1911]: New session 6 of user core.
Mar 14 00:14:17.093304 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 14 00:14:17.356593 sudo[2219]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 14 00:14:17.357526 sudo[2219]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:14:17.365262 sudo[2219]: pam_unix(sudo:session): session closed for user root
Mar 14 00:14:17.377404 sudo[2218]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 14 00:14:17.378257 sudo[2218]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:14:17.404525 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 14 00:14:17.419634 auditctl[2222]: No rules
Mar 14 00:14:17.420556 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 14 00:14:17.421048 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 14 00:14:17.435853 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:14:17.488801 augenrules[2240]: No rules
Mar 14 00:14:17.492104 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:14:17.496358 sudo[2218]: pam_unix(sudo:session): session closed for user root
Mar 14 00:14:17.575471 sshd[2215]: pam_unix(sshd:session): session closed for user core
Mar 14 00:14:17.580838 systemd-logind[1911]: Session 6 logged out. Waiting for processes to exit.
Mar 14 00:14:17.582650 systemd[1]: sshd@5-172.31.26.130:22-68.220.241.50:49314.service: Deactivated successfully.
Mar 14 00:14:17.587153 systemd[1]: session-6.scope: Deactivated successfully.
Mar 14 00:14:17.591510 systemd-logind[1911]: Removed session 6.
Mar 14 00:14:17.687534 systemd[1]: Started sshd@6-172.31.26.130:22-68.220.241.50:49330.service - OpenSSH per-connection server daemon (68.220.241.50:49330).
Mar 14 00:14:18.237316 sshd[2248]: Accepted publickey for core from 68.220.241.50 port 49330 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:14:18.240447 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:14:18.252419 systemd-logind[1911]: New session 7 of user core.
Mar 14 00:14:18.261284 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 14 00:14:18.540445 sudo[2251]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 14 00:14:18.541308 sudo[2251]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 14 00:14:19.095475 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 14 00:14:19.095814 (dockerd)[2266]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 14 00:14:19.544241 dockerd[2266]: time="2026-03-14T00:14:19.542604125Z" level=info msg="Starting up"
Mar 14 00:14:19.702017 systemd[1]: var-lib-docker-metacopy\x2dcheck3430291915-merged.mount: Deactivated successfully.
Mar 14 00:14:19.717907 dockerd[2266]: time="2026-03-14T00:14:19.717797994Z" level=info msg="Loading containers: start."
Mar 14 00:14:19.903157 kernel: Initializing XFRM netlink socket
Mar 14 00:14:19.943346 (udev-worker)[2290]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:14:20.056848 systemd-networkd[1851]: docker0: Link UP
Mar 14 00:14:20.083000 dockerd[2266]: time="2026-03-14T00:14:20.082877644Z" level=info msg="Loading containers: done."
Mar 14 00:14:20.110700 dockerd[2266]: time="2026-03-14T00:14:20.110595160Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 14 00:14:20.112153 dockerd[2266]: time="2026-03-14T00:14:20.111138712Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 14 00:14:20.112153 dockerd[2266]: time="2026-03-14T00:14:20.111557392Z" level=info msg="Daemon has completed initialization"
Mar 14 00:14:20.111508 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1360736545-merged.mount: Deactivated successfully.
Mar 14 00:14:20.192379 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 14 00:14:20.193215 dockerd[2266]: time="2026-03-14T00:14:20.192220864Z" level=info msg="API listen on /run/docker.sock"
Mar 14 00:14:21.052307 containerd[1937]: time="2026-03-14T00:14:21.051674632Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 14 00:14:21.753892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231026093.mount: Deactivated successfully.
Mar 14 00:14:23.374573 containerd[1937]: time="2026-03-14T00:14:23.374463296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:23.377196 containerd[1937]: time="2026-03-14T00:14:23.377106116Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=27390174"
Mar 14 00:14:23.381118 containerd[1937]: time="2026-03-14T00:14:23.381018632Z" level=info msg="ImageCreate event name:\"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:23.390021 containerd[1937]: time="2026-03-14T00:14:23.389221772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:23.393341 containerd[1937]: time="2026-03-14T00:14:23.392298824Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"27386773\" in 2.340543456s"
Mar 14 00:14:23.393341 containerd[1937]: time="2026-03-14T00:14:23.392406428Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:6dbc3c6e88c8bca1294fa5fafe73dbe01fb58d40e562dbfc8b8b4195940270c8\""
Mar 14 00:14:23.393617 containerd[1937]: time="2026-03-14T00:14:23.393551192Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 14 00:14:23.626273 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:14:23.643062 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:14:24.036482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:14:24.054587 (kubelet)[2471]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:14:24.157137 kubelet[2471]: E0314 00:14:24.157024 2471 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:14:24.165591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:14:24.166130 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:14:24.913312 containerd[1937]: time="2026-03-14T00:14:24.913255320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:24.916336 containerd[1937]: time="2026-03-14T00:14:24.916276572Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=23552106"
Mar 14 00:14:24.919156 containerd[1937]: time="2026-03-14T00:14:24.919079172Z" level=info msg="ImageCreate event name:\"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:24.925399 containerd[1937]: time="2026-03-14T00:14:24.925314504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:24.927777 containerd[1937]: time="2026-03-14T00:14:24.927718224Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"25136510\" in 1.534105448s"
Mar 14 00:14:24.928341 containerd[1937]: time="2026-03-14T00:14:24.927906060Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:c58be92c40cc41b6c83c361b92110b587104386f93c5b7a9fc66dffdd1523d17\""
Mar 14 00:14:24.928739 containerd[1937]: time="2026-03-14T00:14:24.928693116Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 14 00:14:26.225003 containerd[1937]: time="2026-03-14T00:14:26.222985858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:26.227292 containerd[1937]: time="2026-03-14T00:14:26.227231062Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=18301305"
Mar 14 00:14:26.230159 containerd[1937]: time="2026-03-14T00:14:26.230098786Z" level=info msg="ImageCreate event name:\"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:26.237292 containerd[1937]: time="2026-03-14T00:14:26.237214822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:26.240128 containerd[1937]: time="2026-03-14T00:14:26.240043066Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"19885727\" in 1.311286722s"
Mar 14 00:14:26.240128 containerd[1937]: time="2026-03-14T00:14:26.240117646Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:5dcd4a0c93d95bd92241ba240a130ffbde67814e2b417a13c25738a7b0204e95\""
Mar 14 00:14:26.240922 containerd[1937]: time="2026-03-14T00:14:26.240850198Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 14 00:14:27.644513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount29923829.mount: Deactivated successfully.
Mar 14 00:14:28.269124 containerd[1937]: time="2026-03-14T00:14:28.269029356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:28.271104 containerd[1937]: time="2026-03-14T00:14:28.271029840Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=28148870"
Mar 14 00:14:28.273229 containerd[1937]: time="2026-03-14T00:14:28.273132984Z" level=info msg="ImageCreate event name:\"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:28.276559 containerd[1937]: time="2026-03-14T00:14:28.276472224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:28.278314 containerd[1937]: time="2026-03-14T00:14:28.278112240Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"28147889\" in 2.037194122s"
Mar 14 00:14:28.278314 containerd[1937]: time="2026-03-14T00:14:28.278171736Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:fb4f3cb8cccaec5975890c2ee802236a557e3f108da9c3c66ebec335ac73dcc9\""
Mar 14 00:14:28.279565 containerd[1937]: time="2026-03-14T00:14:28.279497676Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 14 00:14:28.854981 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3818934206.mount: Deactivated successfully.
Mar 14 00:14:30.124458 containerd[1937]: time="2026-03-14T00:14:30.124357273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:30.128845 containerd[1937]: time="2026-03-14T00:14:30.128751217Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117"
Mar 14 00:14:30.133002 containerd[1937]: time="2026-03-14T00:14:30.132113090Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:30.138432 containerd[1937]: time="2026-03-14T00:14:30.138335630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:30.141624 containerd[1937]: time="2026-03-14T00:14:30.141558098Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.861988962s"
Mar 14 00:14:30.143083 containerd[1937]: time="2026-03-14T00:14:30.141771098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Mar 14 00:14:30.146019 containerd[1937]: time="2026-03-14T00:14:30.145919186Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 14 00:14:30.693055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521590600.mount: Deactivated successfully.
Mar 14 00:14:30.709736 containerd[1937]: time="2026-03-14T00:14:30.708001048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:30.711698 containerd[1937]: time="2026-03-14T00:14:30.711642112Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703"
Mar 14 00:14:30.714509 containerd[1937]: time="2026-03-14T00:14:30.714443248Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:30.721221 containerd[1937]: time="2026-03-14T00:14:30.721133548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:14:30.723031 containerd[1937]: time="2026-03-14T00:14:30.722967592Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 576.685538ms"
Mar 14
00:14:30.723244 containerd[1937]: time="2026-03-14T00:14:30.723206848Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 14 00:14:30.724394 containerd[1937]: time="2026-03-14T00:14:30.724228288Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 14 00:14:31.284170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1034117636.mount: Deactivated successfully. Mar 14 00:14:32.885162 containerd[1937]: time="2026-03-14T00:14:32.885066379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:32.887634 containerd[1937]: time="2026-03-14T00:14:32.887543347Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=21885780" Mar 14 00:14:32.890981 containerd[1937]: time="2026-03-14T00:14:32.889564375Z" level=info msg="ImageCreate event name:\"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:32.899233 containerd[1937]: time="2026-03-14T00:14:32.899151163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:32.902346 containerd[1937]: time="2026-03-14T00:14:32.902254867Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"21882972\" in 2.177652179s" Mar 14 00:14:32.902346 containerd[1937]: time="2026-03-14T00:14:32.902331691Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image 
reference \"sha256:1211402d28f5813ed906916bfcdd0a7404c2f9048ef5bb54387a6745bc410eca\"" Mar 14 00:14:34.416914 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 14 00:14:34.428161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:34.818388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:34.831039 (kubelet)[2644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:14:34.922979 kubelet[2644]: E0314 00:14:34.921643 2644 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:14:34.926699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:14:34.928308 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:14:39.230042 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 14 00:14:40.422882 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:40.435465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:40.496065 systemd[1]: Reloading requested from client PID 2661 ('systemctl') (unit session-7.scope)... Mar 14 00:14:40.496102 systemd[1]: Reloading... Mar 14 00:14:40.708997 zram_generator::config[2704]: No configuration found. Mar 14 00:14:40.983466 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:14:41.160855 systemd[1]: Reloading finished in 664 ms. 
Mar 14 00:14:41.254968 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 14 00:14:41.255331 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 14 00:14:41.255801 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:41.272563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:41.587229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:41.603487 (kubelet)[2765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:14:41.675403 kubelet[2765]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:14:41.675905 kubelet[2765]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:14:41.676037 kubelet[2765]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 14 00:14:41.676284 kubelet[2765]: I0314 00:14:41.676227 2765 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:14:42.371751 kubelet[2765]: I0314 00:14:42.371655 2765 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 14 00:14:42.371751 kubelet[2765]: I0314 00:14:42.371719 2765 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:14:42.373007 kubelet[2765]: I0314 00:14:42.372384 2765 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:14:42.433014 kubelet[2765]: I0314 00:14:42.432970 2765 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:14:42.436720 kubelet[2765]: E0314 00:14:42.436650 2765 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.26.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:14:42.449194 kubelet[2765]: E0314 00:14:42.449129 2765 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:14:42.449402 kubelet[2765]: I0314 00:14:42.449376 2765 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 14 00:14:42.456837 kubelet[2765]: I0314 00:14:42.456500 2765 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 14 00:14:42.458772 kubelet[2765]: I0314 00:14:42.458678 2765 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:14:42.459107 kubelet[2765]: I0314 00:14:42.458751 2765 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:14:42.459107 kubelet[2765]: I0314 00:14:42.459108 2765 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 
00:14:42.459364 kubelet[2765]: I0314 00:14:42.459135 2765 container_manager_linux.go:303] "Creating device plugin manager" Mar 14 00:14:42.459583 kubelet[2765]: I0314 00:14:42.459519 2765 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:14:42.465731 kubelet[2765]: I0314 00:14:42.465670 2765 kubelet.go:480] "Attempting to sync node with API server" Mar 14 00:14:42.465731 kubelet[2765]: I0314 00:14:42.465730 2765 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:14:42.467094 kubelet[2765]: I0314 00:14:42.466757 2765 kubelet.go:386] "Adding apiserver pod source" Mar 14 00:14:42.467094 kubelet[2765]: I0314 00:14:42.466820 2765 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:14:42.474166 kubelet[2765]: E0314 00:14:42.473102 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-130&limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:14:42.475572 kubelet[2765]: E0314 00:14:42.475514 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.26.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:14:42.476035 kubelet[2765]: I0314 00:14:42.476001 2765 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:14:42.477352 kubelet[2765]: I0314 00:14:42.477305 2765 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 
00:14:42.477755 kubelet[2765]: W0314 00:14:42.477729 2765 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 14 00:14:42.487451 kubelet[2765]: I0314 00:14:42.487413 2765 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 14 00:14:42.487696 kubelet[2765]: I0314 00:14:42.487675 2765 server.go:1289] "Started kubelet" Mar 14 00:14:42.497391 kubelet[2765]: I0314 00:14:42.496823 2765 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:14:42.499873 kubelet[2765]: I0314 00:14:42.499807 2765 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:14:42.500908 kubelet[2765]: I0314 00:14:42.500770 2765 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:14:42.505416 kubelet[2765]: I0314 00:14:42.505350 2765 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:14:42.509463 kubelet[2765]: I0314 00:14:42.509074 2765 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 14 00:14:42.509463 kubelet[2765]: I0314 00:14:42.509076 2765 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:14:42.509463 kubelet[2765]: I0314 00:14:42.509466 2765 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:14:42.512053 kubelet[2765]: E0314 00:14:42.511102 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-130\" not found" Mar 14 00:14:42.512563 kubelet[2765]: I0314 00:14:42.512525 2765 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 14 00:14:42.512761 kubelet[2765]: I0314 00:14:42.512740 2765 reconciler.go:26] "Reconciler: start to sync state" Mar 14 00:14:42.518127 kubelet[2765]: E0314 00:14:42.514760 2765 event.go:368] "Unable 
to write event (may retry after sleeping)" err="Post \"https://172.31.26.130:6443/api/v1/namespaces/default/events\": dial tcp 172.31.26.130:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-26-130.189c8ce8448e318f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-26-130,UID:ip-172-31-26-130,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-26-130,},FirstTimestamp:2026-03-14 00:14:42.487603599 +0000 UTC m=+0.876526025,LastTimestamp:2026-03-14 00:14:42.487603599 +0000 UTC m=+0.876526025,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-26-130,}" Mar 14 00:14:42.519765 kubelet[2765]: I0314 00:14:42.519707 2765 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:14:42.523650 kubelet[2765]: I0314 00:14:42.523608 2765 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:14:42.524059 kubelet[2765]: I0314 00:14:42.523836 2765 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:14:42.524254 kubelet[2765]: I0314 00:14:42.524199 2765 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 14 00:14:42.531493 kubelet[2765]: E0314 00:14:42.531412 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:14:42.531673 kubelet[2765]: E0314 00:14:42.531595 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-130?timeout=10s\": dial tcp 172.31.26.130:6443: connect: connection refused" interval="200ms" Mar 14 00:14:42.568048 kubelet[2765]: I0314 00:14:42.567715 2765 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:14:42.568048 kubelet[2765]: I0314 00:14:42.567749 2765 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:14:42.568048 kubelet[2765]: I0314 00:14:42.567799 2765 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:14:42.568312 kubelet[2765]: I0314 00:14:42.568206 2765 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 14 00:14:42.568312 kubelet[2765]: I0314 00:14:42.568268 2765 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 14 00:14:42.568437 kubelet[2765]: I0314 00:14:42.568324 2765 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 14 00:14:42.568437 kubelet[2765]: I0314 00:14:42.568341 2765 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 00:14:42.568437 kubelet[2765]: E0314 00:14:42.568410 2765 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:14:42.569608 kubelet[2765]: E0314 00:14:42.569239 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:14:42.574814 kubelet[2765]: I0314 00:14:42.574365 2765 policy_none.go:49] "None policy: Start" Mar 14 00:14:42.574814 kubelet[2765]: I0314 00:14:42.574411 2765 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 00:14:42.574814 kubelet[2765]: I0314 00:14:42.574436 2765 state_mem.go:35] "Initializing new in-memory state store" Mar 14 00:14:42.586430 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 14 00:14:42.603646 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 14 00:14:42.611853 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 14 00:14:42.613884 kubelet[2765]: E0314 00:14:42.613063 2765 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-130\" not found" Mar 14 00:14:42.626266 kubelet[2765]: E0314 00:14:42.624121 2765 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:14:42.626266 kubelet[2765]: I0314 00:14:42.624435 2765 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:14:42.626266 kubelet[2765]: I0314 00:14:42.624459 2765 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:14:42.626266 kubelet[2765]: I0314 00:14:42.626039 2765 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:14:42.631499 kubelet[2765]: E0314 00:14:42.629989 2765 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:14:42.631499 kubelet[2765]: E0314 00:14:42.630061 2765 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-130\" not found" Mar 14 00:14:42.691980 systemd[1]: Created slice kubepods-burstable-pod88a7ea18542baaad9a44b4e62aa2d906.slice - libcontainer container kubepods-burstable-pod88a7ea18542baaad9a44b4e62aa2d906.slice. 
Mar 14 00:14:42.715083 kubelet[2765]: E0314 00:14:42.714686 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Mar 14 00:14:42.715677 kubelet[2765]: I0314 00:14:42.715254 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/497b82c61908daf3ed4f48123c9943f9-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-130\" (UID: \"497b82c61908daf3ed4f48123c9943f9\") " pod="kube-system/kube-scheduler-ip-172-31-26-130" Mar 14 00:14:42.715677 kubelet[2765]: I0314 00:14:42.715376 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88a7ea18542baaad9a44b4e62aa2d906-ca-certs\") pod \"kube-apiserver-ip-172-31-26-130\" (UID: \"88a7ea18542baaad9a44b4e62aa2d906\") " pod="kube-system/kube-apiserver-ip-172-31-26-130" Mar 14 00:14:42.715677 kubelet[2765]: I0314 00:14:42.715446 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88a7ea18542baaad9a44b4e62aa2d906-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-130\" (UID: \"88a7ea18542baaad9a44b4e62aa2d906\") " pod="kube-system/kube-apiserver-ip-172-31-26-130" Mar 14 00:14:42.715677 kubelet[2765]: I0314 00:14:42.715522 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88a7ea18542baaad9a44b4e62aa2d906-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-130\" (UID: \"88a7ea18542baaad9a44b4e62aa2d906\") " pod="kube-system/kube-apiserver-ip-172-31-26-130" Mar 14 00:14:42.715677 kubelet[2765]: I0314 00:14:42.715642 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c1f2b7119e8ec924f0ed15942329f94f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"c1f2b7119e8ec924f0ed15942329f94f\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Mar 14 00:14:42.715928 kubelet[2765]: I0314 00:14:42.715720 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1f2b7119e8ec924f0ed15942329f94f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"c1f2b7119e8ec924f0ed15942329f94f\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Mar 14 00:14:42.715928 kubelet[2765]: I0314 00:14:42.715786 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1f2b7119e8ec924f0ed15942329f94f-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"c1f2b7119e8ec924f0ed15942329f94f\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Mar 14 00:14:42.715928 kubelet[2765]: I0314 00:14:42.715825 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1f2b7119e8ec924f0ed15942329f94f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"c1f2b7119e8ec924f0ed15942329f94f\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Mar 14 00:14:42.715928 kubelet[2765]: I0314 00:14:42.715892 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1f2b7119e8ec924f0ed15942329f94f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"c1f2b7119e8ec924f0ed15942329f94f\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Mar 14 00:14:42.725700 systemd[1]: Created 
slice kubepods-burstable-podc1f2b7119e8ec924f0ed15942329f94f.slice - libcontainer container kubepods-burstable-podc1f2b7119e8ec924f0ed15942329f94f.slice. Mar 14 00:14:42.729808 kubelet[2765]: E0314 00:14:42.729748 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Mar 14 00:14:42.733713 kubelet[2765]: E0314 00:14:42.732401 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-130?timeout=10s\": dial tcp 172.31.26.130:6443: connect: connection refused" interval="400ms" Mar 14 00:14:42.733713 kubelet[2765]: I0314 00:14:42.732488 2765 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-130" Mar 14 00:14:42.733713 kubelet[2765]: E0314 00:14:42.733039 2765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.130:6443/api/v1/nodes\": dial tcp 172.31.26.130:6443: connect: connection refused" node="ip-172-31-26-130" Mar 14 00:14:42.739457 systemd[1]: Created slice kubepods-burstable-pod497b82c61908daf3ed4f48123c9943f9.slice - libcontainer container kubepods-burstable-pod497b82c61908daf3ed4f48123c9943f9.slice. 
Mar 14 00:14:42.744737 kubelet[2765]: E0314 00:14:42.744689 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Mar 14 00:14:42.935988 kubelet[2765]: I0314 00:14:42.935830 2765 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-130" Mar 14 00:14:42.937380 kubelet[2765]: E0314 00:14:42.936344 2765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.130:6443/api/v1/nodes\": dial tcp 172.31.26.130:6443: connect: connection refused" node="ip-172-31-26-130" Mar 14 00:14:43.017307 containerd[1937]: time="2026-03-14T00:14:43.017218934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-130,Uid:88a7ea18542baaad9a44b4e62aa2d906,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:43.031143 containerd[1937]: time="2026-03-14T00:14:43.031062134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-130,Uid:c1f2b7119e8ec924f0ed15942329f94f,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:43.046628 containerd[1937]: time="2026-03-14T00:14:43.046553030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-130,Uid:497b82c61908daf3ed4f48123c9943f9,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:43.133860 kubelet[2765]: E0314 00:14:43.133761 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-130?timeout=10s\": dial tcp 172.31.26.130:6443: connect: connection refused" interval="800ms" Mar 14 00:14:43.339314 kubelet[2765]: I0314 00:14:43.338844 2765 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-130" Mar 14 00:14:43.339539 kubelet[2765]: E0314 00:14:43.339441 2765 kubelet_node_status.go:107] "Unable to register node with API server" 
err="Post \"https://172.31.26.130:6443/api/v1/nodes\": dial tcp 172.31.26.130:6443: connect: connection refused" node="ip-172-31-26-130" Mar 14 00:14:43.396152 kubelet[2765]: E0314 00:14:43.396051 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.26.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:14:43.401839 kubelet[2765]: E0314 00:14:43.401775 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.26.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:14:43.553437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649978998.mount: Deactivated successfully. 
Mar 14 00:14:43.563874 containerd[1937]: time="2026-03-14T00:14:43.563685040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:43.565505 containerd[1937]: time="2026-03-14T00:14:43.565444396Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:43.566326 containerd[1937]: time="2026-03-14T00:14:43.566255824Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 14 00:14:43.567297 containerd[1937]: time="2026-03-14T00:14:43.567247468Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:43.568593 containerd[1937]: time="2026-03-14T00:14:43.568553788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:14:43.570316 containerd[1937]: time="2026-03-14T00:14:43.570251104Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:43.570964 containerd[1937]: time="2026-03-14T00:14:43.570764992Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:14:43.584974 containerd[1937]: time="2026-03-14T00:14:43.584055544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:14:43.590018 
containerd[1937]: time="2026-03-14T00:14:43.589816300Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 558.627686ms" Mar 14 00:14:43.595759 containerd[1937]: time="2026-03-14T00:14:43.595696084Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 578.348666ms" Mar 14 00:14:43.598245 containerd[1937]: time="2026-03-14T00:14:43.598179616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.493746ms" Mar 14 00:14:43.741978 kubelet[2765]: E0314 00:14:43.741590 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.26.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-130&limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 14 00:14:43.767372 containerd[1937]: time="2026-03-14T00:14:43.765301313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:43.767372 containerd[1937]: time="2026-03-14T00:14:43.765427193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:43.767372 containerd[1937]: time="2026-03-14T00:14:43.765466865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:43.768274 kubelet[2765]: E0314 00:14:43.768200 2765 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.26.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.26.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 14 00:14:43.769248 containerd[1937]: time="2026-03-14T00:14:43.769103693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:43.782014 containerd[1937]: time="2026-03-14T00:14:43.780319433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:43.782791 containerd[1937]: time="2026-03-14T00:14:43.781841057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:43.782791 containerd[1937]: time="2026-03-14T00:14:43.781916453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:43.784226 containerd[1937]: time="2026-03-14T00:14:43.783909929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:43.784226 containerd[1937]: time="2026-03-14T00:14:43.783653393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:43.784226 containerd[1937]: time="2026-03-14T00:14:43.783785093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:43.784226 containerd[1937]: time="2026-03-14T00:14:43.783835085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:43.784226 containerd[1937]: time="2026-03-14T00:14:43.784077965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:43.830543 systemd[1]: Started cri-containerd-d7bb5bb142f1d85aaf7f8d1c21ad906c5b8fbbd89c605f6fa89a1e0f5987c73d.scope - libcontainer container d7bb5bb142f1d85aaf7f8d1c21ad906c5b8fbbd89c605f6fa89a1e0f5987c73d. Mar 14 00:14:43.848341 systemd[1]: Started cri-containerd-77d3dc20b93a051ae049a0398643538b68990ccdf8cbcbc407f1b46871336508.scope - libcontainer container 77d3dc20b93a051ae049a0398643538b68990ccdf8cbcbc407f1b46871336508. Mar 14 00:14:43.866890 systemd[1]: Started cri-containerd-47f47703000a4c2c8bf45f456dce6bd17e7bc23be629c750a11480b99f508b67.scope - libcontainer container 47f47703000a4c2c8bf45f456dce6bd17e7bc23be629c750a11480b99f508b67. 
Mar 14 00:14:43.934785 kubelet[2765]: E0314 00:14:43.934590 2765 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-130?timeout=10s\": dial tcp 172.31.26.130:6443: connect: connection refused" interval="1.6s" Mar 14 00:14:43.951778 containerd[1937]: time="2026-03-14T00:14:43.951147306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-130,Uid:88a7ea18542baaad9a44b4e62aa2d906,Namespace:kube-system,Attempt:0,} returns sandbox id \"77d3dc20b93a051ae049a0398643538b68990ccdf8cbcbc407f1b46871336508\"" Mar 14 00:14:43.986645 containerd[1937]: time="2026-03-14T00:14:43.986560266Z" level=info msg="CreateContainer within sandbox \"77d3dc20b93a051ae049a0398643538b68990ccdf8cbcbc407f1b46871336508\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:14:43.992291 containerd[1937]: time="2026-03-14T00:14:43.992234874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-130,Uid:497b82c61908daf3ed4f48123c9943f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7bb5bb142f1d85aaf7f8d1c21ad906c5b8fbbd89c605f6fa89a1e0f5987c73d\"" Mar 14 00:14:44.001923 containerd[1937]: time="2026-03-14T00:14:44.001739738Z" level=info msg="CreateContainer within sandbox \"d7bb5bb142f1d85aaf7f8d1c21ad906c5b8fbbd89c605f6fa89a1e0f5987c73d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:14:44.020305 containerd[1937]: time="2026-03-14T00:14:44.020146802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-130,Uid:c1f2b7119e8ec924f0ed15942329f94f,Namespace:kube-system,Attempt:0,} returns sandbox id \"47f47703000a4c2c8bf45f456dce6bd17e7bc23be629c750a11480b99f508b67\"" Mar 14 00:14:44.024759 containerd[1937]: time="2026-03-14T00:14:44.024666147Z" level=info msg="CreateContainer within sandbox 
\"77d3dc20b93a051ae049a0398643538b68990ccdf8cbcbc407f1b46871336508\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1781662406dd353ddd9b4f204dfa01d580b08031513b1d708e39ffcfe243c067\"" Mar 14 00:14:44.027327 containerd[1937]: time="2026-03-14T00:14:44.027191235Z" level=info msg="StartContainer for \"1781662406dd353ddd9b4f204dfa01d580b08031513b1d708e39ffcfe243c067\"" Mar 14 00:14:44.028255 containerd[1937]: time="2026-03-14T00:14:44.028166499Z" level=info msg="CreateContainer within sandbox \"d7bb5bb142f1d85aaf7f8d1c21ad906c5b8fbbd89c605f6fa89a1e0f5987c73d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"449be0ac4a09b6a365357fc2219efb0692234b6c67e80f2d01e208446de6bd1e\"" Mar 14 00:14:44.030459 containerd[1937]: time="2026-03-14T00:14:44.030380403Z" level=info msg="StartContainer for \"449be0ac4a09b6a365357fc2219efb0692234b6c67e80f2d01e208446de6bd1e\"" Mar 14 00:14:44.032818 containerd[1937]: time="2026-03-14T00:14:44.032737023Z" level=info msg="CreateContainer within sandbox \"47f47703000a4c2c8bf45f456dce6bd17e7bc23be629c750a11480b99f508b67\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:14:44.058195 containerd[1937]: time="2026-03-14T00:14:44.058130331Z" level=info msg="CreateContainer within sandbox \"47f47703000a4c2c8bf45f456dce6bd17e7bc23be629c750a11480b99f508b67\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1752256aa7458b7802c9bae0fc9b309c85787ae9ceffcebb9e5914161b53f352\"" Mar 14 00:14:44.059978 containerd[1937]: time="2026-03-14T00:14:44.059400231Z" level=info msg="StartContainer for \"1752256aa7458b7802c9bae0fc9b309c85787ae9ceffcebb9e5914161b53f352\"" Mar 14 00:14:44.115304 systemd[1]: Started cri-containerd-1781662406dd353ddd9b4f204dfa01d580b08031513b1d708e39ffcfe243c067.scope - libcontainer container 1781662406dd353ddd9b4f204dfa01d580b08031513b1d708e39ffcfe243c067. 
Mar 14 00:14:44.122749 systemd[1]: Started cri-containerd-449be0ac4a09b6a365357fc2219efb0692234b6c67e80f2d01e208446de6bd1e.scope - libcontainer container 449be0ac4a09b6a365357fc2219efb0692234b6c67e80f2d01e208446de6bd1e. Mar 14 00:14:44.143808 kubelet[2765]: I0314 00:14:44.143246 2765 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-130" Mar 14 00:14:44.143808 kubelet[2765]: E0314 00:14:44.143749 2765 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.26.130:6443/api/v1/nodes\": dial tcp 172.31.26.130:6443: connect: connection refused" node="ip-172-31-26-130" Mar 14 00:14:44.175300 systemd[1]: Started cri-containerd-1752256aa7458b7802c9bae0fc9b309c85787ae9ceffcebb9e5914161b53f352.scope - libcontainer container 1752256aa7458b7802c9bae0fc9b309c85787ae9ceffcebb9e5914161b53f352. Mar 14 00:14:44.244560 containerd[1937]: time="2026-03-14T00:14:44.244482592Z" level=info msg="StartContainer for \"1781662406dd353ddd9b4f204dfa01d580b08031513b1d708e39ffcfe243c067\" returns successfully" Mar 14 00:14:44.255138 containerd[1937]: time="2026-03-14T00:14:44.254689012Z" level=info msg="StartContainer for \"449be0ac4a09b6a365357fc2219efb0692234b6c67e80f2d01e208446de6bd1e\" returns successfully" Mar 14 00:14:44.311162 containerd[1937]: time="2026-03-14T00:14:44.310992052Z" level=info msg="StartContainer for \"1752256aa7458b7802c9bae0fc9b309c85787ae9ceffcebb9e5914161b53f352\" returns successfully" Mar 14 00:14:44.583125 kubelet[2765]: E0314 00:14:44.582337 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Mar 14 00:14:44.590430 kubelet[2765]: E0314 00:14:44.590371 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Mar 14 00:14:44.596708 kubelet[2765]: E0314 00:14:44.596391 2765 
kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Mar 14 00:14:45.599984 kubelet[2765]: E0314 00:14:45.597831 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Mar 14 00:14:45.601561 kubelet[2765]: E0314 00:14:45.601165 2765 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Mar 14 00:14:45.746976 kubelet[2765]: I0314 00:14:45.745968 2765 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-130" Mar 14 00:14:48.006533 kubelet[2765]: E0314 00:14:48.006467 2765 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-130\" not found" node="ip-172-31-26-130" Mar 14 00:14:48.052144 kubelet[2765]: I0314 00:14:48.051766 2765 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-130" Mar 14 00:14:48.112679 kubelet[2765]: I0314 00:14:48.111917 2765 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-130" Mar 14 00:14:48.231603 kubelet[2765]: E0314 00:14:48.231511 2765 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-130\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-26-130" Mar 14 00:14:48.231603 kubelet[2765]: I0314 00:14:48.231576 2765 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-130" Mar 14 00:14:48.279378 kubelet[2765]: E0314 00:14:48.279202 2765 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-130\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ip-172-31-26-130" Mar 14 00:14:48.279378 kubelet[2765]: I0314 00:14:48.279266 2765 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-130" Mar 14 00:14:48.287980 kubelet[2765]: E0314 00:14:48.285924 2765 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-26-130\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-26-130" Mar 14 00:14:48.478976 kubelet[2765]: I0314 00:14:48.476391 2765 apiserver.go:52] "Watching apiserver" Mar 14 00:14:48.512867 kubelet[2765]: I0314 00:14:48.512807 2765 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:14:50.384671 kubelet[2765]: I0314 00:14:50.384601 2765 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-130" Mar 14 00:14:50.595173 systemd[1]: Reloading requested from client PID 3048 ('systemctl') (unit session-7.scope)... Mar 14 00:14:50.595206 systemd[1]: Reloading... Mar 14 00:14:50.846998 zram_generator::config[3097]: No configuration found. Mar 14 00:14:51.109651 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:14:51.346297 systemd[1]: Reloading finished in 750 ms. Mar 14 00:14:51.447254 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:51.466755 systemd[1]: kubelet.service: Deactivated successfully. Mar 14 00:14:51.467525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:51.467904 systemd[1]: kubelet.service: Consumed 1.655s CPU time, 130.4M memory peak, 0B memory swap peak. Mar 14 00:14:51.477856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 14 00:14:51.867228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:51.883138 (kubelet)[3148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:14:51.987010 kubelet[3148]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:14:51.987010 kubelet[3148]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:14:51.987010 kubelet[3148]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:14:51.987010 kubelet[3148]: I0314 00:14:51.985879 3148 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:14:52.004219 kubelet[3148]: I0314 00:14:52.004177 3148 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 14 00:14:52.004394 kubelet[3148]: I0314 00:14:52.004375 3148 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:14:52.005626 kubelet[3148]: I0314 00:14:52.005112 3148 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:14:52.009830 kubelet[3148]: I0314 00:14:52.008873 3148 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 14 00:14:52.021032 kubelet[3148]: I0314 00:14:52.020986 3148 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:14:52.029188 kubelet[3148]: E0314 
00:14:52.029140 3148 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:14:52.029828 kubelet[3148]: I0314 00:14:52.029474 3148 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 14 00:14:52.034401 kubelet[3148]: I0314 00:14:52.034366 3148 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 14 00:14:52.035355 kubelet[3148]: I0314 00:14:52.035309 3148 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:14:52.035777 kubelet[3148]: I0314 00:14:52.035493 3148 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-26-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value"
:{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:14:52.036370 kubelet[3148]: I0314 00:14:52.036013 3148 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 00:14:52.036370 kubelet[3148]: I0314 00:14:52.036038 3148 container_manager_linux.go:303] "Creating device plugin manager" Mar 14 00:14:52.036370 kubelet[3148]: I0314 00:14:52.036127 3148 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:14:52.036613 kubelet[3148]: I0314 00:14:52.036594 3148 kubelet.go:480] "Attempting to sync node with API server" Mar 14 00:14:52.037438 kubelet[3148]: I0314 00:14:52.037408 3148 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:14:52.037716 kubelet[3148]: I0314 00:14:52.037680 3148 kubelet.go:386] "Adding apiserver pod source" Mar 14 00:14:52.038056 kubelet[3148]: I0314 00:14:52.037853 3148 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:14:52.046708 kubelet[3148]: I0314 00:14:52.046641 3148 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:14:52.049979 kubelet[3148]: I0314 00:14:52.048722 3148 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:14:52.054883 kubelet[3148]: I0314 00:14:52.054850 3148 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 14 00:14:52.055173 kubelet[3148]: I0314 00:14:52.055151 3148 server.go:1289] "Started kubelet" Mar 14 00:14:52.057503 
kubelet[3148]: I0314 00:14:52.057469 3148 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:14:52.068569 kubelet[3148]: I0314 00:14:52.068507 3148 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:14:52.071245 kubelet[3148]: I0314 00:14:52.071174 3148 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:14:52.078608 kubelet[3148]: I0314 00:14:52.078090 3148 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:14:52.080199 kubelet[3148]: I0314 00:14:52.073205 3148 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 14 00:14:52.097104 kubelet[3148]: I0314 00:14:52.073257 3148 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 14 00:14:52.105972 kubelet[3148]: E0314 00:14:52.075302 3148 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-26-130\" not found" Mar 14 00:14:52.114646 kubelet[3148]: I0314 00:14:52.073126 3148 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:14:52.128141 kubelet[3148]: I0314 00:14:52.086361 3148 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:14:52.128571 kubelet[3148]: I0314 00:14:52.097736 3148 reconciler.go:26] "Reconciler: start to sync state" Mar 14 00:14:52.128766 kubelet[3148]: E0314 00:14:52.113703 3148 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:14:52.177678 kubelet[3148]: I0314 00:14:52.177605 3148 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 14 00:14:52.183719 kubelet[3148]: I0314 00:14:52.182123 3148 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 14 00:14:52.183719 kubelet[3148]: I0314 00:14:52.182179 3148 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 14 00:14:52.183719 kubelet[3148]: I0314 00:14:52.182231 3148 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 00:14:52.183719 kubelet[3148]: I0314 00:14:52.182248 3148 kubelet.go:2436] "Starting kubelet main sync loop" Mar 14 00:14:52.183719 kubelet[3148]: E0314 00:14:52.183066 3148 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:14:52.214224 kubelet[3148]: I0314 00:14:52.214057 3148 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:14:52.214224 kubelet[3148]: I0314 00:14:52.214098 3148 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:14:52.214478 kubelet[3148]: I0314 00:14:52.214266 3148 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:14:52.283289 kubelet[3148]: E0314 00:14:52.283165 3148 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 14 00:14:52.328751 kubelet[3148]: I0314 00:14:52.328718 3148 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:14:52.329348 kubelet[3148]: I0314 00:14:52.328995 3148 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:14:52.329348 kubelet[3148]: I0314 00:14:52.329043 3148 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:14:52.329348 kubelet[3148]: I0314 00:14:52.329261 3148 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 14 00:14:52.329348 kubelet[3148]: I0314 00:14:52.329281 3148 state_mem.go:96] "Updated CPUSet assignments" 
assignments={} Mar 14 00:14:52.329348 kubelet[3148]: I0314 00:14:52.329314 3148 policy_none.go:49] "None policy: Start" Mar 14 00:14:52.330407 kubelet[3148]: I0314 00:14:52.329802 3148 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 14 00:14:52.330407 kubelet[3148]: I0314 00:14:52.329837 3148 state_mem.go:35] "Initializing new in-memory state store" Mar 14 00:14:52.330407 kubelet[3148]: I0314 00:14:52.330068 3148 state_mem.go:75] "Updated machine memory state" Mar 14 00:14:52.349460 kubelet[3148]: E0314 00:14:52.349408 3148 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:14:52.349734 kubelet[3148]: I0314 00:14:52.349696 3148 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:14:52.349812 kubelet[3148]: I0314 00:14:52.349731 3148 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:14:52.350460 kubelet[3148]: I0314 00:14:52.350422 3148 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:14:52.366713 kubelet[3148]: E0314 00:14:52.366653 3148 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 14 00:14:52.482830 kubelet[3148]: I0314 00:14:52.480696 3148 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-26-130" Mar 14 00:14:52.489999 kubelet[3148]: I0314 00:14:52.485193 3148 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-130" Mar 14 00:14:52.489999 kubelet[3148]: I0314 00:14:52.485312 3148 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-26-130" Mar 14 00:14:52.489999 kubelet[3148]: I0314 00:14:52.485767 3148 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-26-130" Mar 14 00:14:52.516574 kubelet[3148]: E0314 00:14:52.516497 3148 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-26-130\" already exists" pod="kube-system/kube-scheduler-ip-172-31-26-130" Mar 14 00:14:52.516872 kubelet[3148]: I0314 00:14:52.516841 3148 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-26-130" Mar 14 00:14:52.517828 kubelet[3148]: I0314 00:14:52.517154 3148 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-26-130" Mar 14 00:14:52.531875 kubelet[3148]: I0314 00:14:52.531758 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1f2b7119e8ec924f0ed15942329f94f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"c1f2b7119e8ec924f0ed15942329f94f\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Mar 14 00:14:52.533108 kubelet[3148]: I0314 00:14:52.532024 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1f2b7119e8ec924f0ed15942329f94f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: 
\"c1f2b7119e8ec924f0ed15942329f94f\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Mar 14 00:14:52.533108 kubelet[3148]: I0314 00:14:52.532073 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/497b82c61908daf3ed4f48123c9943f9-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-130\" (UID: \"497b82c61908daf3ed4f48123c9943f9\") " pod="kube-system/kube-scheduler-ip-172-31-26-130" Mar 14 00:14:52.533108 kubelet[3148]: I0314 00:14:52.532126 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88a7ea18542baaad9a44b4e62aa2d906-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-130\" (UID: \"88a7ea18542baaad9a44b4e62aa2d906\") " pod="kube-system/kube-apiserver-ip-172-31-26-130" Mar 14 00:14:52.533108 kubelet[3148]: I0314 00:14:52.532168 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1f2b7119e8ec924f0ed15942329f94f-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"c1f2b7119e8ec924f0ed15942329f94f\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Mar 14 00:14:52.533108 kubelet[3148]: I0314 00:14:52.532397 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c1f2b7119e8ec924f0ed15942329f94f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"c1f2b7119e8ec924f0ed15942329f94f\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Mar 14 00:14:52.534356 kubelet[3148]: I0314 00:14:52.532458 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/c1f2b7119e8ec924f0ed15942329f94f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-130\" (UID: \"c1f2b7119e8ec924f0ed15942329f94f\") " pod="kube-system/kube-controller-manager-ip-172-31-26-130" Mar 14 00:14:52.534356 kubelet[3148]: I0314 00:14:52.532498 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88a7ea18542baaad9a44b4e62aa2d906-ca-certs\") pod \"kube-apiserver-ip-172-31-26-130\" (UID: \"88a7ea18542baaad9a44b4e62aa2d906\") " pod="kube-system/kube-apiserver-ip-172-31-26-130" Mar 14 00:14:52.534356 kubelet[3148]: I0314 00:14:52.532533 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88a7ea18542baaad9a44b4e62aa2d906-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-130\" (UID: \"88a7ea18542baaad9a44b4e62aa2d906\") " pod="kube-system/kube-apiserver-ip-172-31-26-130" Mar 14 00:14:53.041586 kubelet[3148]: I0314 00:14:53.041452 3148 apiserver.go:52] "Watching apiserver" Mar 14 00:14:53.105554 kubelet[3148]: I0314 00:14:53.105463 3148 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 14 00:14:53.264191 kubelet[3148]: I0314 00:14:53.263615 3148 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-26-130" Mar 14 00:14:53.280447 kubelet[3148]: E0314 00:14:53.280377 3148 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-26-130\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-130" Mar 14 00:14:53.341901 kubelet[3148]: I0314 00:14:53.341722 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-130" podStartSLOduration=1.341699617 podStartE2EDuration="1.341699617s" podCreationTimestamp="2026-03-14 00:14:52 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:53.336538693 +0000 UTC m=+1.440942140" watchObservedRunningTime="2026-03-14 00:14:53.341699617 +0000 UTC m=+1.446103052" Mar 14 00:14:53.392188 kubelet[3148]: I0314 00:14:53.392092 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-130" podStartSLOduration=1.3920623810000001 podStartE2EDuration="1.392062381s" podCreationTimestamp="2026-03-14 00:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:53.368366725 +0000 UTC m=+1.472770160" watchObservedRunningTime="2026-03-14 00:14:53.392062381 +0000 UTC m=+1.496465816" Mar 14 00:14:54.169101 update_engine[1912]: I20260314 00:14:54.167922 1912 update_attempter.cc:509] Updating boot flags... Mar 14 00:14:54.282976 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3214) Mar 14 00:14:54.635027 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3215) Mar 14 00:14:54.947038 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3215) Mar 14 00:14:55.307188 kubelet[3148]: I0314 00:14:55.306673 3148 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:14:55.307927 containerd[1937]: time="2026-03-14T00:14:55.307761459Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 14 00:14:55.314889 kubelet[3148]: I0314 00:14:55.312309 3148 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:14:55.713101 kubelet[3148]: I0314 00:14:55.712908 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-130" podStartSLOduration=5.712884977 podStartE2EDuration="5.712884977s" podCreationTimestamp="2026-03-14 00:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:53.393348901 +0000 UTC m=+1.497752420" watchObservedRunningTime="2026-03-14 00:14:55.712884977 +0000 UTC m=+3.817288412" Mar 14 00:14:56.213811 systemd[1]: Created slice kubepods-besteffort-pod36a3ad6e_c439_4eac_a371_0e5738008cb6.slice - libcontainer container kubepods-besteffort-pod36a3ad6e_c439_4eac_a371_0e5738008cb6.slice. Mar 14 00:14:56.262628 kubelet[3148]: I0314 00:14:56.262552 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36a3ad6e-c439-4eac-a371-0e5738008cb6-xtables-lock\") pod \"kube-proxy-wj5kj\" (UID: \"36a3ad6e-c439-4eac-a371-0e5738008cb6\") " pod="kube-system/kube-proxy-wj5kj" Mar 14 00:14:56.262950 kubelet[3148]: I0314 00:14:56.262865 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x82f6\" (UniqueName: \"kubernetes.io/projected/36a3ad6e-c439-4eac-a371-0e5738008cb6-kube-api-access-x82f6\") pod \"kube-proxy-wj5kj\" (UID: \"36a3ad6e-c439-4eac-a371-0e5738008cb6\") " pod="kube-system/kube-proxy-wj5kj" Mar 14 00:14:56.263045 kubelet[3148]: I0314 00:14:56.262994 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36a3ad6e-c439-4eac-a371-0e5738008cb6-lib-modules\") pod \"kube-proxy-wj5kj\" 
(UID: \"36a3ad6e-c439-4eac-a371-0e5738008cb6\") " pod="kube-system/kube-proxy-wj5kj" Mar 14 00:14:56.263122 kubelet[3148]: I0314 00:14:56.263070 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/36a3ad6e-c439-4eac-a371-0e5738008cb6-kube-proxy\") pod \"kube-proxy-wj5kj\" (UID: \"36a3ad6e-c439-4eac-a371-0e5738008cb6\") " pod="kube-system/kube-proxy-wj5kj" Mar 14 00:14:56.422528 systemd[1]: Created slice kubepods-besteffort-podd9bfb8b6_e9ea_4e5f_924a_50d4d5c98c25.slice - libcontainer container kubepods-besteffort-podd9bfb8b6_e9ea_4e5f_924a_50d4d5c98c25.slice. Mar 14 00:14:56.466631 kubelet[3148]: I0314 00:14:56.466383 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9bfb8b6-e9ea-4e5f-924a-50d4d5c98c25-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-6k8kl\" (UID: \"d9bfb8b6-e9ea-4e5f-924a-50d4d5c98c25\") " pod="tigera-operator/tigera-operator-6bf85f8dd-6k8kl" Mar 14 00:14:56.466631 kubelet[3148]: I0314 00:14:56.466454 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64wjg\" (UniqueName: \"kubernetes.io/projected/d9bfb8b6-e9ea-4e5f-924a-50d4d5c98c25-kube-api-access-64wjg\") pod \"tigera-operator-6bf85f8dd-6k8kl\" (UID: \"d9bfb8b6-e9ea-4e5f-924a-50d4d5c98c25\") " pod="tigera-operator/tigera-operator-6bf85f8dd-6k8kl" Mar 14 00:14:56.534353 containerd[1937]: time="2026-03-14T00:14:56.533796605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj5kj,Uid:36a3ad6e-c439-4eac-a371-0e5738008cb6,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:56.577128 containerd[1937]: time="2026-03-14T00:14:56.576577517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:56.577128 containerd[1937]: time="2026-03-14T00:14:56.576695201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:56.577128 containerd[1937]: time="2026-03-14T00:14:56.576746993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:56.578913 containerd[1937]: time="2026-03-14T00:14:56.577078097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:56.627293 systemd[1]: Started cri-containerd-279fb715b6dca5c0be73e83a902c9fa80add809844f66c3994c8cc87e76299d5.scope - libcontainer container 279fb715b6dca5c0be73e83a902c9fa80add809844f66c3994c8cc87e76299d5. Mar 14 00:14:56.674347 containerd[1937]: time="2026-03-14T00:14:56.674117741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wj5kj,Uid:36a3ad6e-c439-4eac-a371-0e5738008cb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"279fb715b6dca5c0be73e83a902c9fa80add809844f66c3994c8cc87e76299d5\"" Mar 14 00:14:56.684699 containerd[1937]: time="2026-03-14T00:14:56.684398093Z" level=info msg="CreateContainer within sandbox \"279fb715b6dca5c0be73e83a902c9fa80add809844f66c3994c8cc87e76299d5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:14:56.703121 containerd[1937]: time="2026-03-14T00:14:56.703056677Z" level=info msg="CreateContainer within sandbox \"279fb715b6dca5c0be73e83a902c9fa80add809844f66c3994c8cc87e76299d5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f8ac824e3c394d8e00d74c2416eeb8968dbe399da3531eee366adb714b791ed3\"" Mar 14 00:14:56.705126 containerd[1937]: time="2026-03-14T00:14:56.704733713Z" level=info msg="StartContainer for \"f8ac824e3c394d8e00d74c2416eeb8968dbe399da3531eee366adb714b791ed3\"" 
Mar 14 00:14:56.732474 containerd[1937]: time="2026-03-14T00:14:56.731708826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-6k8kl,Uid:d9bfb8b6-e9ea-4e5f-924a-50d4d5c98c25,Namespace:tigera-operator,Attempt:0,}" Mar 14 00:14:56.761244 systemd[1]: Started cri-containerd-f8ac824e3c394d8e00d74c2416eeb8968dbe399da3531eee366adb714b791ed3.scope - libcontainer container f8ac824e3c394d8e00d74c2416eeb8968dbe399da3531eee366adb714b791ed3. Mar 14 00:14:56.794493 containerd[1937]: time="2026-03-14T00:14:56.794333442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:56.794664 containerd[1937]: time="2026-03-14T00:14:56.794447034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:56.794664 containerd[1937]: time="2026-03-14T00:14:56.794486802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:56.794852 containerd[1937]: time="2026-03-14T00:14:56.794637438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:56.845290 systemd[1]: Started cri-containerd-874ccf8d2d7cdeac615889c519f37d0131bfe8ce54bbd6aa0a3df16338c07ee1.scope - libcontainer container 874ccf8d2d7cdeac615889c519f37d0131bfe8ce54bbd6aa0a3df16338c07ee1. 
Mar 14 00:14:56.851303 containerd[1937]: time="2026-03-14T00:14:56.851226066Z" level=info msg="StartContainer for \"f8ac824e3c394d8e00d74c2416eeb8968dbe399da3531eee366adb714b791ed3\" returns successfully" Mar 14 00:14:56.924859 containerd[1937]: time="2026-03-14T00:14:56.924707719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-6k8kl,Uid:d9bfb8b6-e9ea-4e5f-924a-50d4d5c98c25,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"874ccf8d2d7cdeac615889c519f37d0131bfe8ce54bbd6aa0a3df16338c07ee1\"" Mar 14 00:14:56.928710 containerd[1937]: time="2026-03-14T00:14:56.928493203Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 14 00:14:57.340697 kubelet[3148]: I0314 00:14:57.340590 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wj5kj" podStartSLOduration=1.340563113 podStartE2EDuration="1.340563113s" podCreationTimestamp="2026-03-14 00:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:57.316669061 +0000 UTC m=+5.421072508" watchObservedRunningTime="2026-03-14 00:14:57.340563113 +0000 UTC m=+5.444966548" Mar 14 00:14:58.226460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3242798026.mount: Deactivated successfully. 
Mar 14 00:14:59.214985 containerd[1937]: time="2026-03-14T00:14:59.214300578Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:59.216455 containerd[1937]: time="2026-03-14T00:14:59.216384822Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 14 00:14:59.216929 containerd[1937]: time="2026-03-14T00:14:59.216891786Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:59.221460 containerd[1937]: time="2026-03-14T00:14:59.221403606Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:59.223421 containerd[1937]: time="2026-03-14T00:14:59.223224750Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.294649479s" Mar 14 00:14:59.223421 containerd[1937]: time="2026-03-14T00:14:59.223281294Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 14 00:14:59.230447 containerd[1937]: time="2026-03-14T00:14:59.230393214Z" level=info msg="CreateContainer within sandbox \"874ccf8d2d7cdeac615889c519f37d0131bfe8ce54bbd6aa0a3df16338c07ee1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 14 00:14:59.252276 containerd[1937]: time="2026-03-14T00:14:59.249823650Z" level=info msg="CreateContainer within sandbox 
\"874ccf8d2d7cdeac615889c519f37d0131bfe8ce54bbd6aa0a3df16338c07ee1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"40dd9bdc747a186e71b3da118e3394be6c9b1954b5f56d60ab98612b737f251f\"" Mar 14 00:14:59.250297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1578507165.mount: Deactivated successfully. Mar 14 00:14:59.253877 containerd[1937]: time="2026-03-14T00:14:59.253203570Z" level=info msg="StartContainer for \"40dd9bdc747a186e71b3da118e3394be6c9b1954b5f56d60ab98612b737f251f\"" Mar 14 00:14:59.322393 systemd[1]: Started cri-containerd-40dd9bdc747a186e71b3da118e3394be6c9b1954b5f56d60ab98612b737f251f.scope - libcontainer container 40dd9bdc747a186e71b3da118e3394be6c9b1954b5f56d60ab98612b737f251f. Mar 14 00:14:59.376671 containerd[1937]: time="2026-03-14T00:14:59.376477231Z" level=info msg="StartContainer for \"40dd9bdc747a186e71b3da118e3394be6c9b1954b5f56d60ab98612b737f251f\" returns successfully" Mar 14 00:15:00.341320 kubelet[3148]: I0314 00:15:00.340304 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-6k8kl" podStartSLOduration=2.042773213 podStartE2EDuration="4.340280552s" podCreationTimestamp="2026-03-14 00:14:56 +0000 UTC" firstStartedPulling="2026-03-14 00:14:56.927897415 +0000 UTC m=+5.032300850" lastFinishedPulling="2026-03-14 00:14:59.225404754 +0000 UTC m=+7.329808189" observedRunningTime="2026-03-14 00:15:00.340025336 +0000 UTC m=+8.444428795" watchObservedRunningTime="2026-03-14 00:15:00.340280552 +0000 UTC m=+8.444683987" Mar 14 00:15:06.614281 sudo[2251]: pam_unix(sudo:session): session closed for user root Mar 14 00:15:06.700268 sshd[2248]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:06.711450 systemd[1]: sshd@6-172.31.26.130:22-68.220.241.50:49330.service: Deactivated successfully. Mar 14 00:15:06.717469 systemd[1]: session-7.scope: Deactivated successfully. 
Mar 14 00:15:06.720253 systemd[1]: session-7.scope: Consumed 11.650s CPU time, 152.9M memory peak, 0B memory swap peak. Mar 14 00:15:06.724880 systemd-logind[1911]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:15:06.731196 systemd-logind[1911]: Removed session 7. Mar 14 00:15:24.300058 systemd[1]: Created slice kubepods-besteffort-podccf97ac2_6d4c_4a41_85b0_953e7ddd060a.slice - libcontainer container kubepods-besteffort-podccf97ac2_6d4c_4a41_85b0_953e7ddd060a.slice. Mar 14 00:15:24.350263 kubelet[3148]: I0314 00:15:24.350188 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlmhj\" (UniqueName: \"kubernetes.io/projected/ccf97ac2-6d4c-4a41-85b0-953e7ddd060a-kube-api-access-tlmhj\") pod \"calico-typha-69f8f7955d-8mvcg\" (UID: \"ccf97ac2-6d4c-4a41-85b0-953e7ddd060a\") " pod="calico-system/calico-typha-69f8f7955d-8mvcg" Mar 14 00:15:24.352108 kubelet[3148]: I0314 00:15:24.351875 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ccf97ac2-6d4c-4a41-85b0-953e7ddd060a-tigera-ca-bundle\") pod \"calico-typha-69f8f7955d-8mvcg\" (UID: \"ccf97ac2-6d4c-4a41-85b0-953e7ddd060a\") " pod="calico-system/calico-typha-69f8f7955d-8mvcg" Mar 14 00:15:24.352108 kubelet[3148]: I0314 00:15:24.351970 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ccf97ac2-6d4c-4a41-85b0-953e7ddd060a-typha-certs\") pod \"calico-typha-69f8f7955d-8mvcg\" (UID: \"ccf97ac2-6d4c-4a41-85b0-953e7ddd060a\") " pod="calico-system/calico-typha-69f8f7955d-8mvcg" Mar 14 00:15:24.574499 systemd[1]: Created slice kubepods-besteffort-pod1fe5bf4e_6676_4682_848a_78e111d01421.slice - libcontainer container kubepods-besteffort-pod1fe5bf4e_6676_4682_848a_78e111d01421.slice. 
Mar 14 00:15:24.609243 containerd[1937]: time="2026-03-14T00:15:24.609184328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69f8f7955d-8mvcg,Uid:ccf97ac2-6d4c-4a41-85b0-953e7ddd060a,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:24.655463 kubelet[3148]: I0314 00:15:24.654652 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1fe5bf4e-6676-4682-848a-78e111d01421-node-certs\") pod \"calico-node-9mbch\" (UID: \"1fe5bf4e-6676-4682-848a-78e111d01421\") " pod="calico-system/calico-node-9mbch" Mar 14 00:15:24.655463 kubelet[3148]: I0314 00:15:24.654728 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/1fe5bf4e-6676-4682-848a-78e111d01421-sys-fs\") pod \"calico-node-9mbch\" (UID: \"1fe5bf4e-6676-4682-848a-78e111d01421\") " pod="calico-system/calico-node-9mbch" Mar 14 00:15:24.655463 kubelet[3148]: I0314 00:15:24.654766 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1fe5bf4e-6676-4682-848a-78e111d01421-var-run-calico\") pod \"calico-node-9mbch\" (UID: \"1fe5bf4e-6676-4682-848a-78e111d01421\") " pod="calico-system/calico-node-9mbch" Mar 14 00:15:24.655463 kubelet[3148]: I0314 00:15:24.654809 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1fe5bf4e-6676-4682-848a-78e111d01421-cni-log-dir\") pod \"calico-node-9mbch\" (UID: \"1fe5bf4e-6676-4682-848a-78e111d01421\") " pod="calico-system/calico-node-9mbch" Mar 14 00:15:24.655463 kubelet[3148]: I0314 00:15:24.654872 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1fe5bf4e-6676-4682-848a-78e111d01421-tigera-ca-bundle\") pod \"calico-node-9mbch\" (UID: \"1fe5bf4e-6676-4682-848a-78e111d01421\") " pod="calico-system/calico-node-9mbch" Mar 14 00:15:24.655868 kubelet[3148]: I0314 00:15:24.654927 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/1fe5bf4e-6676-4682-848a-78e111d01421-bpffs\") pod \"calico-node-9mbch\" (UID: \"1fe5bf4e-6676-4682-848a-78e111d01421\") " pod="calico-system/calico-node-9mbch" Mar 14 00:15:24.655868 kubelet[3148]: I0314 00:15:24.655047 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1fe5bf4e-6676-4682-848a-78e111d01421-cni-net-dir\") pod \"calico-node-9mbch\" (UID: \"1fe5bf4e-6676-4682-848a-78e111d01421\") " pod="calico-system/calico-node-9mbch" Mar 14 00:15:24.655868 kubelet[3148]: I0314 00:15:24.655130 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1fe5bf4e-6676-4682-848a-78e111d01421-lib-modules\") pod \"calico-node-9mbch\" (UID: \"1fe5bf4e-6676-4682-848a-78e111d01421\") " pod="calico-system/calico-node-9mbch" Mar 14 00:15:24.655868 kubelet[3148]: I0314 00:15:24.655198 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/1fe5bf4e-6676-4682-848a-78e111d01421-nodeproc\") pod \"calico-node-9mbch\" (UID: \"1fe5bf4e-6676-4682-848a-78e111d01421\") " pod="calico-system/calico-node-9mbch" Mar 14 00:15:24.655868 kubelet[3148]: I0314 00:15:24.655235 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1fe5bf4e-6676-4682-848a-78e111d01421-policysync\") pod \"calico-node-9mbch\" (UID: 
\"1fe5bf4e-6676-4682-848a-78e111d01421\") " pod="calico-system/calico-node-9mbch" Mar 14 00:15:24.657894 kubelet[3148]: I0314 00:15:24.655374 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1fe5bf4e-6676-4682-848a-78e111d01421-var-lib-calico\") pod \"calico-node-9mbch\" (UID: \"1fe5bf4e-6676-4682-848a-78e111d01421\") " pod="calico-system/calico-node-9mbch" Mar 14 00:15:24.657894 kubelet[3148]: I0314 00:15:24.655713 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1fe5bf4e-6676-4682-848a-78e111d01421-flexvol-driver-host\") pod \"calico-node-9mbch\" (UID: \"1fe5bf4e-6676-4682-848a-78e111d01421\") " pod="calico-system/calico-node-9mbch" Mar 14 00:15:24.657894 kubelet[3148]: I0314 00:15:24.655771 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1fe5bf4e-6676-4682-848a-78e111d01421-cni-bin-dir\") pod \"calico-node-9mbch\" (UID: \"1fe5bf4e-6676-4682-848a-78e111d01421\") " pod="calico-system/calico-node-9mbch" Mar 14 00:15:24.657894 kubelet[3148]: I0314 00:15:24.656246 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1fe5bf4e-6676-4682-848a-78e111d01421-xtables-lock\") pod \"calico-node-9mbch\" (UID: \"1fe5bf4e-6676-4682-848a-78e111d01421\") " pod="calico-system/calico-node-9mbch" Mar 14 00:15:24.659119 kubelet[3148]: I0314 00:15:24.656800 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr5f2\" (UniqueName: \"kubernetes.io/projected/1fe5bf4e-6676-4682-848a-78e111d01421-kube-api-access-fr5f2\") pod \"calico-node-9mbch\" (UID: \"1fe5bf4e-6676-4682-848a-78e111d01421\") " 
pod="calico-system/calico-node-9mbch" Mar 14 00:15:24.676810 containerd[1937]: time="2026-03-14T00:15:24.676658888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:24.678610 containerd[1937]: time="2026-03-14T00:15:24.678491900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:24.682029 containerd[1937]: time="2026-03-14T00:15:24.681452096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:24.682029 containerd[1937]: time="2026-03-14T00:15:24.681763316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:24.702546 kubelet[3148]: E0314 00:15:24.700448 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9wlx" podUID="8bfac06b-f0bb-4f88-a72c-e23a86afafd1" Mar 14 00:15:24.754268 systemd[1]: Started cri-containerd-afaed7d0e583a0479ef38fbccea83f7b4bcc5a930985c32f6961a1ef126dcb79.scope - libcontainer container afaed7d0e583a0479ef38fbccea83f7b4bcc5a930985c32f6961a1ef126dcb79. 
Mar 14 00:15:24.759523 kubelet[3148]: I0314 00:15:24.759296 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8bfac06b-f0bb-4f88-a72c-e23a86afafd1-socket-dir\") pod \"csi-node-driver-s9wlx\" (UID: \"8bfac06b-f0bb-4f88-a72c-e23a86afafd1\") " pod="calico-system/csi-node-driver-s9wlx" Mar 14 00:15:24.769915 kubelet[3148]: I0314 00:15:24.769611 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8bfac06b-f0bb-4f88-a72c-e23a86afafd1-kubelet-dir\") pod \"csi-node-driver-s9wlx\" (UID: \"8bfac06b-f0bb-4f88-a72c-e23a86afafd1\") " pod="calico-system/csi-node-driver-s9wlx" Mar 14 00:15:24.769915 kubelet[3148]: I0314 00:15:24.769784 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8bfac06b-f0bb-4f88-a72c-e23a86afafd1-registration-dir\") pod \"csi-node-driver-s9wlx\" (UID: \"8bfac06b-f0bb-4f88-a72c-e23a86afafd1\") " pod="calico-system/csi-node-driver-s9wlx" Mar 14 00:15:24.769915 kubelet[3148]: I0314 00:15:24.769840 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8bfac06b-f0bb-4f88-a72c-e23a86afafd1-varrun\") pod \"csi-node-driver-s9wlx\" (UID: \"8bfac06b-f0bb-4f88-a72c-e23a86afafd1\") " pod="calico-system/csi-node-driver-s9wlx" Mar 14 00:15:24.770297 kubelet[3148]: I0314 00:15:24.770203 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb4lp\" (UniqueName: \"kubernetes.io/projected/8bfac06b-f0bb-4f88-a72c-e23a86afafd1-kube-api-access-xb4lp\") pod \"csi-node-driver-s9wlx\" (UID: \"8bfac06b-f0bb-4f88-a72c-e23a86afafd1\") " pod="calico-system/csi-node-driver-s9wlx" Mar 14 00:15:24.794301 
kubelet[3148]: E0314 00:15:24.794044 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:24.795245 kubelet[3148]: W0314 00:15:24.794088 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:24.795245 kubelet[3148]: E0314 00:15:24.795047 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:24.797983 kubelet[3148]: E0314 00:15:24.797549 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:24.797983 kubelet[3148]: W0314 00:15:24.797591 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:24.799176 kubelet[3148]: E0314 00:15:24.798341 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:24.800226 kubelet[3148]: E0314 00:15:24.800055 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:24.800226 kubelet[3148]: W0314 00:15:24.800089 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:24.800226 kubelet[3148]: E0314 00:15:24.800146 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:24.822258 kubelet[3148]: E0314 00:15:24.822123 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:24.822258 kubelet[3148]: W0314 00:15:24.822163 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:24.822258 kubelet[3148]: E0314 00:15:24.822195 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:24.871626 kubelet[3148]: E0314 00:15:24.871505 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:24.874415 kubelet[3148]: W0314 00:15:24.874007 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:24.874415 kubelet[3148]: E0314 00:15:24.874123 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:24.876190 kubelet[3148]: E0314 00:15:24.876129 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:24.877230 kubelet[3148]: W0314 00:15:24.877072 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:24.877230 kubelet[3148]: E0314 00:15:24.877148 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:24.877786 kubelet[3148]: E0314 00:15:24.877638 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:24.877786 kubelet[3148]: W0314 00:15:24.877674 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:24.877786 kubelet[3148]: E0314 00:15:24.877703 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:24.881027 kubelet[3148]: E0314 00:15:24.879011 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:24.881027 kubelet[3148]: W0314 00:15:24.879042 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:24.881027 kubelet[3148]: E0314 00:15:24.879095 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:24.881027 kubelet[3148]: E0314 00:15:24.879464 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:24.881027 kubelet[3148]: W0314 00:15:24.879482 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:24.881027 kubelet[3148]: E0314 00:15:24.879504 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:24.881027 kubelet[3148]: E0314 00:15:24.879763 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:24.881027 kubelet[3148]: W0314 00:15:24.879777 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:24.881027 kubelet[3148]: E0314 00:15:24.879796 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:24.881027 kubelet[3148]: E0314 00:15:24.880367 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:24.882374 kubelet[3148]: W0314 00:15:24.880387 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:24.882374 kubelet[3148]: E0314 00:15:24.880410 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 14 00:15:24.882374 kubelet[3148]: E0314 00:15:24.881119 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.882374 kubelet[3148]: W0314 00:15:24.881141 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.882374 kubelet[3148]: E0314 00:15:24.881167 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.882374 kubelet[3148]: E0314 00:15:24.881547 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.882374 kubelet[3148]: W0314 00:15:24.881566 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.882374 kubelet[3148]: E0314 00:15:24.881586 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.882374 kubelet[3148]: E0314 00:15:24.882038 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.882374 kubelet[3148]: W0314 00:15:24.882061 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.884639 kubelet[3148]: E0314 00:15:24.882086 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.884908 kubelet[3148]: E0314 00:15:24.884874 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.885438 kubelet[3148]: W0314 00:15:24.885393 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.885755 kubelet[3148]: E0314 00:15:24.885724 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.885883 containerd[1937]: time="2026-03-14T00:15:24.885778953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9mbch,Uid:1fe5bf4e-6676-4682-848a-78e111d01421,Namespace:calico-system,Attempt:0,}"
Mar 14 00:15:24.888186 kubelet[3148]: E0314 00:15:24.887631 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.888186 kubelet[3148]: W0314 00:15:24.887672 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.888186 kubelet[3148]: E0314 00:15:24.887706 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.891277 kubelet[3148]: E0314 00:15:24.890444 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.891277 kubelet[3148]: W0314 00:15:24.890479 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.891277 kubelet[3148]: E0314 00:15:24.890511 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.893161 kubelet[3148]: E0314 00:15:24.892542 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.893161 kubelet[3148]: W0314 00:15:24.892582 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.893161 kubelet[3148]: E0314 00:15:24.892631 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.896091 kubelet[3148]: E0314 00:15:24.895851 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.896473 kubelet[3148]: W0314 00:15:24.896428 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.897241 kubelet[3148]: E0314 00:15:24.896671 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.898984 kubelet[3148]: E0314 00:15:24.898353 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.898984 kubelet[3148]: W0314 00:15:24.898390 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.898984 kubelet[3148]: E0314 00:15:24.898560 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.902641 kubelet[3148]: E0314 00:15:24.901101 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.902641 kubelet[3148]: W0314 00:15:24.902352 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.902641 kubelet[3148]: E0314 00:15:24.902408 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.904461 kubelet[3148]: E0314 00:15:24.904174 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.904461 kubelet[3148]: W0314 00:15:24.904204 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.904461 kubelet[3148]: E0314 00:15:24.904234 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.905545 kubelet[3148]: E0314 00:15:24.905508 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.906779 kubelet[3148]: W0314 00:15:24.906121 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.906779 kubelet[3148]: E0314 00:15:24.906170 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.908165 kubelet[3148]: E0314 00:15:24.908126 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.908328 kubelet[3148]: W0314 00:15:24.908300 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.908444 kubelet[3148]: E0314 00:15:24.908419 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.911257 kubelet[3148]: E0314 00:15:24.910924 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.911257 kubelet[3148]: W0314 00:15:24.911024 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.911257 kubelet[3148]: E0314 00:15:24.911061 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.912263 kubelet[3148]: E0314 00:15:24.912226 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.913984 kubelet[3148]: W0314 00:15:24.912593 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.913984 kubelet[3148]: E0314 00:15:24.912660 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.915591 kubelet[3148]: E0314 00:15:24.915526 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.915591 kubelet[3148]: W0314 00:15:24.915572 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.915811 kubelet[3148]: E0314 00:15:24.915608 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.917543 kubelet[3148]: E0314 00:15:24.917490 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.917543 kubelet[3148]: W0314 00:15:24.917530 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.917726 kubelet[3148]: E0314 00:15:24.917565 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.919998 kubelet[3148]: E0314 00:15:24.919513 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.919998 kubelet[3148]: W0314 00:15:24.919552 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.919998 kubelet[3148]: E0314 00:15:24.919586 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:24.959016 containerd[1937]: time="2026-03-14T00:15:24.957636058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:15:24.959016 containerd[1937]: time="2026-03-14T00:15:24.957763870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:15:24.959016 containerd[1937]: time="2026-03-14T00:15:24.957794758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:24.959016 containerd[1937]: time="2026-03-14T00:15:24.958001218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:15:24.980643 kubelet[3148]: E0314 00:15:24.980369 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:24.980643 kubelet[3148]: W0314 00:15:24.980427 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:24.980643 kubelet[3148]: E0314 00:15:24.980462 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:25.018279 systemd[1]: Started cri-containerd-e16f3d19ba4aeb6a1c7c4ae35a197dccb8b7c2cc01512948a66eecb0d59aaef7.scope - libcontainer container e16f3d19ba4aeb6a1c7c4ae35a197dccb8b7c2cc01512948a66eecb0d59aaef7.
Mar 14 00:15:25.139446 containerd[1937]: time="2026-03-14T00:15:25.138295855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69f8f7955d-8mvcg,Uid:ccf97ac2-6d4c-4a41-85b0-953e7ddd060a,Namespace:calico-system,Attempt:0,} returns sandbox id \"afaed7d0e583a0479ef38fbccea83f7b4bcc5a930985c32f6961a1ef126dcb79\""
Mar 14 00:15:25.146796 containerd[1937]: time="2026-03-14T00:15:25.146637919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 14 00:15:25.226647 containerd[1937]: time="2026-03-14T00:15:25.226591303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9mbch,Uid:1fe5bf4e-6676-4682-848a-78e111d01421,Namespace:calico-system,Attempt:0,} returns sandbox id \"e16f3d19ba4aeb6a1c7c4ae35a197dccb8b7c2cc01512948a66eecb0d59aaef7\""
Mar 14 00:15:26.184014 kubelet[3148]: E0314 00:15:26.183599 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9wlx" podUID="8bfac06b-f0bb-4f88-a72c-e23a86afafd1"
Mar 14 00:15:26.538810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1648614828.mount: Deactivated successfully.
Mar 14 00:15:27.387245 containerd[1937]: time="2026-03-14T00:15:27.387174982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:27.388854 containerd[1937]: time="2026-03-14T00:15:27.388799374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174"
Mar 14 00:15:27.390151 containerd[1937]: time="2026-03-14T00:15:27.389990806Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:27.394445 containerd[1937]: time="2026-03-14T00:15:27.394376578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:27.396489 containerd[1937]: time="2026-03-14T00:15:27.396202594Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.249496299s"
Mar 14 00:15:27.396489 containerd[1937]: time="2026-03-14T00:15:27.396261562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\""
Mar 14 00:15:27.399184 containerd[1937]: time="2026-03-14T00:15:27.398523394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 14 00:15:27.436093 containerd[1937]: time="2026-03-14T00:15:27.436040350Z" level=info msg="CreateContainer within sandbox \"afaed7d0e583a0479ef38fbccea83f7b4bcc5a930985c32f6961a1ef126dcb79\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 14 00:15:27.457525 containerd[1937]: time="2026-03-14T00:15:27.457356526Z" level=info msg="CreateContainer within sandbox \"afaed7d0e583a0479ef38fbccea83f7b4bcc5a930985c32f6961a1ef126dcb79\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b1bf36913bae42c7c4f4170ac3c5d41cadd066a42f093a5131afd6b5ba2e0758\""
Mar 14 00:15:27.458203 containerd[1937]: time="2026-03-14T00:15:27.458158126Z" level=info msg="StartContainer for \"b1bf36913bae42c7c4f4170ac3c5d41cadd066a42f093a5131afd6b5ba2e0758\""
Mar 14 00:15:27.507368 systemd[1]: Started cri-containerd-b1bf36913bae42c7c4f4170ac3c5d41cadd066a42f093a5131afd6b5ba2e0758.scope - libcontainer container b1bf36913bae42c7c4f4170ac3c5d41cadd066a42f093a5131afd6b5ba2e0758.
Mar 14 00:15:27.581599 containerd[1937]: time="2026-03-14T00:15:27.581417135Z" level=info msg="StartContainer for \"b1bf36913bae42c7c4f4170ac3c5d41cadd066a42f093a5131afd6b5ba2e0758\" returns successfully"
Mar 14 00:15:28.185890 kubelet[3148]: E0314 00:15:28.184130 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9wlx" podUID="8bfac06b-f0bb-4f88-a72c-e23a86afafd1"
Mar 14 00:15:28.429286 kubelet[3148]: E0314 00:15:28.429201 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.429286 kubelet[3148]: W0314 00:15:28.429344 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.429286 kubelet[3148]: E0314 00:15:28.429377 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.433037 kubelet[3148]: E0314 00:15:28.432732 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.433674 kubelet[3148]: W0314 00:15:28.432899 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.433674 kubelet[3148]: E0314 00:15:28.433169 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.434783 kubelet[3148]: E0314 00:15:28.434560 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.434783 kubelet[3148]: W0314 00:15:28.434625 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.435497 kubelet[3148]: E0314 00:15:28.434655 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.437098 kubelet[3148]: E0314 00:15:28.436759 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.437098 kubelet[3148]: W0314 00:15:28.437018 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.437833 kubelet[3148]: E0314 00:15:28.437054 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.439097 kubelet[3148]: E0314 00:15:28.438843 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.439097 kubelet[3148]: W0314 00:15:28.438901 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.439097 kubelet[3148]: E0314 00:15:28.438973 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.441062 kubelet[3148]: E0314 00:15:28.440551 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.441062 kubelet[3148]: W0314 00:15:28.440632 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.441062 kubelet[3148]: E0314 00:15:28.440786 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.442794 kubelet[3148]: E0314 00:15:28.442387 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.442794 kubelet[3148]: W0314 00:15:28.442446 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.442794 kubelet[3148]: E0314 00:15:28.442481 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.445483 kubelet[3148]: E0314 00:15:28.444874 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.445483 kubelet[3148]: W0314 00:15:28.445207 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.445483 kubelet[3148]: E0314 00:15:28.445245 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.447653 kubelet[3148]: E0314 00:15:28.447330 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.447653 kubelet[3148]: W0314 00:15:28.447363 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.447653 kubelet[3148]: E0314 00:15:28.447397 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.449180 kubelet[3148]: E0314 00:15:28.448320 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.449180 kubelet[3148]: W0314 00:15:28.448350 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.449180 kubelet[3148]: E0314 00:15:28.448382 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.451169 kubelet[3148]: E0314 00:15:28.450656 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.451169 kubelet[3148]: W0314 00:15:28.450711 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.451169 kubelet[3148]: E0314 00:15:28.450745 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.455646 kubelet[3148]: E0314 00:15:28.455322 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.455646 kubelet[3148]: W0314 00:15:28.455361 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.455646 kubelet[3148]: E0314 00:15:28.455396 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.458522 kubelet[3148]: E0314 00:15:28.457497 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.458522 kubelet[3148]: W0314 00:15:28.457576 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.458522 kubelet[3148]: E0314 00:15:28.457610 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.460832 kubelet[3148]: I0314 00:15:28.460298 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69f8f7955d-8mvcg" podStartSLOduration=2.208257536 podStartE2EDuration="4.460271711s" podCreationTimestamp="2026-03-14 00:15:24 +0000 UTC" firstStartedPulling="2026-03-14 00:15:25.146187931 +0000 UTC m=+33.250591366" lastFinishedPulling="2026-03-14 00:15:27.398202106 +0000 UTC m=+35.502605541" observedRunningTime="2026-03-14 00:15:28.454047767 +0000 UTC m=+36.558451226" watchObservedRunningTime="2026-03-14 00:15:28.460271711 +0000 UTC m=+36.564675242"
Mar 14 00:15:28.462544 kubelet[3148]: E0314 00:15:28.461074 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.462544 kubelet[3148]: W0314 00:15:28.462434 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.462544 kubelet[3148]: E0314 00:15:28.462470 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.464302 kubelet[3148]: E0314 00:15:28.464109 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.464302 kubelet[3148]: W0314 00:15:28.464143 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.464302 kubelet[3148]: E0314 00:15:28.464176 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.519122 kubelet[3148]: E0314 00:15:28.519061 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.520128 kubelet[3148]: W0314 00:15:28.519507 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.520128 kubelet[3148]: E0314 00:15:28.519573 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.522259 kubelet[3148]: E0314 00:15:28.522108 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.522259 kubelet[3148]: W0314 00:15:28.522171 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.522259 kubelet[3148]: E0314 00:15:28.522205 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.524930 kubelet[3148]: E0314 00:15:28.524883 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.524930 kubelet[3148]: W0314 00:15:28.524922 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.525350 kubelet[3148]: E0314 00:15:28.525024 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.527023 kubelet[3148]: E0314 00:15:28.526719 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.527023 kubelet[3148]: W0314 00:15:28.526754 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.527023 kubelet[3148]: E0314 00:15:28.526784 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.529282 kubelet[3148]: E0314 00:15:28.528156 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.529282 kubelet[3148]: W0314 00:15:28.528816 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.529282 kubelet[3148]: E0314 00:15:28.528856 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.532696 kubelet[3148]: E0314 00:15:28.532534 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.532696 kubelet[3148]: W0314 00:15:28.532575 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.532696 kubelet[3148]: E0314 00:15:28.532627 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.533593 kubelet[3148]: E0314 00:15:28.533134 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.533593 kubelet[3148]: W0314 00:15:28.533155 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.533593 kubelet[3148]: E0314 00:15:28.533179 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.535292 kubelet[3148]: E0314 00:15:28.534821 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.535487 kubelet[3148]: W0314 00:15:28.535390 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.535487 kubelet[3148]: E0314 00:15:28.535434 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.539405 kubelet[3148]: E0314 00:15:28.539093 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.539405 kubelet[3148]: W0314 00:15:28.539160 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.539405 kubelet[3148]: E0314 00:15:28.539198 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.541131 kubelet[3148]: E0314 00:15:28.540683 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.541131 kubelet[3148]: W0314 00:15:28.540730 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.541131 kubelet[3148]: E0314 00:15:28.540764 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.543360 kubelet[3148]: E0314 00:15:28.542916 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.543360 kubelet[3148]: W0314 00:15:28.543356 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.543599 kubelet[3148]: E0314 00:15:28.543397 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 14 00:15:28.546822 kubelet[3148]: E0314 00:15:28.545450 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 14 00:15:28.546822 kubelet[3148]: W0314 00:15:28.545513 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 14 00:15:28.546822 kubelet[3148]: E0314 00:15:28.545547 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:28.547237 kubelet[3148]: E0314 00:15:28.546993 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:28.547237 kubelet[3148]: W0314 00:15:28.547072 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:28.547237 kubelet[3148]: E0314 00:15:28.547102 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:28.548261 kubelet[3148]: E0314 00:15:28.548191 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:28.548261 kubelet[3148]: W0314 00:15:28.548251 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:28.549128 kubelet[3148]: E0314 00:15:28.548289 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:28.549249 kubelet[3148]: E0314 00:15:28.549173 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:28.549249 kubelet[3148]: W0314 00:15:28.549199 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:28.549249 kubelet[3148]: E0314 00:15:28.549229 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:28.549838 kubelet[3148]: E0314 00:15:28.549645 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:28.549838 kubelet[3148]: W0314 00:15:28.549708 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:28.549838 kubelet[3148]: E0314 00:15:28.549761 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:28.550550 kubelet[3148]: E0314 00:15:28.550191 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:28.550550 kubelet[3148]: W0314 00:15:28.550211 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:28.550550 kubelet[3148]: E0314 00:15:28.550320 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:15:28.551689 kubelet[3148]: E0314 00:15:28.551435 3148 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:15:28.551689 kubelet[3148]: W0314 00:15:28.551470 3148 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:15:28.551689 kubelet[3148]: E0314 00:15:28.551499 3148 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:15:28.630962 containerd[1937]: time="2026-03-14T00:15:28.630857988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:28.632976 containerd[1937]: time="2026-03-14T00:15:28.632729208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 14 00:15:28.634224 containerd[1937]: time="2026-03-14T00:15:28.634143600Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:28.638985 containerd[1937]: time="2026-03-14T00:15:28.638359896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:28.640691 containerd[1937]: time="2026-03-14T00:15:28.640577712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.24199241s" Mar 14 00:15:28.640691 containerd[1937]: time="2026-03-14T00:15:28.640671036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 14 00:15:28.649210 containerd[1937]: time="2026-03-14T00:15:28.648538968Z" level=info msg="CreateContainer within sandbox \"e16f3d19ba4aeb6a1c7c4ae35a197dccb8b7c2cc01512948a66eecb0d59aaef7\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 14 00:15:28.669367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount577654175.mount: Deactivated successfully. Mar 14 00:15:28.676828 containerd[1937]: time="2026-03-14T00:15:28.676766004Z" level=info msg="CreateContainer within sandbox \"e16f3d19ba4aeb6a1c7c4ae35a197dccb8b7c2cc01512948a66eecb0d59aaef7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a519da52f62339a7d9218b9953440bb31f07721f105d79154ad9c3f6bdd176e4\"" Mar 14 00:15:28.679702 containerd[1937]: time="2026-03-14T00:15:28.678522264Z" level=info msg="StartContainer for \"a519da52f62339a7d9218b9953440bb31f07721f105d79154ad9c3f6bdd176e4\"" Mar 14 00:15:28.739393 systemd[1]: run-containerd-runc-k8s.io-a519da52f62339a7d9218b9953440bb31f07721f105d79154ad9c3f6bdd176e4-runc.Ptxa4k.mount: Deactivated successfully. Mar 14 00:15:28.750248 systemd[1]: Started cri-containerd-a519da52f62339a7d9218b9953440bb31f07721f105d79154ad9c3f6bdd176e4.scope - libcontainer container a519da52f62339a7d9218b9953440bb31f07721f105d79154ad9c3f6bdd176e4. Mar 14 00:15:28.800453 containerd[1937]: time="2026-03-14T00:15:28.800311813Z" level=info msg="StartContainer for \"a519da52f62339a7d9218b9953440bb31f07721f105d79154ad9c3f6bdd176e4\" returns successfully" Mar 14 00:15:28.831408 systemd[1]: cri-containerd-a519da52f62339a7d9218b9953440bb31f07721f105d79154ad9c3f6bdd176e4.scope: Deactivated successfully. 
Mar 14 00:15:29.311886 containerd[1937]: time="2026-03-14T00:15:29.311486903Z" level=info msg="shim disconnected" id=a519da52f62339a7d9218b9953440bb31f07721f105d79154ad9c3f6bdd176e4 namespace=k8s.io Mar 14 00:15:29.311886 containerd[1937]: time="2026-03-14T00:15:29.311557871Z" level=warning msg="cleaning up after shim disconnected" id=a519da52f62339a7d9218b9953440bb31f07721f105d79154ad9c3f6bdd176e4 namespace=k8s.io Mar 14 00:15:29.311886 containerd[1937]: time="2026-03-14T00:15:29.311578283Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:29.437312 containerd[1937]: time="2026-03-14T00:15:29.437091672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 14 00:15:29.665347 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a519da52f62339a7d9218b9953440bb31f07721f105d79154ad9c3f6bdd176e4-rootfs.mount: Deactivated successfully. Mar 14 00:15:30.185007 kubelet[3148]: E0314 00:15:30.183525 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9wlx" podUID="8bfac06b-f0bb-4f88-a72c-e23a86afafd1" Mar 14 00:15:32.185666 kubelet[3148]: E0314 00:15:32.185233 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9wlx" podUID="8bfac06b-f0bb-4f88-a72c-e23a86afafd1" Mar 14 00:15:34.190311 kubelet[3148]: E0314 00:15:34.189435 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9wlx" 
podUID="8bfac06b-f0bb-4f88-a72c-e23a86afafd1" Mar 14 00:15:35.713751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount717639918.mount: Deactivated successfully. Mar 14 00:15:35.782415 containerd[1937]: time="2026-03-14T00:15:35.782109212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:35.785517 containerd[1937]: time="2026-03-14T00:15:35.785415224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 14 00:15:35.787757 containerd[1937]: time="2026-03-14T00:15:35.787678292Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:35.792768 containerd[1937]: time="2026-03-14T00:15:35.792718664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:35.794374 containerd[1937]: time="2026-03-14T00:15:35.794130308Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.356981288s" Mar 14 00:15:35.794374 containerd[1937]: time="2026-03-14T00:15:35.794191520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 14 00:15:35.807070 containerd[1937]: time="2026-03-14T00:15:35.807004112Z" level=info msg="CreateContainer within sandbox 
\"e16f3d19ba4aeb6a1c7c4ae35a197dccb8b7c2cc01512948a66eecb0d59aaef7\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 14 00:15:35.841377 containerd[1937]: time="2026-03-14T00:15:35.841264508Z" level=info msg="CreateContainer within sandbox \"e16f3d19ba4aeb6a1c7c4ae35a197dccb8b7c2cc01512948a66eecb0d59aaef7\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"a28fa8de51bf4c9e09ff60bc3200326efcdf9090f84d64041aef7ad880c4fc4d\"" Mar 14 00:15:35.845861 containerd[1937]: time="2026-03-14T00:15:35.845117036Z" level=info msg="StartContainer for \"a28fa8de51bf4c9e09ff60bc3200326efcdf9090f84d64041aef7ad880c4fc4d\"" Mar 14 00:15:35.913288 systemd[1]: Started cri-containerd-a28fa8de51bf4c9e09ff60bc3200326efcdf9090f84d64041aef7ad880c4fc4d.scope - libcontainer container a28fa8de51bf4c9e09ff60bc3200326efcdf9090f84d64041aef7ad880c4fc4d. Mar 14 00:15:35.976067 containerd[1937]: time="2026-03-14T00:15:35.976004337Z" level=info msg="StartContainer for \"a28fa8de51bf4c9e09ff60bc3200326efcdf9090f84d64041aef7ad880c4fc4d\" returns successfully" Mar 14 00:15:36.170869 systemd[1]: cri-containerd-a28fa8de51bf4c9e09ff60bc3200326efcdf9090f84d64041aef7ad880c4fc4d.scope: Deactivated successfully. 
Mar 14 00:15:36.187013 kubelet[3148]: E0314 00:15:36.186578 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9wlx" podUID="8bfac06b-f0bb-4f88-a72c-e23a86afafd1" Mar 14 00:15:36.394492 containerd[1937]: time="2026-03-14T00:15:36.394053139Z" level=info msg="shim disconnected" id=a28fa8de51bf4c9e09ff60bc3200326efcdf9090f84d64041aef7ad880c4fc4d namespace=k8s.io Mar 14 00:15:36.394492 containerd[1937]: time="2026-03-14T00:15:36.394130083Z" level=warning msg="cleaning up after shim disconnected" id=a28fa8de51bf4c9e09ff60bc3200326efcdf9090f84d64041aef7ad880c4fc4d namespace=k8s.io Mar 14 00:15:36.394492 containerd[1937]: time="2026-03-14T00:15:36.394153027Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:36.460078 containerd[1937]: time="2026-03-14T00:15:36.459630751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 14 00:15:36.714245 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a28fa8de51bf4c9e09ff60bc3200326efcdf9090f84d64041aef7ad880c4fc4d-rootfs.mount: Deactivated successfully. 
Mar 14 00:15:38.184647 kubelet[3148]: E0314 00:15:38.184023 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9wlx" podUID="8bfac06b-f0bb-4f88-a72c-e23a86afafd1" Mar 14 00:15:39.678641 containerd[1937]: time="2026-03-14T00:15:39.678554903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:39.680485 containerd[1937]: time="2026-03-14T00:15:39.680338283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 14 00:15:39.682056 containerd[1937]: time="2026-03-14T00:15:39.681550451Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:39.687305 containerd[1937]: time="2026-03-14T00:15:39.687221999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:39.689984 containerd[1937]: time="2026-03-14T00:15:39.688760267Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 3.229064752s" Mar 14 00:15:39.689984 containerd[1937]: time="2026-03-14T00:15:39.688819967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 14 00:15:39.697008 containerd[1937]: time="2026-03-14T00:15:39.696898931Z" level=info msg="CreateContainer within sandbox \"e16f3d19ba4aeb6a1c7c4ae35a197dccb8b7c2cc01512948a66eecb0d59aaef7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 14 00:15:39.719903 containerd[1937]: time="2026-03-14T00:15:39.719846879Z" level=info msg="CreateContainer within sandbox \"e16f3d19ba4aeb6a1c7c4ae35a197dccb8b7c2cc01512948a66eecb0d59aaef7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8cc4778e3c5c563245b3e827cb09325dab06fc0d123ac7e40c55932a68611015\"" Mar 14 00:15:39.721594 containerd[1937]: time="2026-03-14T00:15:39.721395107Z" level=info msg="StartContainer for \"8cc4778e3c5c563245b3e827cb09325dab06fc0d123ac7e40c55932a68611015\"" Mar 14 00:15:39.797272 systemd[1]: Started cri-containerd-8cc4778e3c5c563245b3e827cb09325dab06fc0d123ac7e40c55932a68611015.scope - libcontainer container 8cc4778e3c5c563245b3e827cb09325dab06fc0d123ac7e40c55932a68611015. 
Mar 14 00:15:39.856164 containerd[1937]: time="2026-03-14T00:15:39.855878856Z" level=info msg="StartContainer for \"8cc4778e3c5c563245b3e827cb09325dab06fc0d123ac7e40c55932a68611015\" returns successfully" Mar 14 00:15:40.187890 kubelet[3148]: E0314 00:15:40.186322 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9wlx" podUID="8bfac06b-f0bb-4f88-a72c-e23a86afafd1" Mar 14 00:15:41.785194 containerd[1937]: time="2026-03-14T00:15:41.785119429Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:15:41.790849 systemd[1]: cri-containerd-8cc4778e3c5c563245b3e827cb09325dab06fc0d123ac7e40c55932a68611015.scope: Deactivated successfully. Mar 14 00:15:41.791726 systemd[1]: cri-containerd-8cc4778e3c5c563245b3e827cb09325dab06fc0d123ac7e40c55932a68611015.scope: Consumed 1.060s CPU time. Mar 14 00:15:41.831334 kubelet[3148]: I0314 00:15:41.830735 3148 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 14 00:15:41.844320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cc4778e3c5c563245b3e827cb09325dab06fc0d123ac7e40c55932a68611015-rootfs.mount: Deactivated successfully. 
Mar 14 00:15:41.848927 containerd[1937]: time="2026-03-14T00:15:41.848693942Z" level=info msg="shim disconnected" id=8cc4778e3c5c563245b3e827cb09325dab06fc0d123ac7e40c55932a68611015 namespace=k8s.io Mar 14 00:15:41.849199 containerd[1937]: time="2026-03-14T00:15:41.848926838Z" level=warning msg="cleaning up after shim disconnected" id=8cc4778e3c5c563245b3e827cb09325dab06fc0d123ac7e40c55932a68611015 namespace=k8s.io Mar 14 00:15:41.849199 containerd[1937]: time="2026-03-14T00:15:41.848976158Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:15:41.948403 systemd[1]: Created slice kubepods-burstable-pod8127331c_4b50_47c1_bbe1_89afe1cea98e.slice - libcontainer container kubepods-burstable-pod8127331c_4b50_47c1_bbe1_89afe1cea98e.slice. Mar 14 00:15:41.981477 systemd[1]: Created slice kubepods-besteffort-pod7629e1e9_e956_4dbb_9bf3_396748a97bfb.slice - libcontainer container kubepods-besteffort-pod7629e1e9_e956_4dbb_9bf3_396748a97bfb.slice. Mar 14 00:15:41.996543 systemd[1]: Created slice kubepods-besteffort-podde037f31_e304_4774_8e09_1ec32c3e29bf.slice - libcontainer container kubepods-besteffort-podde037f31_e304_4774_8e09_1ec32c3e29bf.slice. Mar 14 00:15:42.019489 systemd[1]: Created slice kubepods-burstable-pod7983ca7e_7b32_4d4f_acd3_e05012673e7d.slice - libcontainer container kubepods-burstable-pod7983ca7e_7b32_4d4f_acd3_e05012673e7d.slice. Mar 14 00:15:42.038058 systemd[1]: Created slice kubepods-besteffort-pod6669e0b2_65fc_448c_87e3_c79fbf1e2867.slice - libcontainer container kubepods-besteffort-pod6669e0b2_65fc_448c_87e3_c79fbf1e2867.slice. 
Mar 14 00:15:42.050007 kubelet[3148]: I0314 00:15:42.048764 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6669e0b2-65fc-448c-87e3-c79fbf1e2867-calico-apiserver-certs\") pod \"calico-apiserver-5cc67d498c-gnbqp\" (UID: \"6669e0b2-65fc-448c-87e3-c79fbf1e2867\") " pod="calico-system/calico-apiserver-5cc67d498c-gnbqp" Mar 14 00:15:42.050007 kubelet[3148]: I0314 00:15:42.048992 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7922\" (UniqueName: \"kubernetes.io/projected/7629e1e9-e956-4dbb-9bf3-396748a97bfb-kube-api-access-c7922\") pod \"calico-kube-controllers-75d5fd567b-lbrvk\" (UID: \"7629e1e9-e956-4dbb-9bf3-396748a97bfb\") " pod="calico-system/calico-kube-controllers-75d5fd567b-lbrvk" Mar 14 00:15:42.050007 kubelet[3148]: I0314 00:15:42.049049 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d7880c76-182c-44f3-99e8-6a915d275ae2-whisker-backend-key-pair\") pod \"whisker-544d6dd76d-4rcl5\" (UID: \"d7880c76-182c-44f3-99e8-6a915d275ae2\") " pod="calico-system/whisker-544d6dd76d-4rcl5" Mar 14 00:15:42.050007 kubelet[3148]: I0314 00:15:42.049097 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sltfm\" (UniqueName: \"kubernetes.io/projected/8127331c-4b50-47c1-bbe1-89afe1cea98e-kube-api-access-sltfm\") pod \"coredns-674b8bbfcf-nsj8c\" (UID: \"8127331c-4b50-47c1-bbe1-89afe1cea98e\") " pod="kube-system/coredns-674b8bbfcf-nsj8c" Mar 14 00:15:42.050007 kubelet[3148]: I0314 00:15:42.049138 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt2qs\" (UniqueName: \"kubernetes.io/projected/7983ca7e-7b32-4d4f-acd3-e05012673e7d-kube-api-access-nt2qs\") 
pod \"coredns-674b8bbfcf-jkzng\" (UID: \"7983ca7e-7b32-4d4f-acd3-e05012673e7d\") " pod="kube-system/coredns-674b8bbfcf-jkzng" Mar 14 00:15:42.051550 kubelet[3148]: I0314 00:15:42.051009 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/de037f31-e304-4774-8e09-1ec32c3e29bf-calico-apiserver-certs\") pod \"calico-apiserver-5cc67d498c-jsdnf\" (UID: \"de037f31-e304-4774-8e09-1ec32c3e29bf\") " pod="calico-system/calico-apiserver-5cc67d498c-jsdnf" Mar 14 00:15:42.051550 kubelet[3148]: I0314 00:15:42.051088 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d7880c76-182c-44f3-99e8-6a915d275ae2-nginx-config\") pod \"whisker-544d6dd76d-4rcl5\" (UID: \"d7880c76-182c-44f3-99e8-6a915d275ae2\") " pod="calico-system/whisker-544d6dd76d-4rcl5" Mar 14 00:15:42.051550 kubelet[3148]: I0314 00:15:42.051128 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z2r8\" (UniqueName: \"kubernetes.io/projected/d7880c76-182c-44f3-99e8-6a915d275ae2-kube-api-access-6z2r8\") pod \"whisker-544d6dd76d-4rcl5\" (UID: \"d7880c76-182c-44f3-99e8-6a915d275ae2\") " pod="calico-system/whisker-544d6dd76d-4rcl5" Mar 14 00:15:42.051550 kubelet[3148]: I0314 00:15:42.051177 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7b232062-acf6-4e50-a0e3-33b7e15835a4-config\") pod \"goldmane-5b85766d88-lxpml\" (UID: \"7b232062-acf6-4e50-a0e3-33b7e15835a4\") " pod="calico-system/goldmane-5b85766d88-lxpml" Mar 14 00:15:42.051550 kubelet[3148]: I0314 00:15:42.051214 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42gzn\" (UniqueName: 
\"kubernetes.io/projected/7b232062-acf6-4e50-a0e3-33b7e15835a4-kube-api-access-42gzn\") pod \"goldmane-5b85766d88-lxpml\" (UID: \"7b232062-acf6-4e50-a0e3-33b7e15835a4\") " pod="calico-system/goldmane-5b85766d88-lxpml" Mar 14 00:15:42.051895 kubelet[3148]: I0314 00:15:42.051269 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzpw4\" (UniqueName: \"kubernetes.io/projected/6669e0b2-65fc-448c-87e3-c79fbf1e2867-kube-api-access-qzpw4\") pod \"calico-apiserver-5cc67d498c-gnbqp\" (UID: \"6669e0b2-65fc-448c-87e3-c79fbf1e2867\") " pod="calico-system/calico-apiserver-5cc67d498c-gnbqp" Mar 14 00:15:42.051895 kubelet[3148]: I0314 00:15:42.051332 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7629e1e9-e956-4dbb-9bf3-396748a97bfb-tigera-ca-bundle\") pod \"calico-kube-controllers-75d5fd567b-lbrvk\" (UID: \"7629e1e9-e956-4dbb-9bf3-396748a97bfb\") " pod="calico-system/calico-kube-controllers-75d5fd567b-lbrvk" Mar 14 00:15:42.053991 kubelet[3148]: I0314 00:15:42.053378 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8127331c-4b50-47c1-bbe1-89afe1cea98e-config-volume\") pod \"coredns-674b8bbfcf-nsj8c\" (UID: \"8127331c-4b50-47c1-bbe1-89afe1cea98e\") " pod="kube-system/coredns-674b8bbfcf-nsj8c" Mar 14 00:15:42.053991 kubelet[3148]: I0314 00:15:42.053442 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7983ca7e-7b32-4d4f-acd3-e05012673e7d-config-volume\") pod \"coredns-674b8bbfcf-jkzng\" (UID: \"7983ca7e-7b32-4d4f-acd3-e05012673e7d\") " pod="kube-system/coredns-674b8bbfcf-jkzng" Mar 14 00:15:42.053991 kubelet[3148]: I0314 00:15:42.053483 3148 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbm8f\" (UniqueName: \"kubernetes.io/projected/de037f31-e304-4774-8e09-1ec32c3e29bf-kube-api-access-tbm8f\") pod \"calico-apiserver-5cc67d498c-jsdnf\" (UID: \"de037f31-e304-4774-8e09-1ec32c3e29bf\") " pod="calico-system/calico-apiserver-5cc67d498c-jsdnf" Mar 14 00:15:42.053991 kubelet[3148]: I0314 00:15:42.053577 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7b232062-acf6-4e50-a0e3-33b7e15835a4-goldmane-key-pair\") pod \"goldmane-5b85766d88-lxpml\" (UID: \"7b232062-acf6-4e50-a0e3-33b7e15835a4\") " pod="calico-system/goldmane-5b85766d88-lxpml" Mar 14 00:15:42.053991 kubelet[3148]: I0314 00:15:42.053625 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7880c76-182c-44f3-99e8-6a915d275ae2-whisker-ca-bundle\") pod \"whisker-544d6dd76d-4rcl5\" (UID: \"d7880c76-182c-44f3-99e8-6a915d275ae2\") " pod="calico-system/whisker-544d6dd76d-4rcl5" Mar 14 00:15:42.054398 kubelet[3148]: I0314 00:15:42.053679 3148 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b232062-acf6-4e50-a0e3-33b7e15835a4-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-lxpml\" (UID: \"7b232062-acf6-4e50-a0e3-33b7e15835a4\") " pod="calico-system/goldmane-5b85766d88-lxpml" Mar 14 00:15:42.060532 systemd[1]: Created slice kubepods-besteffort-podd7880c76_182c_44f3_99e8_6a915d275ae2.slice - libcontainer container kubepods-besteffort-podd7880c76_182c_44f3_99e8_6a915d275ae2.slice. Mar 14 00:15:42.077594 systemd[1]: Created slice kubepods-besteffort-pod7b232062_acf6_4e50_a0e3_33b7e15835a4.slice - libcontainer container kubepods-besteffort-pod7b232062_acf6_4e50_a0e3_33b7e15835a4.slice. 
Mar 14 00:15:42.324813 systemd[1]: Created slice kubepods-besteffort-pod8bfac06b_f0bb_4f88_a72c_e23a86afafd1.slice - libcontainer container kubepods-besteffort-pod8bfac06b_f0bb_4f88_a72c_e23a86afafd1.slice. Mar 14 00:15:42.337709 containerd[1937]: time="2026-03-14T00:15:42.337101564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkzng,Uid:7983ca7e-7b32-4d4f-acd3-e05012673e7d,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:42.340395 containerd[1937]: time="2026-03-14T00:15:42.340238064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9wlx,Uid:8bfac06b-f0bb-4f88-a72c-e23a86afafd1,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:42.350657 containerd[1937]: time="2026-03-14T00:15:42.350564064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc67d498c-gnbqp,Uid:6669e0b2-65fc-448c-87e3-c79fbf1e2867,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:42.378908 containerd[1937]: time="2026-03-14T00:15:42.378781716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-544d6dd76d-4rcl5,Uid:d7880c76-182c-44f3-99e8-6a915d275ae2,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:42.385705 containerd[1937]: time="2026-03-14T00:15:42.385624560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-lxpml,Uid:7b232062-acf6-4e50-a0e3-33b7e15835a4,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:42.553875 containerd[1937]: time="2026-03-14T00:15:42.551310109Z" level=info msg="CreateContainer within sandbox \"e16f3d19ba4aeb6a1c7c4ae35a197dccb8b7c2cc01512948a66eecb0d59aaef7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 14 00:15:42.567322 containerd[1937]: time="2026-03-14T00:15:42.567136177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nsj8c,Uid:8127331c-4b50-47c1-bbe1-89afe1cea98e,Namespace:kube-system,Attempt:0,}" Mar 14 00:15:42.590613 containerd[1937]: time="2026-03-14T00:15:42.590048137Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d5fd567b-lbrvk,Uid:7629e1e9-e956-4dbb-9bf3-396748a97bfb,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:42.605153 containerd[1937]: time="2026-03-14T00:15:42.605096617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc67d498c-jsdnf,Uid:de037f31-e304-4774-8e09-1ec32c3e29bf,Namespace:calico-system,Attempt:0,}" Mar 14 00:15:42.736716 containerd[1937]: time="2026-03-14T00:15:42.733216046Z" level=info msg="CreateContainer within sandbox \"e16f3d19ba4aeb6a1c7c4ae35a197dccb8b7c2cc01512948a66eecb0d59aaef7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"af2fde5709097b44e30659b731daf1c92cc9e267fb4493be616f9eb48adc9937\"" Mar 14 00:15:42.746640 containerd[1937]: time="2026-03-14T00:15:42.746573738Z" level=info msg="StartContainer for \"af2fde5709097b44e30659b731daf1c92cc9e267fb4493be616f9eb48adc9937\"" Mar 14 00:15:43.019213 containerd[1937]: time="2026-03-14T00:15:43.019007760Z" level=error msg="Failed to destroy network for sandbox \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.031876 containerd[1937]: time="2026-03-14T00:15:43.030098688Z" level=error msg="encountered an error cleaning up failed sandbox \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.031876 containerd[1937]: time="2026-03-14T00:15:43.030199392Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-s9wlx,Uid:8bfac06b-f0bb-4f88-a72c-e23a86afafd1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.032114 kubelet[3148]: E0314 00:15:43.030504 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.032114 kubelet[3148]: E0314 00:15:43.030589 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s9wlx" Mar 14 00:15:43.032114 kubelet[3148]: E0314 00:15:43.030625 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s9wlx" Mar 14 00:15:43.034468 kubelet[3148]: E0314 00:15:43.030699 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-s9wlx_calico-system(8bfac06b-f0bb-4f88-a72c-e23a86afafd1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s9wlx_calico-system(8bfac06b-f0bb-4f88-a72c-e23a86afafd1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s9wlx" podUID="8bfac06b-f0bb-4f88-a72c-e23a86afafd1" Mar 14 00:15:43.033281 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e-shm.mount: Deactivated successfully. Mar 14 00:15:43.078341 systemd[1]: Started cri-containerd-af2fde5709097b44e30659b731daf1c92cc9e267fb4493be616f9eb48adc9937.scope - libcontainer container af2fde5709097b44e30659b731daf1c92cc9e267fb4493be616f9eb48adc9937. 
Mar 14 00:15:43.102341 containerd[1937]: time="2026-03-14T00:15:43.102142152Z" level=error msg="Failed to destroy network for sandbox \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.105195 containerd[1937]: time="2026-03-14T00:15:43.104909472Z" level=error msg="encountered an error cleaning up failed sandbox \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.105195 containerd[1937]: time="2026-03-14T00:15:43.105039888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-lxpml,Uid:7b232062-acf6-4e50-a0e3-33b7e15835a4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.107160 kubelet[3148]: E0314 00:15:43.105711 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.107160 kubelet[3148]: E0314 00:15:43.105790 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-lxpml" Mar 14 00:15:43.107160 kubelet[3148]: E0314 00:15:43.105829 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-lxpml" Mar 14 00:15:43.107637 kubelet[3148]: E0314 00:15:43.105904 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-lxpml_calico-system(7b232062-acf6-4e50-a0e3-33b7e15835a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-lxpml_calico-system(7b232062-acf6-4e50-a0e3-33b7e15835a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-lxpml" podUID="7b232062-acf6-4e50-a0e3-33b7e15835a4" Mar 14 00:15:43.116627 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf-shm.mount: Deactivated successfully. 
Mar 14 00:15:43.120837 containerd[1937]: time="2026-03-14T00:15:43.120553428Z" level=error msg="Failed to destroy network for sandbox \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.120837 containerd[1937]: time="2026-03-14T00:15:43.120686232Z" level=error msg="Failed to destroy network for sandbox \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.125072 containerd[1937]: time="2026-03-14T00:15:43.124747296Z" level=error msg="encountered an error cleaning up failed sandbox \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.125072 containerd[1937]: time="2026-03-14T00:15:43.124854108Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkzng,Uid:7983ca7e-7b32-4d4f-acd3-e05012673e7d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.126875 containerd[1937]: time="2026-03-14T00:15:43.126199740Z" level=error msg="encountered an error cleaning up failed sandbox \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.126875 containerd[1937]: time="2026-03-14T00:15:43.126360900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc67d498c-gnbqp,Uid:6669e0b2-65fc-448c-87e3-c79fbf1e2867,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.127264 kubelet[3148]: E0314 00:15:43.126747 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.127264 kubelet[3148]: E0314 00:15:43.126761 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.127264 kubelet[3148]: E0314 00:15:43.126895 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5cc67d498c-gnbqp" Mar 14 00:15:43.127264 kubelet[3148]: E0314 00:15:43.126829 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jkzng" Mar 14 00:15:43.127502 kubelet[3148]: E0314 00:15:43.126984 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jkzng" Mar 14 00:15:43.129449 kubelet[3148]: E0314 00:15:43.127691 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jkzng_kube-system(7983ca7e-7b32-4d4f-acd3-e05012673e7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jkzng_kube-system(7983ca7e-7b32-4d4f-acd3-e05012673e7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jkzng" podUID="7983ca7e-7b32-4d4f-acd3-e05012673e7d" Mar 14 00:15:43.129449 kubelet[3148]: E0314 00:15:43.127002 3148 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5cc67d498c-gnbqp" Mar 14 00:15:43.129449 kubelet[3148]: E0314 00:15:43.128222 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cc67d498c-gnbqp_calico-system(6669e0b2-65fc-448c-87e3-c79fbf1e2867)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cc67d498c-gnbqp_calico-system(6669e0b2-65fc-448c-87e3-c79fbf1e2867)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5cc67d498c-gnbqp" podUID="6669e0b2-65fc-448c-87e3-c79fbf1e2867" Mar 14 00:15:43.158888 containerd[1937]: time="2026-03-14T00:15:43.158594868Z" level=error msg="Failed to destroy network for sandbox \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.165626 containerd[1937]: time="2026-03-14T00:15:43.165367836Z" level=error msg="encountered an error cleaning up failed sandbox \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.165626 containerd[1937]: time="2026-03-14T00:15:43.165464496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-544d6dd76d-4rcl5,Uid:d7880c76-182c-44f3-99e8-6a915d275ae2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.166469 kubelet[3148]: E0314 00:15:43.165985 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.166469 kubelet[3148]: E0314 00:15:43.166187 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-544d6dd76d-4rcl5" Mar 14 00:15:43.166469 kubelet[3148]: E0314 00:15:43.166255 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/whisker-544d6dd76d-4rcl5" Mar 14 00:15:43.166739 kubelet[3148]: E0314 00:15:43.166597 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-544d6dd76d-4rcl5_calico-system(d7880c76-182c-44f3-99e8-6a915d275ae2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-544d6dd76d-4rcl5_calico-system(d7880c76-182c-44f3-99e8-6a915d275ae2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-544d6dd76d-4rcl5" podUID="d7880c76-182c-44f3-99e8-6a915d275ae2" Mar 14 00:15:43.259225 containerd[1937]: time="2026-03-14T00:15:43.259160245Z" level=error msg="Failed to destroy network for sandbox \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.260366 containerd[1937]: time="2026-03-14T00:15:43.260046697Z" level=error msg="encountered an error cleaning up failed sandbox \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.261161 containerd[1937]: time="2026-03-14T00:15:43.260687605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc67d498c-jsdnf,Uid:de037f31-e304-4774-8e09-1ec32c3e29bf,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.262593 kubelet[3148]: E0314 00:15:43.262522 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.262902 kubelet[3148]: E0314 00:15:43.262611 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5cc67d498c-jsdnf" Mar 14 00:15:43.262902 kubelet[3148]: E0314 00:15:43.262651 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5cc67d498c-jsdnf" Mar 14 00:15:43.262902 kubelet[3148]: E0314 00:15:43.262737 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5cc67d498c-jsdnf_calico-system(de037f31-e304-4774-8e09-1ec32c3e29bf)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5cc67d498c-jsdnf_calico-system(de037f31-e304-4774-8e09-1ec32c3e29bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5cc67d498c-jsdnf" podUID="de037f31-e304-4774-8e09-1ec32c3e29bf" Mar 14 00:15:43.266109 containerd[1937]: time="2026-03-14T00:15:43.266047861Z" level=error msg="Failed to destroy network for sandbox \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.266915 containerd[1937]: time="2026-03-14T00:15:43.266866033Z" level=error msg="encountered an error cleaning up failed sandbox \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.267347 containerd[1937]: time="2026-03-14T00:15:43.267034153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nsj8c,Uid:8127331c-4b50-47c1-bbe1-89afe1cea98e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.268654 kubelet[3148]: 
E0314 00:15:43.267647 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.268654 kubelet[3148]: E0314 00:15:43.267726 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nsj8c" Mar 14 00:15:43.268654 kubelet[3148]: E0314 00:15:43.267762 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nsj8c" Mar 14 00:15:43.268901 kubelet[3148]: E0314 00:15:43.267841 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nsj8c_kube-system(8127331c-4b50-47c1-bbe1-89afe1cea98e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nsj8c_kube-system(8127331c-4b50-47c1-bbe1-89afe1cea98e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nsj8c" podUID="8127331c-4b50-47c1-bbe1-89afe1cea98e" Mar 14 00:15:43.286020 containerd[1937]: time="2026-03-14T00:15:43.284026801Z" level=info msg="StartContainer for \"af2fde5709097b44e30659b731daf1c92cc9e267fb4493be616f9eb48adc9937\" returns successfully" Mar 14 00:15:43.319108 containerd[1937]: time="2026-03-14T00:15:43.319012393Z" level=error msg="Failed to destroy network for sandbox \"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.319627 containerd[1937]: time="2026-03-14T00:15:43.319576753Z" level=error msg="encountered an error cleaning up failed sandbox \"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.319741 containerd[1937]: time="2026-03-14T00:15:43.319669453Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d5fd567b-lbrvk,Uid:7629e1e9-e956-4dbb-9bf3-396748a97bfb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.322269 kubelet[3148]: E0314 00:15:43.320004 3148 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:15:43.322269 kubelet[3148]: E0314 00:15:43.320080 3148 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75d5fd567b-lbrvk" Mar 14 00:15:43.322269 kubelet[3148]: E0314 00:15:43.320121 3148 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75d5fd567b-lbrvk" Mar 14 00:15:43.322499 kubelet[3148]: E0314 00:15:43.320210 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75d5fd567b-lbrvk_calico-system(7629e1e9-e956-4dbb-9bf3-396748a97bfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75d5fd567b-lbrvk_calico-system(7629e1e9-e956-4dbb-9bf3-396748a97bfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75d5fd567b-lbrvk" podUID="7629e1e9-e956-4dbb-9bf3-396748a97bfb" Mar 14 00:15:43.501130 kubelet[3148]: I0314 00:15:43.497779 3148 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Mar 14 00:15:43.501306 containerd[1937]: time="2026-03-14T00:15:43.500327654Z" level=info msg="StopPodSandbox for \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\"" Mar 14 00:15:43.501306 containerd[1937]: time="2026-03-14T00:15:43.500647766Z" level=info msg="Ensure that sandbox 785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e in task-service has been cleanup successfully" Mar 14 00:15:43.511599 kubelet[3148]: I0314 00:15:43.511511 3148 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Mar 14 00:15:43.514558 containerd[1937]: time="2026-03-14T00:15:43.514495166Z" level=info msg="StopPodSandbox for \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\"" Mar 14 00:15:43.514825 containerd[1937]: time="2026-03-14T00:15:43.514781414Z" level=info msg="Ensure that sandbox f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba in task-service has been cleanup successfully" Mar 14 00:15:43.521453 kubelet[3148]: I0314 00:15:43.520774 3148 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Mar 14 00:15:43.521991 containerd[1937]: time="2026-03-14T00:15:43.521915414Z" level=info msg="StopPodSandbox for \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\"" Mar 14 00:15:43.525269 containerd[1937]: time="2026-03-14T00:15:43.525192482Z" level=info msg="Ensure that sandbox f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b in task-service has been cleanup 
successfully" Mar 14 00:15:43.538293 kubelet[3148]: I0314 00:15:43.537180 3148 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Mar 14 00:15:43.540987 containerd[1937]: time="2026-03-14T00:15:43.540596042Z" level=info msg="StopPodSandbox for \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\"" Mar 14 00:15:43.540987 containerd[1937]: time="2026-03-14T00:15:43.540897626Z" level=info msg="Ensure that sandbox b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a in task-service has been cleanup successfully" Mar 14 00:15:43.552759 kubelet[3148]: I0314 00:15:43.552714 3148 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Mar 14 00:15:43.556007 containerd[1937]: time="2026-03-14T00:15:43.555907082Z" level=info msg="StopPodSandbox for \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\"" Mar 14 00:15:43.556513 containerd[1937]: time="2026-03-14T00:15:43.556456934Z" level=info msg="Ensure that sandbox 10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237 in task-service has been cleanup successfully" Mar 14 00:15:43.562296 kubelet[3148]: I0314 00:15:43.562236 3148 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Mar 14 00:15:43.572390 containerd[1937]: time="2026-03-14T00:15:43.572341418Z" level=info msg="StopPodSandbox for \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\"" Mar 14 00:15:43.575016 containerd[1937]: time="2026-03-14T00:15:43.574507802Z" level=info msg="Ensure that sandbox 7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5 in task-service has been cleanup successfully" Mar 14 00:15:43.660529 kubelet[3148]: I0314 00:15:43.659510 3148 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Mar 14 00:15:43.663435 containerd[1937]: time="2026-03-14T00:15:43.663274407Z" level=info msg="StopPodSandbox for \"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\"" Mar 14 00:15:43.663757 containerd[1937]: time="2026-03-14T00:15:43.663589479Z" level=info msg="Ensure that sandbox efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e in task-service has been cleanup successfully" Mar 14 00:15:43.687100 kubelet[3148]: I0314 00:15:43.686920 3148 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Mar 14 00:15:43.695978 containerd[1937]: time="2026-03-14T00:15:43.695255607Z" level=info msg="StopPodSandbox for \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\"" Mar 14 00:15:43.705052 containerd[1937]: time="2026-03-14T00:15:43.704967903Z" level=info msg="Ensure that sandbox 6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf in task-service has been cleanup successfully" Mar 14 00:15:43.852499 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a-shm.mount: Deactivated successfully. Mar 14 00:15:43.852705 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e-shm.mount: Deactivated successfully. Mar 14 00:15:43.852843 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba-shm.mount: Deactivated successfully. Mar 14 00:15:43.853057 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237-shm.mount: Deactivated successfully. 
Mar 14 00:15:43.853197 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b-shm.mount: Deactivated successfully. Mar 14 00:15:43.853329 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5-shm.mount: Deactivated successfully. Mar 14 00:15:43.928971 systemd[1]: run-containerd-runc-k8s.io-af2fde5709097b44e30659b731daf1c92cc9e267fb4493be616f9eb48adc9937-runc.1mMhYa.mount: Deactivated successfully. Mar 14 00:15:44.292247 kubelet[3148]: I0314 00:15:44.292152 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9mbch" podStartSLOduration=5.832010354 podStartE2EDuration="20.29212997s" podCreationTimestamp="2026-03-14 00:15:24 +0000 UTC" firstStartedPulling="2026-03-14 00:15:25.230177611 +0000 UTC m=+33.334581046" lastFinishedPulling="2026-03-14 00:15:39.690297239 +0000 UTC m=+47.794700662" observedRunningTime="2026-03-14 00:15:43.744404523 +0000 UTC m=+51.848807946" watchObservedRunningTime="2026-03-14 00:15:44.29212997 +0000 UTC m=+52.396533405" Mar 14 00:15:44.920262 containerd[1937]: 2026-03-14 00:15:44.384 [INFO][4519] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Mar 14 00:15:44.920262 containerd[1937]: 2026-03-14 00:15:44.384 [INFO][4519] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" iface="eth0" netns="/var/run/netns/cni-97b4901a-5640-cd5f-57fc-6188f9ebcae9" Mar 14 00:15:44.920262 containerd[1937]: 2026-03-14 00:15:44.386 [INFO][4519] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" iface="eth0" netns="/var/run/netns/cni-97b4901a-5640-cd5f-57fc-6188f9ebcae9" Mar 14 00:15:44.920262 containerd[1937]: 2026-03-14 00:15:44.386 [INFO][4519] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" iface="eth0" netns="/var/run/netns/cni-97b4901a-5640-cd5f-57fc-6188f9ebcae9" Mar 14 00:15:44.920262 containerd[1937]: 2026-03-14 00:15:44.394 [INFO][4519] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Mar 14 00:15:44.920262 containerd[1937]: 2026-03-14 00:15:44.395 [INFO][4519] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Mar 14 00:15:44.920262 containerd[1937]: 2026-03-14 00:15:44.834 [INFO][4672] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" HandleID="k8s-pod-network.785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Workload="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" Mar 14 00:15:44.920262 containerd[1937]: 2026-03-14 00:15:44.837 [INFO][4672] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:44.920262 containerd[1937]: 2026-03-14 00:15:44.839 [INFO][4672] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:44.920262 containerd[1937]: 2026-03-14 00:15:44.878 [WARNING][4672] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" HandleID="k8s-pod-network.785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Workload="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" Mar 14 00:15:44.920262 containerd[1937]: 2026-03-14 00:15:44.878 [INFO][4672] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" HandleID="k8s-pod-network.785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Workload="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" Mar 14 00:15:44.920262 containerd[1937]: 2026-03-14 00:15:44.883 [INFO][4672] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:44.920262 containerd[1937]: 2026-03-14 00:15:44.911 [INFO][4519] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Mar 14 00:15:44.928766 containerd[1937]: time="2026-03-14T00:15:44.921631625Z" level=info msg="TearDown network for sandbox \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\" successfully" Mar 14 00:15:44.928766 containerd[1937]: time="2026-03-14T00:15:44.921685037Z" level=info msg="StopPodSandbox for \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\" returns successfully" Mar 14 00:15:44.929933 containerd[1937]: time="2026-03-14T00:15:44.929517641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9wlx,Uid:8bfac06b-f0bb-4f88-a72c-e23a86afafd1,Namespace:calico-system,Attempt:1,}" Mar 14 00:15:44.933648 systemd[1]: run-netns-cni\x2d97b4901a\x2d5640\x2dcd5f\x2d57fc\x2d6188f9ebcae9.mount: Deactivated successfully. 
Mar 14 00:15:44.984431 containerd[1937]: 2026-03-14 00:15:44.306 [INFO][4606] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Mar 14 00:15:44.984431 containerd[1937]: 2026-03-14 00:15:44.307 [INFO][4606] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" iface="eth0" netns="/var/run/netns/cni-dd8a7be7-6f50-0984-bf89-f85d20d795f4" Mar 14 00:15:44.984431 containerd[1937]: 2026-03-14 00:15:44.308 [INFO][4606] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" iface="eth0" netns="/var/run/netns/cni-dd8a7be7-6f50-0984-bf89-f85d20d795f4" Mar 14 00:15:44.984431 containerd[1937]: 2026-03-14 00:15:44.310 [INFO][4606] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" iface="eth0" netns="/var/run/netns/cni-dd8a7be7-6f50-0984-bf89-f85d20d795f4" Mar 14 00:15:44.984431 containerd[1937]: 2026-03-14 00:15:44.310 [INFO][4606] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Mar 14 00:15:44.984431 containerd[1937]: 2026-03-14 00:15:44.311 [INFO][4606] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Mar 14 00:15:44.984431 containerd[1937]: 2026-03-14 00:15:44.842 [INFO][4653] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" HandleID="k8s-pod-network.6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Workload="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" Mar 14 00:15:44.984431 containerd[1937]: 2026-03-14 00:15:44.843 [INFO][4653] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:44.984431 containerd[1937]: 2026-03-14 00:15:44.884 [INFO][4653] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:44.984431 containerd[1937]: 2026-03-14 00:15:44.940 [WARNING][4653] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" HandleID="k8s-pod-network.6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Workload="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" Mar 14 00:15:44.984431 containerd[1937]: 2026-03-14 00:15:44.941 [INFO][4653] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" HandleID="k8s-pod-network.6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Workload="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" Mar 14 00:15:44.984431 containerd[1937]: 2026-03-14 00:15:44.946 [INFO][4653] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:44.984431 containerd[1937]: 2026-03-14 00:15:44.962 [INFO][4606] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Mar 14 00:15:44.986291 containerd[1937]: time="2026-03-14T00:15:44.985812449Z" level=info msg="TearDown network for sandbox \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\" successfully" Mar 14 00:15:44.986291 containerd[1937]: time="2026-03-14T00:15:44.985885661Z" level=info msg="StopPodSandbox for \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\" returns successfully" Mar 14 00:15:44.989386 containerd[1937]: time="2026-03-14T00:15:44.988808465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-lxpml,Uid:7b232062-acf6-4e50-a0e3-33b7e15835a4,Namespace:calico-system,Attempt:1,}" Mar 14 00:15:45.006978 systemd[1]: run-netns-cni\x2ddd8a7be7\x2d6f50\x2d0984\x2dbf89\x2df85d20d795f4.mount: Deactivated successfully. Mar 14 00:15:45.044568 systemd[1]: Started sshd@7-172.31.26.130:22-68.220.241.50:35390.service - OpenSSH per-connection server daemon (68.220.241.50:35390). Mar 14 00:15:45.085091 containerd[1937]: 2026-03-14 00:15:44.380 [INFO][4563] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Mar 14 00:15:45.085091 containerd[1937]: 2026-03-14 00:15:44.386 [INFO][4563] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" iface="eth0" netns="/var/run/netns/cni-e857928c-ef27-0ff0-cdd1-f781ac5118d5" Mar 14 00:15:45.085091 containerd[1937]: 2026-03-14 00:15:44.391 [INFO][4563] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" iface="eth0" netns="/var/run/netns/cni-e857928c-ef27-0ff0-cdd1-f781ac5118d5" Mar 14 00:15:45.085091 containerd[1937]: 2026-03-14 00:15:44.397 [INFO][4563] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" iface="eth0" netns="/var/run/netns/cni-e857928c-ef27-0ff0-cdd1-f781ac5118d5" Mar 14 00:15:45.085091 containerd[1937]: 2026-03-14 00:15:44.397 [INFO][4563] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Mar 14 00:15:45.085091 containerd[1937]: 2026-03-14 00:15:44.400 [INFO][4563] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Mar 14 00:15:45.085091 containerd[1937]: 2026-03-14 00:15:44.854 [INFO][4671] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" HandleID="k8s-pod-network.10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Workload="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" Mar 14 00:15:45.085091 containerd[1937]: 2026-03-14 00:15:44.854 [INFO][4671] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:45.085091 containerd[1937]: 2026-03-14 00:15:44.946 [INFO][4671] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:45.085091 containerd[1937]: 2026-03-14 00:15:45.004 [WARNING][4671] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" HandleID="k8s-pod-network.10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Workload="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" Mar 14 00:15:45.085091 containerd[1937]: 2026-03-14 00:15:45.007 [INFO][4671] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" HandleID="k8s-pod-network.10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Workload="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" Mar 14 00:15:45.085091 containerd[1937]: 2026-03-14 00:15:45.020 [INFO][4671] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:45.085091 containerd[1937]: 2026-03-14 00:15:45.056 [INFO][4563] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Mar 14 00:15:45.086115 containerd[1937]: time="2026-03-14T00:15:45.085912814Z" level=info msg="TearDown network for sandbox \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\" successfully" Mar 14 00:15:45.086115 containerd[1937]: time="2026-03-14T00:15:45.086000294Z" level=info msg="StopPodSandbox for \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\" returns successfully" Mar 14 00:15:45.088409 containerd[1937]: time="2026-03-14T00:15:45.087701174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-544d6dd76d-4rcl5,Uid:d7880c76-182c-44f3-99e8-6a915d275ae2,Namespace:calico-system,Attempt:1,}" Mar 14 00:15:45.097912 systemd[1]: run-netns-cni\x2de857928c\x2def27\x2d0ff0\x2dcdd1\x2df781ac5118d5.mount: Deactivated successfully. 
Mar 14 00:15:45.150082 containerd[1937]: 2026-03-14 00:15:44.381 [INFO][4592] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Mar 14 00:15:45.150082 containerd[1937]: 2026-03-14 00:15:44.381 [INFO][4592] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" iface="eth0" netns="/var/run/netns/cni-04a360cf-e630-d15e-9fe6-18691f5fd723" Mar 14 00:15:45.150082 containerd[1937]: 2026-03-14 00:15:44.382 [INFO][4592] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" iface="eth0" netns="/var/run/netns/cni-04a360cf-e630-d15e-9fe6-18691f5fd723" Mar 14 00:15:45.150082 containerd[1937]: 2026-03-14 00:15:44.384 [INFO][4592] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" iface="eth0" netns="/var/run/netns/cni-04a360cf-e630-d15e-9fe6-18691f5fd723" Mar 14 00:15:45.150082 containerd[1937]: 2026-03-14 00:15:44.384 [INFO][4592] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Mar 14 00:15:45.150082 containerd[1937]: 2026-03-14 00:15:44.385 [INFO][4592] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Mar 14 00:15:45.150082 containerd[1937]: 2026-03-14 00:15:44.863 [INFO][4668] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" HandleID="k8s-pod-network.7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" Mar 14 00:15:45.150082 containerd[1937]: 2026-03-14 00:15:44.863 [INFO][4668] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:45.150082 containerd[1937]: 2026-03-14 00:15:45.024 [INFO][4668] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:45.150082 containerd[1937]: 2026-03-14 00:15:45.080 [WARNING][4668] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" HandleID="k8s-pod-network.7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" Mar 14 00:15:45.150082 containerd[1937]: 2026-03-14 00:15:45.082 [INFO][4668] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" HandleID="k8s-pod-network.7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" Mar 14 00:15:45.150082 containerd[1937]: 2026-03-14 00:15:45.130 [INFO][4668] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:45.150082 containerd[1937]: 2026-03-14 00:15:45.134 [INFO][4592] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Mar 14 00:15:45.150082 containerd[1937]: time="2026-03-14T00:15:45.149851874Z" level=info msg="TearDown network for sandbox \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\" successfully" Mar 14 00:15:45.152504 containerd[1937]: time="2026-03-14T00:15:45.152253302Z" level=info msg="StopPodSandbox for \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\" returns successfully" Mar 14 00:15:45.156516 containerd[1937]: time="2026-03-14T00:15:45.154691702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkzng,Uid:7983ca7e-7b32-4d4f-acd3-e05012673e7d,Namespace:kube-system,Attempt:1,}" Mar 14 00:15:45.254034 containerd[1937]: 2026-03-14 00:15:44.342 [INFO][4539] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Mar 14 00:15:45.254034 containerd[1937]: 2026-03-14 00:15:44.342 [INFO][4539] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" iface="eth0" netns="/var/run/netns/cni-a591aa27-4cf9-435b-7228-cece8ddf20b6" Mar 14 00:15:45.254034 containerd[1937]: 2026-03-14 00:15:44.344 [INFO][4539] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" iface="eth0" netns="/var/run/netns/cni-a591aa27-4cf9-435b-7228-cece8ddf20b6" Mar 14 00:15:45.254034 containerd[1937]: 2026-03-14 00:15:44.345 [INFO][4539] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" iface="eth0" netns="/var/run/netns/cni-a591aa27-4cf9-435b-7228-cece8ddf20b6" Mar 14 00:15:45.254034 containerd[1937]: 2026-03-14 00:15:44.345 [INFO][4539] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Mar 14 00:15:45.254034 containerd[1937]: 2026-03-14 00:15:44.345 [INFO][4539] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Mar 14 00:15:45.254034 containerd[1937]: 2026-03-14 00:15:44.871 [INFO][4658] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" HandleID="k8s-pod-network.f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" Mar 14 00:15:45.254034 containerd[1937]: 2026-03-14 00:15:44.871 [INFO][4658] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:45.254034 containerd[1937]: 2026-03-14 00:15:45.132 [INFO][4658] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:45.254034 containerd[1937]: 2026-03-14 00:15:45.178 [WARNING][4658] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" HandleID="k8s-pod-network.f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" Mar 14 00:15:45.254034 containerd[1937]: 2026-03-14 00:15:45.179 [INFO][4658] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" HandleID="k8s-pod-network.f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" Mar 14 00:15:45.254034 containerd[1937]: 2026-03-14 00:15:45.187 [INFO][4658] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:45.254034 containerd[1937]: 2026-03-14 00:15:45.219 [INFO][4539] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Mar 14 00:15:45.258886 containerd[1937]: time="2026-03-14T00:15:45.258688815Z" level=info msg="TearDown network for sandbox \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\" successfully" Mar 14 00:15:45.259182 containerd[1937]: time="2026-03-14T00:15:45.258875463Z" level=info msg="StopPodSandbox for \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\" returns successfully" Mar 14 00:15:45.263036 containerd[1937]: time="2026-03-14T00:15:45.262926939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc67d498c-gnbqp,Uid:6669e0b2-65fc-448c-87e3-c79fbf1e2867,Namespace:calico-system,Attempt:1,}" Mar 14 00:15:45.324325 containerd[1937]: 2026-03-14 00:15:44.359 [INFO][4600] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Mar 14 00:15:45.324325 containerd[1937]: 2026-03-14 00:15:44.359 [INFO][4600] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" iface="eth0" netns="/var/run/netns/cni-39460e19-b848-b029-b694-f5abb0997ed0" Mar 14 00:15:45.324325 containerd[1937]: 2026-03-14 00:15:44.360 [INFO][4600] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" iface="eth0" netns="/var/run/netns/cni-39460e19-b848-b029-b694-f5abb0997ed0" Mar 14 00:15:45.324325 containerd[1937]: 2026-03-14 00:15:44.362 [INFO][4600] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" iface="eth0" netns="/var/run/netns/cni-39460e19-b848-b029-b694-f5abb0997ed0" Mar 14 00:15:45.324325 containerd[1937]: 2026-03-14 00:15:44.363 [INFO][4600] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Mar 14 00:15:45.324325 containerd[1937]: 2026-03-14 00:15:44.363 [INFO][4600] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Mar 14 00:15:45.324325 containerd[1937]: 2026-03-14 00:15:44.885 [INFO][4663] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" HandleID="k8s-pod-network.efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Workload="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" Mar 14 00:15:45.324325 containerd[1937]: 2026-03-14 00:15:44.889 [INFO][4663] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:45.324325 containerd[1937]: 2026-03-14 00:15:45.192 [INFO][4663] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:45.324325 containerd[1937]: 2026-03-14 00:15:45.243 [WARNING][4663] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" HandleID="k8s-pod-network.efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Workload="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" Mar 14 00:15:45.324325 containerd[1937]: 2026-03-14 00:15:45.243 [INFO][4663] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" HandleID="k8s-pod-network.efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Workload="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" Mar 14 00:15:45.324325 containerd[1937]: 2026-03-14 00:15:45.273 [INFO][4663] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:45.324325 containerd[1937]: 2026-03-14 00:15:45.305 [INFO][4600] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Mar 14 00:15:45.337263 containerd[1937]: time="2026-03-14T00:15:45.337202175Z" level=info msg="TearDown network for sandbox \"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\" successfully" Mar 14 00:15:45.339500 containerd[1937]: time="2026-03-14T00:15:45.339431799Z" level=info msg="StopPodSandbox for \"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\" returns successfully" Mar 14 00:15:45.341099 containerd[1937]: time="2026-03-14T00:15:45.340467963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d5fd567b-lbrvk,Uid:7629e1e9-e956-4dbb-9bf3-396748a97bfb,Namespace:calico-system,Attempt:1,}" Mar 14 00:15:45.421655 containerd[1937]: 2026-03-14 00:15:44.518 [INFO][4562] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Mar 14 00:15:45.421655 containerd[1937]: 2026-03-14 00:15:44.521 [INFO][4562] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" iface="eth0" netns="/var/run/netns/cni-f80b1bc2-0f2d-ca13-0d9f-c80358379c4f" Mar 14 00:15:45.421655 containerd[1937]: 2026-03-14 00:15:44.524 [INFO][4562] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" iface="eth0" netns="/var/run/netns/cni-f80b1bc2-0f2d-ca13-0d9f-c80358379c4f" Mar 14 00:15:45.421655 containerd[1937]: 2026-03-14 00:15:44.525 [INFO][4562] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" iface="eth0" netns="/var/run/netns/cni-f80b1bc2-0f2d-ca13-0d9f-c80358379c4f" Mar 14 00:15:45.421655 containerd[1937]: 2026-03-14 00:15:44.525 [INFO][4562] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Mar 14 00:15:45.421655 containerd[1937]: 2026-03-14 00:15:44.525 [INFO][4562] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Mar 14 00:15:45.421655 containerd[1937]: 2026-03-14 00:15:44.906 [INFO][4688] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" HandleID="k8s-pod-network.b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" Mar 14 00:15:45.421655 containerd[1937]: 2026-03-14 00:15:44.906 [INFO][4688] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:45.421655 containerd[1937]: 2026-03-14 00:15:45.273 [INFO][4688] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:45.421655 containerd[1937]: 2026-03-14 00:15:45.341 [WARNING][4688] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" HandleID="k8s-pod-network.b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" Mar 14 00:15:45.421655 containerd[1937]: 2026-03-14 00:15:45.341 [INFO][4688] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" HandleID="k8s-pod-network.b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" Mar 14 00:15:45.421655 containerd[1937]: 2026-03-14 00:15:45.357 [INFO][4688] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:45.421655 containerd[1937]: 2026-03-14 00:15:45.380 [INFO][4562] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Mar 14 00:15:45.425767 containerd[1937]: time="2026-03-14T00:15:45.424127295Z" level=info msg="TearDown network for sandbox \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\" successfully" Mar 14 00:15:45.425767 containerd[1937]: time="2026-03-14T00:15:45.424180515Z" level=info msg="StopPodSandbox for \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\" returns successfully" Mar 14 00:15:45.426553 containerd[1937]: time="2026-03-14T00:15:45.426490311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc67d498c-jsdnf,Uid:de037f31-e304-4774-8e09-1ec32c3e29bf,Namespace:calico-system,Attempt:1,}" Mar 14 00:15:45.467217 containerd[1937]: 2026-03-14 00:15:44.401 [INFO][4543] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Mar 14 00:15:45.467217 containerd[1937]: 2026-03-14 00:15:44.414 [INFO][4543] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" iface="eth0" netns="/var/run/netns/cni-2eda71fe-5308-b8c7-b2bb-e8666373fa85" Mar 14 00:15:45.467217 containerd[1937]: 2026-03-14 00:15:44.415 [INFO][4543] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" iface="eth0" netns="/var/run/netns/cni-2eda71fe-5308-b8c7-b2bb-e8666373fa85" Mar 14 00:15:45.467217 containerd[1937]: 2026-03-14 00:15:44.415 [INFO][4543] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" iface="eth0" netns="/var/run/netns/cni-2eda71fe-5308-b8c7-b2bb-e8666373fa85" Mar 14 00:15:45.467217 containerd[1937]: 2026-03-14 00:15:44.415 [INFO][4543] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Mar 14 00:15:45.467217 containerd[1937]: 2026-03-14 00:15:44.415 [INFO][4543] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Mar 14 00:15:45.467217 containerd[1937]: 2026-03-14 00:15:44.909 [INFO][4674] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" HandleID="k8s-pod-network.f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" Mar 14 00:15:45.467217 containerd[1937]: 2026-03-14 00:15:44.911 [INFO][4674] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:45.467217 containerd[1937]: 2026-03-14 00:15:45.357 [INFO][4674] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:45.467217 containerd[1937]: 2026-03-14 00:15:45.393 [WARNING][4674] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" HandleID="k8s-pod-network.f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" Mar 14 00:15:45.467217 containerd[1937]: 2026-03-14 00:15:45.393 [INFO][4674] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" HandleID="k8s-pod-network.f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" Mar 14 00:15:45.467217 containerd[1937]: 2026-03-14 00:15:45.405 [INFO][4674] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:45.467217 containerd[1937]: 2026-03-14 00:15:45.433 [INFO][4543] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Mar 14 00:15:45.467217 containerd[1937]: time="2026-03-14T00:15:45.466557352Z" level=info msg="TearDown network for sandbox \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\" successfully" Mar 14 00:15:45.469366 containerd[1937]: time="2026-03-14T00:15:45.466596916Z" level=info msg="StopPodSandbox for \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\" returns successfully" Mar 14 00:15:45.469366 containerd[1937]: time="2026-03-14T00:15:45.468817876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nsj8c,Uid:8127331c-4b50-47c1-bbe1-89afe1cea98e,Namespace:kube-system,Attempt:1,}" Mar 14 00:15:45.638723 sshd[4718]: Accepted publickey for core from 68.220.241.50 port 35390 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:15:45.643702 sshd[4718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:45.660029 systemd-logind[1911]: New session 8 of user core. 
Mar 14 00:15:45.667488 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 14 00:15:45.992241 systemd[1]: run-netns-cni\x2df80b1bc2\x2d0f2d\x2dca13\x2d0d9f\x2dc80358379c4f.mount: Deactivated successfully. Mar 14 00:15:45.992465 systemd[1]: run-netns-cni\x2d39460e19\x2db848\x2db029\x2db694\x2df5abb0997ed0.mount: Deactivated successfully. Mar 14 00:15:45.992650 systemd[1]: run-netns-cni\x2d2eda71fe\x2d5308\x2db8c7\x2db2bb\x2de8666373fa85.mount: Deactivated successfully. Mar 14 00:15:45.992795 systemd[1]: run-netns-cni\x2da591aa27\x2d4cf9\x2d435b\x2d7228\x2dcece8ddf20b6.mount: Deactivated successfully. Mar 14 00:15:45.992971 systemd[1]: run-netns-cni\x2d04a360cf\x2de630\x2dd15e\x2d9fe6\x2d18691f5fd723.mount: Deactivated successfully. Mar 14 00:15:46.088088 systemd-networkd[1851]: cali9cc61126b64: Link UP Mar 14 00:15:46.089915 systemd-networkd[1851]: cali9cc61126b64: Gained carrier Mar 14 00:15:46.127893 (udev-worker)[4887]: Network interface NamePolicy= disabled on kernel command line. 
Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.198 [ERROR][4707] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.332 [INFO][4707] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0 csi-node-driver- calico-system 8bfac06b-f0bb-4f88-a72c-e23a86afafd1 952 0 2026-03-14 00:15:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-26-130 csi-node-driver-s9wlx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9cc61126b64 [] [] }} ContainerID="5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" Namespace="calico-system" Pod="csi-node-driver-s9wlx" WorkloadEndpoint="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-" Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.332 [INFO][4707] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" Namespace="calico-system" Pod="csi-node-driver-s9wlx" WorkloadEndpoint="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.747 [INFO][4784] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" HandleID="k8s-pod-network.5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" Workload="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" Mar 14 
00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.773 [INFO][4784] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" HandleID="k8s-pod-network.5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" Workload="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000370b70), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-130", "pod":"csi-node-driver-s9wlx", "timestamp":"2026-03-14 00:15:45.747048917 +0000 UTC"}, Hostname:"ip-172-31-26-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003fe580)} Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.773 [INFO][4784] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.774 [INFO][4784] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.774 [INFO][4784] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-130' Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.785 [INFO][4784] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" host="ip-172-31-26-130" Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.814 [INFO][4784] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-130" Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.845 [INFO][4784] ipam/ipam.go 526: Trying affinity for 192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.857 [INFO][4784] ipam/ipam.go 160: Attempting to load block cidr=192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.869 [INFO][4784] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.869 [INFO][4784] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.25.128/26 handle="k8s-pod-network.5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" host="ip-172-31-26-130" Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.878 [INFO][4784] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6 Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.905 [INFO][4784] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.25.128/26 handle="k8s-pod-network.5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" host="ip-172-31-26-130" Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.951 [INFO][4784] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.25.129/26] block=192.168.25.128/26 
handle="k8s-pod-network.5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" host="ip-172-31-26-130" Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.951 [INFO][4784] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.25.129/26] handle="k8s-pod-network.5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" host="ip-172-31-26-130" Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.951 [INFO][4784] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:46.247917 containerd[1937]: 2026-03-14 00:15:45.951 [INFO][4784] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.25.129/26] IPv6=[] ContainerID="5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" HandleID="k8s-pod-network.5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" Workload="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" Mar 14 00:15:46.251925 containerd[1937]: 2026-03-14 00:15:46.047 [INFO][4707] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" Namespace="calico-system" Pod="csi-node-driver-s9wlx" WorkloadEndpoint="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8bfac06b-f0bb-4f88-a72c-e23a86afafd1", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"", Pod:"csi-node-driver-s9wlx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9cc61126b64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:46.251925 containerd[1937]: 2026-03-14 00:15:46.047 [INFO][4707] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.129/32] ContainerID="5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" Namespace="calico-system" Pod="csi-node-driver-s9wlx" WorkloadEndpoint="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" Mar 14 00:15:46.251925 containerd[1937]: 2026-03-14 00:15:46.048 [INFO][4707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9cc61126b64 ContainerID="5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" Namespace="calico-system" Pod="csi-node-driver-s9wlx" WorkloadEndpoint="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" Mar 14 00:15:46.251925 containerd[1937]: 2026-03-14 00:15:46.093 [INFO][4707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" Namespace="calico-system" Pod="csi-node-driver-s9wlx" WorkloadEndpoint="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" Mar 14 00:15:46.251925 containerd[1937]: 2026-03-14 00:15:46.126 [INFO][4707] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" Namespace="calico-system" Pod="csi-node-driver-s9wlx" WorkloadEndpoint="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8bfac06b-f0bb-4f88-a72c-e23a86afafd1", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6", Pod:"csi-node-driver-s9wlx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9cc61126b64", MAC:"ce:99:4a:44:8d:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:46.251925 containerd[1937]: 2026-03-14 00:15:46.203 [INFO][4707] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6" Namespace="calico-system" Pod="csi-node-driver-s9wlx" WorkloadEndpoint="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" Mar 14 00:15:46.459620 sshd[4718]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:46.473610 systemd[1]: sshd@7-172.31.26.130:22-68.220.241.50:35390.service: Deactivated successfully. Mar 14 00:15:46.480584 systemd[1]: session-8.scope: Deactivated successfully. Mar 14 00:15:46.487751 containerd[1937]: time="2026-03-14T00:15:46.485830013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:46.487751 containerd[1937]: time="2026-03-14T00:15:46.485977001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:46.487751 containerd[1937]: time="2026-03-14T00:15:46.486017297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:46.487751 containerd[1937]: time="2026-03-14T00:15:46.486195125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:46.493063 systemd-logind[1911]: Session 8 logged out. Waiting for processes to exit. Mar 14 00:15:46.501264 systemd-logind[1911]: Removed session 8. Mar 14 00:15:46.566168 (udev-worker)[4889]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:15:46.576523 systemd-networkd[1851]: cali4a239523e4c: Link UP Mar 14 00:15:46.580865 systemd-networkd[1851]: cali4a239523e4c: Gained carrier Mar 14 00:15:46.637919 systemd[1]: Started cri-containerd-5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6.scope - libcontainer container 5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6. 
Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:45.503 [ERROR][4727] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:45.616 [INFO][4727] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0 goldmane-5b85766d88- calico-system 7b232062-acf6-4e50-a0e3-33b7e15835a4 949 0 2026-03-14 00:15:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-26-130 goldmane-5b85766d88-lxpml eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4a239523e4c [] [] }} ContainerID="160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" Namespace="calico-system" Pod="goldmane-5b85766d88-lxpml" WorkloadEndpoint="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-" Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:45.616 [INFO][4727] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" Namespace="calico-system" Pod="goldmane-5b85766d88-lxpml" WorkloadEndpoint="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.073 [INFO][4831] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" HandleID="k8s-pod-network.160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" Workload="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.217 [INFO][4831] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" HandleID="k8s-pod-network.160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" Workload="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e8290), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-130", "pod":"goldmane-5b85766d88-lxpml", "timestamp":"2026-03-14 00:15:46.073928187 +0000 UTC"}, Hostname:"ip-172-31-26-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000cc840)} Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.219 [INFO][4831] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.219 [INFO][4831] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.221 [INFO][4831] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-130' Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.235 [INFO][4831] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" host="ip-172-31-26-130" Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.271 [INFO][4831] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-130" Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.368 [INFO][4831] ipam/ipam.go 526: Trying affinity for 192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.388 [INFO][4831] ipam/ipam.go 160: Attempting to load block cidr=192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.404 [INFO][4831] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.404 [INFO][4831] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.25.128/26 handle="k8s-pod-network.160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" host="ip-172-31-26-130" Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.424 [INFO][4831] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309 Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.459 [INFO][4831] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.25.128/26 handle="k8s-pod-network.160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" host="ip-172-31-26-130" Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.534 [INFO][4831] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.25.130/26] block=192.168.25.128/26 
handle="k8s-pod-network.160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" host="ip-172-31-26-130" Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.534 [INFO][4831] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.25.130/26] handle="k8s-pod-network.160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" host="ip-172-31-26-130" Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.534 [INFO][4831] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:46.650096 containerd[1937]: 2026-03-14 00:15:46.534 [INFO][4831] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.25.130/26] IPv6=[] ContainerID="160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" HandleID="k8s-pod-network.160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" Workload="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" Mar 14 00:15:46.651931 containerd[1937]: 2026-03-14 00:15:46.560 [INFO][4727] cni-plugin/k8s.go 418: Populated endpoint ContainerID="160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" Namespace="calico-system" Pod="goldmane-5b85766d88-lxpml" WorkloadEndpoint="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"7b232062-acf6-4e50-a0e3-33b7e15835a4", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"", Pod:"goldmane-5b85766d88-lxpml", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a239523e4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:46.651931 containerd[1937]: 2026-03-14 00:15:46.561 [INFO][4727] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.130/32] ContainerID="160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" Namespace="calico-system" Pod="goldmane-5b85766d88-lxpml" WorkloadEndpoint="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" Mar 14 00:15:46.651931 containerd[1937]: 2026-03-14 00:15:46.561 [INFO][4727] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a239523e4c ContainerID="160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" Namespace="calico-system" Pod="goldmane-5b85766d88-lxpml" WorkloadEndpoint="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" Mar 14 00:15:46.651931 containerd[1937]: 2026-03-14 00:15:46.594 [INFO][4727] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" Namespace="calico-system" Pod="goldmane-5b85766d88-lxpml" WorkloadEndpoint="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" Mar 14 00:15:46.651931 containerd[1937]: 2026-03-14 00:15:46.604 [INFO][4727] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" Namespace="calico-system" Pod="goldmane-5b85766d88-lxpml" WorkloadEndpoint="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"7b232062-acf6-4e50-a0e3-33b7e15835a4", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309", Pod:"goldmane-5b85766d88-lxpml", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a239523e4c", MAC:"6e:a3:3c:d2:d3:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:46.651931 containerd[1937]: 2026-03-14 00:15:46.642 [INFO][4727] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309" Namespace="calico-system" Pod="goldmane-5b85766d88-lxpml" 
WorkloadEndpoint="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" Mar 14 00:15:46.707715 containerd[1937]: time="2026-03-14T00:15:46.704143026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:46.707715 containerd[1937]: time="2026-03-14T00:15:46.704392254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:46.707715 containerd[1937]: time="2026-03-14T00:15:46.704487414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:46.707715 containerd[1937]: time="2026-03-14T00:15:46.705874734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:46.723885 systemd-networkd[1851]: cali12aa16a0f2b: Link UP Mar 14 00:15:46.738184 systemd-networkd[1851]: cali12aa16a0f2b: Gained carrier Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:45.719 [ERROR][4769] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:45.836 [INFO][4769] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0 calico-apiserver-5cc67d498c- calico-system 6669e0b2-65fc-448c-87e3-c79fbf1e2867 953 0 2026-03-14 00:15:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cc67d498c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-130 
calico-apiserver-5cc67d498c-gnbqp eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali12aa16a0f2b [] [] }} ContainerID="3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" Namespace="calico-system" Pod="calico-apiserver-5cc67d498c-gnbqp" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-" Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:45.836 [INFO][4769] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" Namespace="calico-system" Pod="calico-apiserver-5cc67d498c-gnbqp" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.261 [INFO][4855] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" HandleID="k8s-pod-network.3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.402 [INFO][4855] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" HandleID="k8s-pod-network.3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032cf60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-130", "pod":"calico-apiserver-5cc67d498c-gnbqp", "timestamp":"2026-03-14 00:15:46.261509236 +0000 UTC"}, Hostname:"ip-172-31-26-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0x40003fc6e0)} Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.402 [INFO][4855] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.535 [INFO][4855] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.535 [INFO][4855] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-130' Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.553 [INFO][4855] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" host="ip-172-31-26-130" Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.595 [INFO][4855] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-130" Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.613 [INFO][4855] ipam/ipam.go 526: Trying affinity for 192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.624 [INFO][4855] ipam/ipam.go 160: Attempting to load block cidr=192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.639 [INFO][4855] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.639 [INFO][4855] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.25.128/26 handle="k8s-pod-network.3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" host="ip-172-31-26-130" Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.648 [INFO][4855] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16 Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.661 [INFO][4855] ipam/ipam.go 1272: Writing block in order to 
claim IPs block=192.168.25.128/26 handle="k8s-pod-network.3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" host="ip-172-31-26-130" Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.681 [INFO][4855] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.25.131/26] block=192.168.25.128/26 handle="k8s-pod-network.3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" host="ip-172-31-26-130" Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.683 [INFO][4855] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.25.131/26] handle="k8s-pod-network.3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" host="ip-172-31-26-130" Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.683 [INFO][4855] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:46.812286 containerd[1937]: 2026-03-14 00:15:46.683 [INFO][4855] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.25.131/26] IPv6=[] ContainerID="3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" HandleID="k8s-pod-network.3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" Mar 14 00:15:46.813445 containerd[1937]: 2026-03-14 00:15:46.700 [INFO][4769] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" Namespace="calico-system" Pod="calico-apiserver-5cc67d498c-gnbqp" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0", GenerateName:"calico-apiserver-5cc67d498c-", Namespace:"calico-system", SelfLink:"", UID:"6669e0b2-65fc-448c-87e3-c79fbf1e2867", ResourceVersion:"953", Generation:0, 
CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc67d498c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"", Pod:"calico-apiserver-5cc67d498c-gnbqp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali12aa16a0f2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:46.813445 containerd[1937]: 2026-03-14 00:15:46.700 [INFO][4769] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.131/32] ContainerID="3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" Namespace="calico-system" Pod="calico-apiserver-5cc67d498c-gnbqp" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" Mar 14 00:15:46.813445 containerd[1937]: 2026-03-14 00:15:46.700 [INFO][4769] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12aa16a0f2b ContainerID="3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" Namespace="calico-system" Pod="calico-apiserver-5cc67d498c-gnbqp" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" Mar 14 00:15:46.813445 containerd[1937]: 2026-03-14 00:15:46.745 [INFO][4769] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" Namespace="calico-system" Pod="calico-apiserver-5cc67d498c-gnbqp" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" Mar 14 00:15:46.813445 containerd[1937]: 2026-03-14 00:15:46.748 [INFO][4769] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" Namespace="calico-system" Pod="calico-apiserver-5cc67d498c-gnbqp" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0", GenerateName:"calico-apiserver-5cc67d498c-", Namespace:"calico-system", SelfLink:"", UID:"6669e0b2-65fc-448c-87e3-c79fbf1e2867", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc67d498c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16", Pod:"calico-apiserver-5cc67d498c-gnbqp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali12aa16a0f2b", MAC:"fe:1e:0e:32:94:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:46.813445 containerd[1937]: 2026-03-14 00:15:46.789 [INFO][4769] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16" Namespace="calico-system" Pod="calico-apiserver-5cc67d498c-gnbqp" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" Mar 14 00:15:46.859263 systemd[1]: Started cri-containerd-160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309.scope - libcontainer container 160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309. Mar 14 00:15:46.887859 systemd-networkd[1851]: cali9f633f38b65: Link UP Mar 14 00:15:46.892584 systemd-networkd[1851]: cali9f633f38b65: Gained carrier Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:45.868 [ERROR][4788] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:45.991 [INFO][4788] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0 calico-kube-controllers-75d5fd567b- calico-system 7629e1e9-e956-4dbb-9bf3-396748a97bfb 955 0 2026-03-14 00:15:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75d5fd567b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-130 calico-kube-controllers-75d5fd567b-lbrvk eth0 calico-kube-controllers [] [] 
[kns.calico-system ksa.calico-system.calico-kube-controllers] cali9f633f38b65 [] [] }} ContainerID="71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" Namespace="calico-system" Pod="calico-kube-controllers-75d5fd567b-lbrvk" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-" Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:45.991 [INFO][4788] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" Namespace="calico-system" Pod="calico-kube-controllers-75d5fd567b-lbrvk" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.337 [INFO][4880] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" HandleID="k8s-pod-network.71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" Workload="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.403 [INFO][4880] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" HandleID="k8s-pod-network.71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" Workload="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ca20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-130", "pod":"calico-kube-controllers-75d5fd567b-lbrvk", "timestamp":"2026-03-14 00:15:46.337137772 +0000 UTC"}, Hostname:"ip-172-31-26-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0x4000260420)} Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.405 [INFO][4880] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.683 [INFO][4880] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.684 [INFO][4880] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-130' Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.689 [INFO][4880] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" host="ip-172-31-26-130" Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.707 [INFO][4880] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-130" Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.764 [INFO][4880] ipam/ipam.go 526: Trying affinity for 192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.769 [INFO][4880] ipam/ipam.go 160: Attempting to load block cidr=192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.786 [INFO][4880] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.786 [INFO][4880] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.25.128/26 handle="k8s-pod-network.71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" host="ip-172-31-26-130" Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.804 [INFO][4880] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0 Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.821 [INFO][4880] ipam/ipam.go 1272: Writing block in order to 
claim IPs block=192.168.25.128/26 handle="k8s-pod-network.71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" host="ip-172-31-26-130" Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.838 [INFO][4880] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.25.132/26] block=192.168.25.128/26 handle="k8s-pod-network.71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" host="ip-172-31-26-130" Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.838 [INFO][4880] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.25.132/26] handle="k8s-pod-network.71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" host="ip-172-31-26-130" Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.839 [INFO][4880] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:46.962372 containerd[1937]: 2026-03-14 00:15:46.839 [INFO][4880] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.25.132/26] IPv6=[] ContainerID="71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" HandleID="k8s-pod-network.71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" Workload="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" Mar 14 00:15:46.964840 containerd[1937]: 2026-03-14 00:15:46.863 [INFO][4788] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" Namespace="calico-system" Pod="calico-kube-controllers-75d5fd567b-lbrvk" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0", GenerateName:"calico-kube-controllers-75d5fd567b-", Namespace:"calico-system", SelfLink:"", UID:"7629e1e9-e956-4dbb-9bf3-396748a97bfb", ResourceVersion:"955", 
Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75d5fd567b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"", Pod:"calico-kube-controllers-75d5fd567b-lbrvk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f633f38b65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:46.964840 containerd[1937]: 2026-03-14 00:15:46.866 [INFO][4788] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.132/32] ContainerID="71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" Namespace="calico-system" Pod="calico-kube-controllers-75d5fd567b-lbrvk" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" Mar 14 00:15:46.964840 containerd[1937]: 2026-03-14 00:15:46.867 [INFO][4788] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f633f38b65 ContainerID="71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" Namespace="calico-system" Pod="calico-kube-controllers-75d5fd567b-lbrvk" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" Mar 14 00:15:46.964840 containerd[1937]: 
2026-03-14 00:15:46.910 [INFO][4788] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" Namespace="calico-system" Pod="calico-kube-controllers-75d5fd567b-lbrvk" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" Mar 14 00:15:46.964840 containerd[1937]: 2026-03-14 00:15:46.918 [INFO][4788] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" Namespace="calico-system" Pod="calico-kube-controllers-75d5fd567b-lbrvk" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0", GenerateName:"calico-kube-controllers-75d5fd567b-", Namespace:"calico-system", SelfLink:"", UID:"7629e1e9-e956-4dbb-9bf3-396748a97bfb", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75d5fd567b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0", Pod:"calico-kube-controllers-75d5fd567b-lbrvk", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f633f38b65", MAC:"b2:43:b0:ce:05:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:46.964840 containerd[1937]: 2026-03-14 00:15:46.948 [INFO][4788] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0" Namespace="calico-system" Pod="calico-kube-controllers-75d5fd567b-lbrvk" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" Mar 14 00:15:47.031978 containerd[1937]: time="2026-03-14T00:15:47.030964155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9wlx,Uid:8bfac06b-f0bb-4f88-a72c-e23a86afafd1,Namespace:calico-system,Attempt:1,} returns sandbox id \"5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6\"" Mar 14 00:15:47.044073 containerd[1937]: time="2026-03-14T00:15:47.040874608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 14 00:15:47.044340 systemd-networkd[1851]: cali220f7bb61e5: Link UP Mar 14 00:15:47.050747 systemd-networkd[1851]: cali220f7bb61e5: Gained carrier Mar 14 00:15:47.086438 containerd[1937]: time="2026-03-14T00:15:47.085183828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:47.098147 containerd[1937]: time="2026-03-14T00:15:47.094039576Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:47.098147 containerd[1937]: time="2026-03-14T00:15:47.094147516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:47.098147 containerd[1937]: time="2026-03-14T00:15:47.095563408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:45.654 [ERROR][4755] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:45.800 [INFO][4755] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0 whisker-544d6dd76d- calico-system d7880c76-182c-44f3-99e8-6a915d275ae2 951 0 2026-03-14 00:15:27 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:544d6dd76d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-26-130 whisker-544d6dd76d-4rcl5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali220f7bb61e5 [] [] }} ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" Namespace="calico-system" Pod="whisker-544d6dd76d-4rcl5" WorkloadEndpoint="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-" Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:45.800 [INFO][4755] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" Namespace="calico-system" Pod="whisker-544d6dd76d-4rcl5" WorkloadEndpoint="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:46.356 [INFO][4850] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" HandleID="k8s-pod-network.0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" Workload="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:46.419 [INFO][4850] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" HandleID="k8s-pod-network.0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" Workload="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039e230), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-130", "pod":"whisker-544d6dd76d-4rcl5", "timestamp":"2026-03-14 00:15:46.356459512 +0000 UTC"}, Hostname:"ip-172-31-26-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000222160)} Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:46.420 [INFO][4850] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:46.839 [INFO][4850] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:46.849 [INFO][4850] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-130' Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:46.860 [INFO][4850] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" host="ip-172-31-26-130" Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:46.889 [INFO][4850] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-130" Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:46.916 [INFO][4850] ipam/ipam.go 526: Trying affinity for 192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:46.941 [INFO][4850] ipam/ipam.go 160: Attempting to load block cidr=192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:46.958 [INFO][4850] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:46.958 [INFO][4850] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.25.128/26 handle="k8s-pod-network.0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" host="ip-172-31-26-130" Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:46.968 [INFO][4850] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33 Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:46.981 [INFO][4850] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.25.128/26 handle="k8s-pod-network.0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" host="ip-172-31-26-130" Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:46.999 [INFO][4850] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.25.133/26] block=192.168.25.128/26 
handle="k8s-pod-network.0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" host="ip-172-31-26-130" Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:47.000 [INFO][4850] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.25.133/26] handle="k8s-pod-network.0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" host="ip-172-31-26-130" Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:47.000 [INFO][4850] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:47.152122 containerd[1937]: 2026-03-14 00:15:47.000 [INFO][4850] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.25.133/26] IPv6=[] ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" HandleID="k8s-pod-network.0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" Workload="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" Mar 14 00:15:47.153254 containerd[1937]: 2026-03-14 00:15:47.013 [INFO][4755] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" Namespace="calico-system" Pod="whisker-544d6dd76d-4rcl5" WorkloadEndpoint="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0", GenerateName:"whisker-544d6dd76d-", Namespace:"calico-system", SelfLink:"", UID:"d7880c76-182c-44f3-99e8-6a915d275ae2", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"544d6dd76d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"", Pod:"whisker-544d6dd76d-4rcl5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.25.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali220f7bb61e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:47.153254 containerd[1937]: 2026-03-14 00:15:47.014 [INFO][4755] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.133/32] ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" Namespace="calico-system" Pod="whisker-544d6dd76d-4rcl5" WorkloadEndpoint="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" Mar 14 00:15:47.153254 containerd[1937]: 2026-03-14 00:15:47.014 [INFO][4755] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali220f7bb61e5 ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" Namespace="calico-system" Pod="whisker-544d6dd76d-4rcl5" WorkloadEndpoint="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" Mar 14 00:15:47.153254 containerd[1937]: 2026-03-14 00:15:47.064 [INFO][4755] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" Namespace="calico-system" Pod="whisker-544d6dd76d-4rcl5" WorkloadEndpoint="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" Mar 14 00:15:47.153254 containerd[1937]: 2026-03-14 00:15:47.065 [INFO][4755] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" 
Namespace="calico-system" Pod="whisker-544d6dd76d-4rcl5" WorkloadEndpoint="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0", GenerateName:"whisker-544d6dd76d-", Namespace:"calico-system", SelfLink:"", UID:"d7880c76-182c-44f3-99e8-6a915d275ae2", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"544d6dd76d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33", Pod:"whisker-544d6dd76d-4rcl5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.25.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali220f7bb61e5", MAC:"82:1b:7e:04:d3:63", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:47.153254 containerd[1937]: 2026-03-14 00:15:47.124 [INFO][4755] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" Namespace="calico-system" Pod="whisker-544d6dd76d-4rcl5" WorkloadEndpoint="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" Mar 14 00:15:47.177466 
containerd[1937]: time="2026-03-14T00:15:47.176245480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:47.177466 containerd[1937]: time="2026-03-14T00:15:47.176476192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:47.177466 containerd[1937]: time="2026-03-14T00:15:47.176557744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:47.178323 containerd[1937]: time="2026-03-14T00:15:47.177772348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:47.284497 systemd-networkd[1851]: cali642f362b105: Link UP Mar 14 00:15:47.289008 systemd-networkd[1851]: cali9cc61126b64: Gained IPv6LL Mar 14 00:15:47.372217 containerd[1937]: time="2026-03-14T00:15:47.350707769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:47.372217 containerd[1937]: time="2026-03-14T00:15:47.351105653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:47.372217 containerd[1937]: time="2026-03-14T00:15:47.351199493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:47.372217 containerd[1937]: time="2026-03-14T00:15:47.353036525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:47.310742 systemd-networkd[1851]: cali642f362b105: Gained carrier Mar 14 00:15:47.369877 systemd[1]: Started cri-containerd-71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0.scope - libcontainer container 71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0. Mar 14 00:15:47.427263 systemd[1]: Started cri-containerd-3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16.scope - libcontainer container 3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16. Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:45.656 [ERROR][4751] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:45.804 [INFO][4751] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0 coredns-674b8bbfcf- kube-system 7983ca7e-7b32-4d4f-acd3-e05012673e7d 950 0 2026-03-14 00:14:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-130 coredns-674b8bbfcf-jkzng eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali642f362b105 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkzng" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-" Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:45.807 [INFO][4751] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-jkzng" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:46.407 [INFO][4851] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" HandleID="k8s-pod-network.71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:46.511 [INFO][4851] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" HandleID="k8s-pod-network.71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000392510), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-130", "pod":"coredns-674b8bbfcf-jkzng", "timestamp":"2026-03-14 00:15:46.407212348 +0000 UTC"}, Hostname:"ip-172-31-26-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000322000)} Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:46.511 [INFO][4851] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:47.001 [INFO][4851] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:47.002 [INFO][4851] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-130' Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:47.010 [INFO][4851] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" host="ip-172-31-26-130" Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:47.031 [INFO][4851] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-130" Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:47.081 [INFO][4851] ipam/ipam.go 526: Trying affinity for 192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:47.093 [INFO][4851] ipam/ipam.go 160: Attempting to load block cidr=192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:47.127 [INFO][4851] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:47.130 [INFO][4851] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.25.128/26 handle="k8s-pod-network.71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" host="ip-172-31-26-130" Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:47.142 [INFO][4851] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31 Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:47.160 [INFO][4851] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.25.128/26 handle="k8s-pod-network.71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" host="ip-172-31-26-130" Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:47.192 [INFO][4851] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.25.134/26] block=192.168.25.128/26 
handle="k8s-pod-network.71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" host="ip-172-31-26-130" Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:47.192 [INFO][4851] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.25.134/26] handle="k8s-pod-network.71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" host="ip-172-31-26-130" Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:47.197 [INFO][4851] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:47.442446 containerd[1937]: 2026-03-14 00:15:47.197 [INFO][4851] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.25.134/26] IPv6=[] ContainerID="71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" HandleID="k8s-pod-network.71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" Mar 14 00:15:47.449105 containerd[1937]: 2026-03-14 00:15:47.252 [INFO][4751] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkzng" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7983ca7e-7b32-4d4f-acd3-e05012673e7d", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"", Pod:"coredns-674b8bbfcf-jkzng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali642f362b105", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:47.449105 containerd[1937]: 2026-03-14 00:15:47.253 [INFO][4751] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.134/32] ContainerID="71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkzng" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" Mar 14 00:15:47.449105 containerd[1937]: 2026-03-14 00:15:47.253 [INFO][4751] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali642f362b105 ContainerID="71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkzng" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" Mar 14 00:15:47.449105 containerd[1937]: 2026-03-14 00:15:47.327 [INFO][4751] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-jkzng" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" Mar 14 00:15:47.449105 containerd[1937]: 2026-03-14 00:15:47.330 [INFO][4751] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkzng" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7983ca7e-7b32-4d4f-acd3-e05012673e7d", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31", Pod:"coredns-674b8bbfcf-jkzng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali642f362b105", MAC:"f6:48:29:10:89:97", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:47.449105 containerd[1937]: 2026-03-14 00:15:47.392 [INFO][4751] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31" Namespace="kube-system" Pod="coredns-674b8bbfcf-jkzng" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" Mar 14 00:15:47.521628 systemd-networkd[1851]: calib2c851d3f77: Link UP Mar 14 00:15:47.528667 systemd-networkd[1851]: calib2c851d3f77: Gained carrier Mar 14 00:15:47.582266 containerd[1937]: time="2026-03-14T00:15:47.581563470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-lxpml,Uid:7b232062-acf6-4e50-a0e3-33b7e15835a4,Namespace:calico-system,Attempt:1,} returns sandbox id \"160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309\"" Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:45.864 [ERROR][4803] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:45.992 [INFO][4803] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0 calico-apiserver-5cc67d498c- calico-system de037f31-e304-4774-8e09-1ec32c3e29bf 958 0 2026-03-14 00:15:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5cc67d498c projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-130 calico-apiserver-5cc67d498c-jsdnf eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib2c851d3f77 [] [] }} ContainerID="35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" Namespace="calico-system" Pod="calico-apiserver-5cc67d498c-jsdnf" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-" Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:45.993 [INFO][4803] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" Namespace="calico-system" Pod="calico-apiserver-5cc67d498c-jsdnf" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:46.481 [INFO][4882] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" HandleID="k8s-pod-network.35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:46.554 [INFO][4882] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" HandleID="k8s-pod-network.35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031bbc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-130", "pod":"calico-apiserver-5cc67d498c-jsdnf", "timestamp":"2026-03-14 00:15:46.481643225 +0000 UTC"}, Hostname:"ip-172-31-26-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40001866e0)} Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:46.555 [INFO][4882] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:47.198 [INFO][4882] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:47.199 [INFO][4882] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-130' Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:47.224 [INFO][4882] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" host="ip-172-31-26-130" Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:47.254 [INFO][4882] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-130" Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:47.339 [INFO][4882] ipam/ipam.go 526: Trying affinity for 192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:47.352 [INFO][4882] ipam/ipam.go 160: Attempting to load block cidr=192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:47.371 [INFO][4882] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:47.371 [INFO][4882] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.25.128/26 handle="k8s-pod-network.35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" host="ip-172-31-26-130" Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:47.393 [INFO][4882] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2 Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:47.410 [INFO][4882] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.25.128/26 handle="k8s-pod-network.35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" host="ip-172-31-26-130" Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:47.455 [INFO][4882] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.25.135/26] block=192.168.25.128/26 handle="k8s-pod-network.35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" host="ip-172-31-26-130" Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:47.455 [INFO][4882] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.25.135/26] handle="k8s-pod-network.35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" host="ip-172-31-26-130" Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:47.455 [INFO][4882] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 14 00:15:47.613535 containerd[1937]: 2026-03-14 00:15:47.455 [INFO][4882] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.25.135/26] IPv6=[] ContainerID="35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" HandleID="k8s-pod-network.35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" Mar 14 00:15:47.615277 containerd[1937]: 2026-03-14 00:15:47.488 [INFO][4803] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" Namespace="calico-system" Pod="calico-apiserver-5cc67d498c-jsdnf" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0", GenerateName:"calico-apiserver-5cc67d498c-", Namespace:"calico-system", SelfLink:"", UID:"de037f31-e304-4774-8e09-1ec32c3e29bf", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc67d498c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"", Pod:"calico-apiserver-5cc67d498c-jsdnf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.135/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib2c851d3f77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:47.615277 containerd[1937]: 2026-03-14 00:15:47.488 [INFO][4803] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.135/32] ContainerID="35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" Namespace="calico-system" Pod="calico-apiserver-5cc67d498c-jsdnf" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" Mar 14 00:15:47.615277 containerd[1937]: 2026-03-14 00:15:47.488 [INFO][4803] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2c851d3f77 ContainerID="35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" Namespace="calico-system" Pod="calico-apiserver-5cc67d498c-jsdnf" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" Mar 14 00:15:47.615277 containerd[1937]: 2026-03-14 00:15:47.542 [INFO][4803] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" Namespace="calico-system" Pod="calico-apiserver-5cc67d498c-jsdnf" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" Mar 14 00:15:47.615277 containerd[1937]: 2026-03-14 00:15:47.556 [INFO][4803] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" Namespace="calico-system" Pod="calico-apiserver-5cc67d498c-jsdnf" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0", GenerateName:"calico-apiserver-5cc67d498c-", Namespace:"calico-system", SelfLink:"", UID:"de037f31-e304-4774-8e09-1ec32c3e29bf", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc67d498c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2", Pod:"calico-apiserver-5cc67d498c-jsdnf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib2c851d3f77", MAC:"2e:e0:61:b5:f4:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:47.615277 containerd[1937]: 2026-03-14 00:15:47.599 [INFO][4803] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2" Namespace="calico-system" Pod="calico-apiserver-5cc67d498c-jsdnf" WorkloadEndpoint="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" Mar 14 00:15:47.672242 systemd-networkd[1851]: cali41be2aef46c: Link UP Mar 14 00:15:47.696905 systemd[1]: Started 
cri-containerd-0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33.scope - libcontainer container 0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33. Mar 14 00:15:47.701623 containerd[1937]: time="2026-03-14T00:15:47.692496787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:47.701623 containerd[1937]: time="2026-03-14T00:15:47.692609047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:47.701623 containerd[1937]: time="2026-03-14T00:15:47.692653519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:47.701623 containerd[1937]: time="2026-03-14T00:15:47.692835859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:47.713393 systemd-networkd[1851]: cali41be2aef46c: Gained carrier Mar 14 00:15:47.733309 systemd-networkd[1851]: cali4a239523e4c: Gained IPv6LL Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:45.945 [ERROR][4814] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:46.148 [INFO][4814] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0 coredns-674b8bbfcf- kube-system 8127331c-4b50-47c1-bbe1-89afe1cea98e 956 0 2026-03-14 00:14:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-130 
coredns-674b8bbfcf-nsj8c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali41be2aef46c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" Namespace="kube-system" Pod="coredns-674b8bbfcf-nsj8c" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-" Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:46.155 [INFO][4814] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" Namespace="kube-system" Pod="coredns-674b8bbfcf-nsj8c" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:46.585 [INFO][4896] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" HandleID="k8s-pod-network.814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:46.636 [INFO][4896] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" HandleID="k8s-pod-network.814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000352500), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-130", "pod":"coredns-674b8bbfcf-nsj8c", "timestamp":"2026-03-14 00:15:46.585367505 +0000 UTC"}, Hostname:"ip-172-31-26-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000cc840)} Mar 14 
00:15:47.801413 containerd[1937]: 2026-03-14 00:15:46.636 [INFO][4896] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:47.455 [INFO][4896] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:47.455 [INFO][4896] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-130' Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:47.476 [INFO][4896] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" host="ip-172-31-26-130" Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:47.523 [INFO][4896] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-26-130" Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:47.569 [INFO][4896] ipam/ipam.go 526: Trying affinity for 192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:47.578 [INFO][4896] ipam/ipam.go 160: Attempting to load block cidr=192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:47.606 [INFO][4896] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.25.128/26 host="ip-172-31-26-130" Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:47.607 [INFO][4896] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.25.128/26 handle="k8s-pod-network.814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" host="ip-172-31-26-130" Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:47.611 [INFO][4896] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:47.630 [INFO][4896] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.25.128/26 
handle="k8s-pod-network.814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" host="ip-172-31-26-130" Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:47.647 [INFO][4896] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.25.136/26] block=192.168.25.128/26 handle="k8s-pod-network.814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" host="ip-172-31-26-130" Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:47.647 [INFO][4896] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.25.136/26] handle="k8s-pod-network.814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" host="ip-172-31-26-130" Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:47.647 [INFO][4896] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:47.801413 containerd[1937]: 2026-03-14 00:15:47.647 [INFO][4896] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.25.136/26] IPv6=[] ContainerID="814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" HandleID="k8s-pod-network.814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" Mar 14 00:15:47.803975 containerd[1937]: 2026-03-14 00:15:47.660 [INFO][4814] cni-plugin/k8s.go 418: Populated endpoint ContainerID="814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" Namespace="kube-system" Pod="coredns-674b8bbfcf-nsj8c" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8127331c-4b50-47c1-bbe1-89afe1cea98e", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 56, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"", Pod:"coredns-674b8bbfcf-nsj8c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41be2aef46c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:47.803975 containerd[1937]: 2026-03-14 00:15:47.660 [INFO][4814] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.25.136/32] ContainerID="814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" Namespace="kube-system" Pod="coredns-674b8bbfcf-nsj8c" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" Mar 14 00:15:47.803975 containerd[1937]: 2026-03-14 00:15:47.660 [INFO][4814] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41be2aef46c ContainerID="814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" Namespace="kube-system" Pod="coredns-674b8bbfcf-nsj8c" 
WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" Mar 14 00:15:47.803975 containerd[1937]: 2026-03-14 00:15:47.752 [INFO][4814] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" Namespace="kube-system" Pod="coredns-674b8bbfcf-nsj8c" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" Mar 14 00:15:47.803975 containerd[1937]: 2026-03-14 00:15:47.754 [INFO][4814] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" Namespace="kube-system" Pod="coredns-674b8bbfcf-nsj8c" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8127331c-4b50-47c1-bbe1-89afe1cea98e", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f", Pod:"coredns-674b8bbfcf-nsj8c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41be2aef46c", MAC:"e6:47:50:b1:f4:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:47.803975 containerd[1937]: 2026-03-14 00:15:47.780 [INFO][4814] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f" Namespace="kube-system" Pod="coredns-674b8bbfcf-nsj8c" WorkloadEndpoint="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" Mar 14 00:15:47.808570 containerd[1937]: time="2026-03-14T00:15:47.805194535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:47.808570 containerd[1937]: time="2026-03-14T00:15:47.805519219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:47.808570 containerd[1937]: time="2026-03-14T00:15:47.805561963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:47.808570 containerd[1937]: time="2026-03-14T00:15:47.805759543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:47.877469 systemd[1]: Started cri-containerd-71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31.scope - libcontainer container 71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31. Mar 14 00:15:47.914287 systemd[1]: Started cri-containerd-35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2.scope - libcontainer container 35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2. Mar 14 00:15:47.939840 containerd[1937]: time="2026-03-14T00:15:47.939602516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:15:47.940741 containerd[1937]: time="2026-03-14T00:15:47.940484156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:15:47.943306 containerd[1937]: time="2026-03-14T00:15:47.941433344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:47.948061 containerd[1937]: time="2026-03-14T00:15:47.946701296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:15:48.015353 systemd[1]: Started cri-containerd-814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f.scope - libcontainer container 814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f. 
Mar 14 00:15:48.048397 containerd[1937]: time="2026-03-14T00:15:48.048190145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jkzng,Uid:7983ca7e-7b32-4d4f-acd3-e05012673e7d,Namespace:kube-system,Attempt:1,} returns sandbox id \"71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31\"" Mar 14 00:15:48.064533 containerd[1937]: time="2026-03-14T00:15:48.063806081Z" level=info msg="CreateContainer within sandbox \"71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:15:48.138167 containerd[1937]: time="2026-03-14T00:15:48.137838365Z" level=info msg="CreateContainer within sandbox \"71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3876ba2e6fdc15c8a370e5c110fa1d094e329a2a3f634a1a7e43a2eda30aaf97\"" Mar 14 00:15:48.140316 containerd[1937]: time="2026-03-14T00:15:48.140202089Z" level=info msg="StartContainer for \"3876ba2e6fdc15c8a370e5c110fa1d094e329a2a3f634a1a7e43a2eda30aaf97\"" Mar 14 00:15:48.263377 systemd[1]: Started cri-containerd-3876ba2e6fdc15c8a370e5c110fa1d094e329a2a3f634a1a7e43a2eda30aaf97.scope - libcontainer container 3876ba2e6fdc15c8a370e5c110fa1d094e329a2a3f634a1a7e43a2eda30aaf97. 
Mar 14 00:15:48.329203 containerd[1937]: time="2026-03-14T00:15:48.328232610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nsj8c,Uid:8127331c-4b50-47c1-bbe1-89afe1cea98e,Namespace:kube-system,Attempt:1,} returns sandbox id \"814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f\"" Mar 14 00:15:48.358678 containerd[1937]: time="2026-03-14T00:15:48.358315230Z" level=info msg="CreateContainer within sandbox \"814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:15:48.411052 containerd[1937]: time="2026-03-14T00:15:48.410981898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-544d6dd76d-4rcl5,Uid:d7880c76-182c-44f3-99e8-6a915d275ae2,Namespace:calico-system,Attempt:1,} returns sandbox id \"0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33\"" Mar 14 00:15:48.437218 systemd-networkd[1851]: cali9f633f38b65: Gained IPv6LL Mar 14 00:15:48.441264 containerd[1937]: time="2026-03-14T00:15:48.441204990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d5fd567b-lbrvk,Uid:7629e1e9-e956-4dbb-9bf3-396748a97bfb,Namespace:calico-system,Attempt:1,} returns sandbox id \"71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0\"" Mar 14 00:15:48.466510 containerd[1937]: time="2026-03-14T00:15:48.466190059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc67d498c-gnbqp,Uid:6669e0b2-65fc-448c-87e3-c79fbf1e2867,Namespace:calico-system,Attempt:1,} returns sandbox id \"3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16\"" Mar 14 00:15:48.481765 containerd[1937]: time="2026-03-14T00:15:48.481417663Z" level=info msg="CreateContainer within sandbox \"814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"22f8ca320d86f8807e831e671bdbabee28b5f39835ede743840f23dfce0d49d7\"" Mar 14 00:15:48.486321 containerd[1937]: time="2026-03-14T00:15:48.486064159Z" level=info msg="StartContainer for \"22f8ca320d86f8807e831e671bdbabee28b5f39835ede743840f23dfce0d49d7\"" Mar 14 00:15:48.505655 containerd[1937]: time="2026-03-14T00:15:48.503397151Z" level=info msg="StartContainer for \"3876ba2e6fdc15c8a370e5c110fa1d094e329a2a3f634a1a7e43a2eda30aaf97\" returns successfully" Mar 14 00:15:48.504298 systemd-networkd[1851]: cali12aa16a0f2b: Gained IPv6LL Mar 14 00:15:48.505310 systemd-networkd[1851]: cali642f362b105: Gained IPv6LL Mar 14 00:15:48.620279 systemd[1]: Started cri-containerd-22f8ca320d86f8807e831e671bdbabee28b5f39835ede743840f23dfce0d49d7.scope - libcontainer container 22f8ca320d86f8807e831e671bdbabee28b5f39835ede743840f23dfce0d49d7. Mar 14 00:15:48.629280 systemd-networkd[1851]: calib2c851d3f77: Gained IPv6LL Mar 14 00:15:48.739478 containerd[1937]: time="2026-03-14T00:15:48.739295492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5cc67d498c-jsdnf,Uid:de037f31-e304-4774-8e09-1ec32c3e29bf,Namespace:calico-system,Attempt:1,} returns sandbox id \"35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2\"" Mar 14 00:15:48.768042 containerd[1937]: time="2026-03-14T00:15:48.767986580Z" level=info msg="StartContainer for \"22f8ca320d86f8807e831e671bdbabee28b5f39835ede743840f23dfce0d49d7\" returns successfully" Mar 14 00:15:48.926734 kubelet[3148]: I0314 00:15:48.925856 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jkzng" podStartSLOduration=52.925834089 podStartE2EDuration="52.925834089s" podCreationTimestamp="2026-03-14 00:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:48.870606561 +0000 UTC m=+56.975010044" watchObservedRunningTime="2026-03-14 00:15:48.925834089 +0000 
UTC m=+57.030237524" Mar 14 00:15:49.078143 systemd-networkd[1851]: cali220f7bb61e5: Gained IPv6LL Mar 14 00:15:49.113281 systemd[1]: run-containerd-runc-k8s.io-3876ba2e6fdc15c8a370e5c110fa1d094e329a2a3f634a1a7e43a2eda30aaf97-runc.a7qCI7.mount: Deactivated successfully. Mar 14 00:15:49.270829 systemd-networkd[1851]: cali41be2aef46c: Gained IPv6LL Mar 14 00:15:49.610094 kernel: calico-node[5043]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 14 00:15:49.976765 kubelet[3148]: I0314 00:15:49.976643 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nsj8c" podStartSLOduration=53.976612354 podStartE2EDuration="53.976612354s" podCreationTimestamp="2026-03-14 00:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:15:48.995328165 +0000 UTC m=+57.099731624" watchObservedRunningTime="2026-03-14 00:15:49.976612354 +0000 UTC m=+58.081015777" Mar 14 00:15:50.141971 containerd[1937]: time="2026-03-14T00:15:50.140374291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:50.142829 containerd[1937]: time="2026-03-14T00:15:50.142779667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 14 00:15:50.146393 containerd[1937]: time="2026-03-14T00:15:50.146331271Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:50.153190 containerd[1937]: time="2026-03-14T00:15:50.153134227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:50.155011 
containerd[1937]: time="2026-03-14T00:15:50.154925179Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 3.113974371s" Mar 14 00:15:50.155230 containerd[1937]: time="2026-03-14T00:15:50.155198287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 14 00:15:50.179545 containerd[1937]: time="2026-03-14T00:15:50.179480539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 14 00:15:50.188642 containerd[1937]: time="2026-03-14T00:15:50.188424091Z" level=info msg="CreateContainer within sandbox \"5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 14 00:15:50.282519 containerd[1937]: time="2026-03-14T00:15:50.282352760Z" level=info msg="CreateContainer within sandbox \"5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0e1bdcd5be911b5027cb6d5b83cc3da0de4935c7b0c84c74624aa6477c973959\"" Mar 14 00:15:50.283584 containerd[1937]: time="2026-03-14T00:15:50.283520312Z" level=info msg="StartContainer for \"0e1bdcd5be911b5027cb6d5b83cc3da0de4935c7b0c84c74624aa6477c973959\"" Mar 14 00:15:50.447264 systemd[1]: Started cri-containerd-0e1bdcd5be911b5027cb6d5b83cc3da0de4935c7b0c84c74624aa6477c973959.scope - libcontainer container 0e1bdcd5be911b5027cb6d5b83cc3da0de4935c7b0c84c74624aa6477c973959. 
Mar 14 00:15:50.519124 containerd[1937]: time="2026-03-14T00:15:50.517792713Z" level=info msg="StartContainer for \"0e1bdcd5be911b5027cb6d5b83cc3da0de4935c7b0c84c74624aa6477c973959\" returns successfully" Mar 14 00:15:50.803152 systemd-networkd[1851]: vxlan.calico: Link UP Mar 14 00:15:50.803177 systemd-networkd[1851]: vxlan.calico: Gained carrier Mar 14 00:15:51.564608 systemd[1]: Started sshd@8-172.31.26.130:22-68.220.241.50:35392.service - OpenSSH per-connection server daemon (68.220.241.50:35392). Mar 14 00:15:52.107424 containerd[1937]: time="2026-03-14T00:15:52.107319249Z" level=info msg="StopPodSandbox for \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\"" Mar 14 00:15:52.116065 sshd[5639]: Accepted publickey for core from 68.220.241.50 port 35392 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:15:52.123804 sshd[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:52.138979 systemd-logind[1911]: New session 9 of user core. Mar 14 00:15:52.146247 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 14 00:15:52.506239 containerd[1937]: 2026-03-14 00:15:52.263 [WARNING][5655] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0", GenerateName:"whisker-544d6dd76d-", Namespace:"calico-system", SelfLink:"", UID:"d7880c76-182c-44f3-99e8-6a915d275ae2", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"544d6dd76d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33", Pod:"whisker-544d6dd76d-4rcl5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.25.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali220f7bb61e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:52.506239 containerd[1937]: 2026-03-14 00:15:52.263 [INFO][5655] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Mar 14 00:15:52.506239 containerd[1937]: 2026-03-14 00:15:52.263 [INFO][5655] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" iface="eth0" netns="" Mar 14 00:15:52.506239 containerd[1937]: 2026-03-14 00:15:52.263 [INFO][5655] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Mar 14 00:15:52.506239 containerd[1937]: 2026-03-14 00:15:52.263 [INFO][5655] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Mar 14 00:15:52.506239 containerd[1937]: 2026-03-14 00:15:52.454 [INFO][5666] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" HandleID="k8s-pod-network.10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Workload="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" Mar 14 00:15:52.506239 containerd[1937]: 2026-03-14 00:15:52.455 [INFO][5666] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:52.506239 containerd[1937]: 2026-03-14 00:15:52.455 [INFO][5666] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:52.506239 containerd[1937]: 2026-03-14 00:15:52.474 [WARNING][5666] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" HandleID="k8s-pod-network.10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Workload="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" Mar 14 00:15:52.506239 containerd[1937]: 2026-03-14 00:15:52.474 [INFO][5666] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" HandleID="k8s-pod-network.10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Workload="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" Mar 14 00:15:52.506239 containerd[1937]: 2026-03-14 00:15:52.482 [INFO][5666] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:52.506239 containerd[1937]: 2026-03-14 00:15:52.493 [INFO][5655] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Mar 14 00:15:52.508350 containerd[1937]: time="2026-03-14T00:15:52.506311235Z" level=info msg="TearDown network for sandbox \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\" successfully" Mar 14 00:15:52.508350 containerd[1937]: time="2026-03-14T00:15:52.506352023Z" level=info msg="StopPodSandbox for \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\" returns successfully" Mar 14 00:15:52.513780 containerd[1937]: time="2026-03-14T00:15:52.512018879Z" level=info msg="RemovePodSandbox for \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\"" Mar 14 00:15:52.513780 containerd[1937]: time="2026-03-14T00:15:52.512101571Z" level=info msg="Forcibly stopping sandbox \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\"" Mar 14 00:15:52.674357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3291540610.mount: Deactivated successfully. 
Mar 14 00:15:52.701925 sshd[5639]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:52.713653 systemd[1]: session-9.scope: Deactivated successfully. Mar 14 00:15:52.715433 systemd[1]: sshd@8-172.31.26.130:22-68.220.241.50:35392.service: Deactivated successfully. Mar 14 00:15:52.731712 systemd-logind[1911]: Session 9 logged out. Waiting for processes to exit. Mar 14 00:15:52.739352 systemd-logind[1911]: Removed session 9. Mar 14 00:15:52.791279 systemd-networkd[1851]: vxlan.calico: Gained IPv6LL Mar 14 00:15:52.909318 containerd[1937]: 2026-03-14 00:15:52.697 [WARNING][5688] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0", GenerateName:"whisker-544d6dd76d-", Namespace:"calico-system", SelfLink:"", UID:"d7880c76-182c-44f3-99e8-6a915d275ae2", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"544d6dd76d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33", Pod:"whisker-544d6dd76d-4rcl5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.25.133/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali220f7bb61e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:52.909318 containerd[1937]: 2026-03-14 00:15:52.698 [INFO][5688] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Mar 14 00:15:52.909318 containerd[1937]: 2026-03-14 00:15:52.698 [INFO][5688] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" iface="eth0" netns="" Mar 14 00:15:52.909318 containerd[1937]: 2026-03-14 00:15:52.698 [INFO][5688] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Mar 14 00:15:52.909318 containerd[1937]: 2026-03-14 00:15:52.698 [INFO][5688] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Mar 14 00:15:52.909318 containerd[1937]: 2026-03-14 00:15:52.868 [INFO][5696] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" HandleID="k8s-pod-network.10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Workload="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" Mar 14 00:15:52.909318 containerd[1937]: 2026-03-14 00:15:52.869 [INFO][5696] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:52.909318 containerd[1937]: 2026-03-14 00:15:52.870 [INFO][5696] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:52.909318 containerd[1937]: 2026-03-14 00:15:52.891 [WARNING][5696] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" HandleID="k8s-pod-network.10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Workload="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" Mar 14 00:15:52.909318 containerd[1937]: 2026-03-14 00:15:52.891 [INFO][5696] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" HandleID="k8s-pod-network.10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Workload="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0" Mar 14 00:15:52.909318 containerd[1937]: 2026-03-14 00:15:52.894 [INFO][5696] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:52.909318 containerd[1937]: 2026-03-14 00:15:52.900 [INFO][5688] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237" Mar 14 00:15:52.909318 containerd[1937]: time="2026-03-14T00:15:52.909083629Z" level=info msg="TearDown network for sandbox \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\" successfully" Mar 14 00:15:52.918828 containerd[1937]: time="2026-03-14T00:15:52.918250957Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:52.918828 containerd[1937]: time="2026-03-14T00:15:52.918355105Z" level=info msg="RemovePodSandbox \"10a346f6434ebb1f1565ff53830c1a7759043eef5b8ba26d38fe010e4f322237\" returns successfully" Mar 14 00:15:52.919901 containerd[1937]: time="2026-03-14T00:15:52.919254169Z" level=info msg="StopPodSandbox for \"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\"" Mar 14 00:15:53.123371 containerd[1937]: 2026-03-14 00:15:53.024 [WARNING][5722] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0", GenerateName:"calico-kube-controllers-75d5fd567b-", Namespace:"calico-system", SelfLink:"", UID:"7629e1e9-e956-4dbb-9bf3-396748a97bfb", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75d5fd567b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0", Pod:"calico-kube-controllers-75d5fd567b-lbrvk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f633f38b65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:53.123371 containerd[1937]: 2026-03-14 00:15:53.025 [INFO][5722] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Mar 14 00:15:53.123371 containerd[1937]: 2026-03-14 00:15:53.025 [INFO][5722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" iface="eth0" netns="" Mar 14 00:15:53.123371 containerd[1937]: 2026-03-14 00:15:53.025 [INFO][5722] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Mar 14 00:15:53.123371 containerd[1937]: 2026-03-14 00:15:53.025 [INFO][5722] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Mar 14 00:15:53.123371 containerd[1937]: 2026-03-14 00:15:53.084 [INFO][5731] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" HandleID="k8s-pod-network.efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Workload="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" Mar 14 00:15:53.123371 containerd[1937]: 2026-03-14 00:15:53.084 [INFO][5731] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:53.123371 containerd[1937]: 2026-03-14 00:15:53.084 [INFO][5731] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:53.123371 containerd[1937]: 2026-03-14 00:15:53.103 [WARNING][5731] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" HandleID="k8s-pod-network.efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Workload="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" Mar 14 00:15:53.123371 containerd[1937]: 2026-03-14 00:15:53.103 [INFO][5731] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" HandleID="k8s-pod-network.efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Workload="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" Mar 14 00:15:53.123371 containerd[1937]: 2026-03-14 00:15:53.107 [INFO][5731] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:53.123371 containerd[1937]: 2026-03-14 00:15:53.115 [INFO][5722] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Mar 14 00:15:53.123371 containerd[1937]: time="2026-03-14T00:15:53.123089302Z" level=info msg="TearDown network for sandbox \"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\" successfully" Mar 14 00:15:53.123371 containerd[1937]: time="2026-03-14T00:15:53.123125734Z" level=info msg="StopPodSandbox for \"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\" returns successfully" Mar 14 00:15:53.126638 containerd[1937]: time="2026-03-14T00:15:53.125232886Z" level=info msg="RemovePodSandbox for \"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\"" Mar 14 00:15:53.126638 containerd[1937]: time="2026-03-14T00:15:53.125286106Z" level=info msg="Forcibly stopping sandbox \"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\"" Mar 14 00:15:53.348033 containerd[1937]: 2026-03-14 00:15:53.225 [WARNING][5746] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0", GenerateName:"calico-kube-controllers-75d5fd567b-", Namespace:"calico-system", SelfLink:"", UID:"7629e1e9-e956-4dbb-9bf3-396748a97bfb", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75d5fd567b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0", Pod:"calico-kube-controllers-75d5fd567b-lbrvk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.25.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f633f38b65", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:53.348033 containerd[1937]: 2026-03-14 00:15:53.226 [INFO][5746] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Mar 14 00:15:53.348033 containerd[1937]: 2026-03-14 00:15:53.226 [INFO][5746] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" iface="eth0" netns="" Mar 14 00:15:53.348033 containerd[1937]: 2026-03-14 00:15:53.227 [INFO][5746] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Mar 14 00:15:53.348033 containerd[1937]: 2026-03-14 00:15:53.227 [INFO][5746] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Mar 14 00:15:53.348033 containerd[1937]: 2026-03-14 00:15:53.305 [INFO][5754] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" HandleID="k8s-pod-network.efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Workload="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" Mar 14 00:15:53.348033 containerd[1937]: 2026-03-14 00:15:53.308 [INFO][5754] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:53.348033 containerd[1937]: 2026-03-14 00:15:53.308 [INFO][5754] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:53.348033 containerd[1937]: 2026-03-14 00:15:53.333 [WARNING][5754] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" HandleID="k8s-pod-network.efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Workload="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" Mar 14 00:15:53.348033 containerd[1937]: 2026-03-14 00:15:53.334 [INFO][5754] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" HandleID="k8s-pod-network.efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Workload="ip--172--31--26--130-k8s-calico--kube--controllers--75d5fd567b--lbrvk-eth0" Mar 14 00:15:53.348033 containerd[1937]: 2026-03-14 00:15:53.337 [INFO][5754] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:53.348033 containerd[1937]: 2026-03-14 00:15:53.342 [INFO][5746] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e" Mar 14 00:15:53.350975 containerd[1937]: time="2026-03-14T00:15:53.348994211Z" level=info msg="TearDown network for sandbox \"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\" successfully" Mar 14 00:15:53.353609 containerd[1937]: time="2026-03-14T00:15:53.353551655Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:53.354709 containerd[1937]: time="2026-03-14T00:15:53.354624959Z" level=info msg="RemovePodSandbox \"efa00cdc7494fe8a016e9cb31e6ee087e72818fbc00ecce10cb90555c4062f9e\" returns successfully" Mar 14 00:15:53.355872 containerd[1937]: time="2026-03-14T00:15:53.355824419Z" level=info msg="StopPodSandbox for \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\"" Mar 14 00:15:53.616194 containerd[1937]: 2026-03-14 00:15:53.467 [WARNING][5769] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8bfac06b-f0bb-4f88-a72c-e23a86afafd1", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6", Pod:"csi-node-driver-s9wlx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9cc61126b64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:53.616194 containerd[1937]: 2026-03-14 00:15:53.469 [INFO][5769] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Mar 14 00:15:53.616194 containerd[1937]: 2026-03-14 00:15:53.469 [INFO][5769] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" iface="eth0" netns="" Mar 14 00:15:53.616194 containerd[1937]: 2026-03-14 00:15:53.469 [INFO][5769] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Mar 14 00:15:53.616194 containerd[1937]: 2026-03-14 00:15:53.469 [INFO][5769] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Mar 14 00:15:53.616194 containerd[1937]: 2026-03-14 00:15:53.565 [INFO][5776] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" HandleID="k8s-pod-network.785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Workload="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" Mar 14 00:15:53.616194 containerd[1937]: 2026-03-14 00:15:53.565 [INFO][5776] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:53.616194 containerd[1937]: 2026-03-14 00:15:53.565 [INFO][5776] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:53.616194 containerd[1937]: 2026-03-14 00:15:53.589 [WARNING][5776] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" HandleID="k8s-pod-network.785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Workload="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" Mar 14 00:15:53.616194 containerd[1937]: 2026-03-14 00:15:53.589 [INFO][5776] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" HandleID="k8s-pod-network.785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Workload="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" Mar 14 00:15:53.616194 containerd[1937]: 2026-03-14 00:15:53.594 [INFO][5776] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:53.616194 containerd[1937]: 2026-03-14 00:15:53.607 [INFO][5769] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Mar 14 00:15:53.617180 containerd[1937]: time="2026-03-14T00:15:53.616515108Z" level=info msg="TearDown network for sandbox \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\" successfully" Mar 14 00:15:53.617180 containerd[1937]: time="2026-03-14T00:15:53.616554456Z" level=info msg="StopPodSandbox for \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\" returns successfully" Mar 14 00:15:53.618822 containerd[1937]: time="2026-03-14T00:15:53.618755364Z" level=info msg="RemovePodSandbox for \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\"" Mar 14 00:15:53.619035 containerd[1937]: time="2026-03-14T00:15:53.618823944Z" level=info msg="Forcibly stopping sandbox \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\"" Mar 14 00:15:53.822587 containerd[1937]: time="2026-03-14T00:15:53.822515041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:53.825490 
containerd[1937]: time="2026-03-14T00:15:53.825433285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 14 00:15:53.827201 containerd[1937]: time="2026-03-14T00:15:53.827129605Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:53.836154 containerd[1937]: time="2026-03-14T00:15:53.836096797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:53.839547 containerd[1937]: time="2026-03-14T00:15:53.839480593Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 3.65992299s" Mar 14 00:15:53.839770 containerd[1937]: time="2026-03-14T00:15:53.839739625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 14 00:15:53.845220 containerd[1937]: time="2026-03-14T00:15:53.845027833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 14 00:15:53.859848 containerd[1937]: time="2026-03-14T00:15:53.859766797Z" level=info msg="CreateContainer within sandbox \"160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 14 00:15:53.897266 containerd[1937]: 2026-03-14 00:15:53.765 [WARNING][5790] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete 
WEP. ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8bfac06b-f0bb-4f88-a72c-e23a86afafd1", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6", Pod:"csi-node-driver-s9wlx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.25.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9cc61126b64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:53.897266 containerd[1937]: 2026-03-14 00:15:53.769 [INFO][5790] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Mar 14 00:15:53.897266 containerd[1937]: 2026-03-14 00:15:53.769 [INFO][5790] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" iface="eth0" netns="" Mar 14 00:15:53.897266 containerd[1937]: 2026-03-14 00:15:53.769 [INFO][5790] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Mar 14 00:15:53.897266 containerd[1937]: 2026-03-14 00:15:53.769 [INFO][5790] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Mar 14 00:15:53.897266 containerd[1937]: 2026-03-14 00:15:53.857 [INFO][5797] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" HandleID="k8s-pod-network.785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Workload="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" Mar 14 00:15:53.897266 containerd[1937]: 2026-03-14 00:15:53.859 [INFO][5797] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:53.897266 containerd[1937]: 2026-03-14 00:15:53.859 [INFO][5797] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:53.897266 containerd[1937]: 2026-03-14 00:15:53.879 [WARNING][5797] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" HandleID="k8s-pod-network.785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Workload="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" Mar 14 00:15:53.897266 containerd[1937]: 2026-03-14 00:15:53.879 [INFO][5797] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" HandleID="k8s-pod-network.785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Workload="ip--172--31--26--130-k8s-csi--node--driver--s9wlx-eth0" Mar 14 00:15:53.897266 containerd[1937]: 2026-03-14 00:15:53.882 [INFO][5797] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:53.897266 containerd[1937]: 2026-03-14 00:15:53.888 [INFO][5790] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e" Mar 14 00:15:53.899883 containerd[1937]: time="2026-03-14T00:15:53.898808174Z" level=info msg="TearDown network for sandbox \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\" successfully" Mar 14 00:15:53.909267 containerd[1937]: time="2026-03-14T00:15:53.906288950Z" level=info msg="CreateContainer within sandbox \"160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"dabb7a9deacfaec6b6ff07cf24e3c37e70709c9820e314cc0d2d8b909a154ece\"" Mar 14 00:15:53.917007 containerd[1937]: time="2026-03-14T00:15:53.916478234Z" level=info msg="StartContainer for \"dabb7a9deacfaec6b6ff07cf24e3c37e70709c9820e314cc0d2d8b909a154ece\"" Mar 14 00:15:53.927126 containerd[1937]: time="2026-03-14T00:15:53.927047870Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Mar 14 00:15:53.927363 containerd[1937]: time="2026-03-14T00:15:53.927328370Z" level=info msg="RemovePodSandbox \"785da1bc534eeb9f6e98fe5c517fd015db4a04ca61926e2e84e0969fba65bf4e\" returns successfully" Mar 14 00:15:53.930448 containerd[1937]: time="2026-03-14T00:15:53.930367682Z" level=info msg="StopPodSandbox for \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\"" Mar 14 00:15:54.001311 systemd[1]: Started cri-containerd-dabb7a9deacfaec6b6ff07cf24e3c37e70709c9820e314cc0d2d8b909a154ece.scope - libcontainer container dabb7a9deacfaec6b6ff07cf24e3c37e70709c9820e314cc0d2d8b909a154ece. Mar 14 00:15:54.145997 containerd[1937]: time="2026-03-14T00:15:54.145799531Z" level=info msg="StartContainer for \"dabb7a9deacfaec6b6ff07cf24e3c37e70709c9820e314cc0d2d8b909a154ece\" returns successfully" Mar 14 00:15:54.178444 containerd[1937]: 2026-03-14 00:15:54.059 [WARNING][5828] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7983ca7e-7b32-4d4f-acd3-e05012673e7d", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31", Pod:"coredns-674b8bbfcf-jkzng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali642f362b105", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:54.178444 containerd[1937]: 2026-03-14 00:15:54.059 
[INFO][5828] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Mar 14 00:15:54.178444 containerd[1937]: 2026-03-14 00:15:54.059 [INFO][5828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" iface="eth0" netns="" Mar 14 00:15:54.178444 containerd[1937]: 2026-03-14 00:15:54.059 [INFO][5828] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Mar 14 00:15:54.178444 containerd[1937]: 2026-03-14 00:15:54.059 [INFO][5828] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Mar 14 00:15:54.178444 containerd[1937]: 2026-03-14 00:15:54.150 [INFO][5847] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" HandleID="k8s-pod-network.7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" Mar 14 00:15:54.178444 containerd[1937]: 2026-03-14 00:15:54.151 [INFO][5847] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:54.178444 containerd[1937]: 2026-03-14 00:15:54.151 [INFO][5847] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:54.178444 containerd[1937]: 2026-03-14 00:15:54.166 [WARNING][5847] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" HandleID="k8s-pod-network.7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" Mar 14 00:15:54.178444 containerd[1937]: 2026-03-14 00:15:54.166 [INFO][5847] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" HandleID="k8s-pod-network.7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" Mar 14 00:15:54.178444 containerd[1937]: 2026-03-14 00:15:54.170 [INFO][5847] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:54.178444 containerd[1937]: 2026-03-14 00:15:54.173 [INFO][5828] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Mar 14 00:15:54.178444 containerd[1937]: time="2026-03-14T00:15:54.178359083Z" level=info msg="TearDown network for sandbox \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\" successfully" Mar 14 00:15:54.178444 containerd[1937]: time="2026-03-14T00:15:54.178402487Z" level=info msg="StopPodSandbox for \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\" returns successfully" Mar 14 00:15:54.181870 containerd[1937]: time="2026-03-14T00:15:54.179771399Z" level=info msg="RemovePodSandbox for \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\"" Mar 14 00:15:54.181870 containerd[1937]: time="2026-03-14T00:15:54.180070547Z" level=info msg="Forcibly stopping sandbox \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\"" Mar 14 00:15:54.392494 containerd[1937]: 2026-03-14 00:15:54.310 [WARNING][5871] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7983ca7e-7b32-4d4f-acd3-e05012673e7d", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"71f09e6a973d537137e2f7832fe7f0941502fa8d5247a9af8ab6b1957f7b6f31", Pod:"coredns-674b8bbfcf-jkzng", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali642f362b105", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:54.392494 containerd[1937]: 2026-03-14 00:15:54.312 
[INFO][5871] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Mar 14 00:15:54.392494 containerd[1937]: 2026-03-14 00:15:54.312 [INFO][5871] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" iface="eth0" netns="" Mar 14 00:15:54.392494 containerd[1937]: 2026-03-14 00:15:54.312 [INFO][5871] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Mar 14 00:15:54.392494 containerd[1937]: 2026-03-14 00:15:54.312 [INFO][5871] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Mar 14 00:15:54.392494 containerd[1937]: 2026-03-14 00:15:54.368 [INFO][5881] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" HandleID="k8s-pod-network.7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" Mar 14 00:15:54.392494 containerd[1937]: 2026-03-14 00:15:54.368 [INFO][5881] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:54.392494 containerd[1937]: 2026-03-14 00:15:54.368 [INFO][5881] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:54.392494 containerd[1937]: 2026-03-14 00:15:54.382 [WARNING][5881] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" HandleID="k8s-pod-network.7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" Mar 14 00:15:54.392494 containerd[1937]: 2026-03-14 00:15:54.382 [INFO][5881] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" HandleID="k8s-pod-network.7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--jkzng-eth0" Mar 14 00:15:54.392494 containerd[1937]: 2026-03-14 00:15:54.384 [INFO][5881] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:54.392494 containerd[1937]: 2026-03-14 00:15:54.388 [INFO][5871] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5" Mar 14 00:15:54.393537 containerd[1937]: time="2026-03-14T00:15:54.392545668Z" level=info msg="TearDown network for sandbox \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\" successfully" Mar 14 00:15:54.398757 containerd[1937]: time="2026-03-14T00:15:54.398683200Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:54.398927 containerd[1937]: time="2026-03-14T00:15:54.398795520Z" level=info msg="RemovePodSandbox \"7e8399f6475ea1c00ffe75608c67ca322dc96d92e46effb8d6ed14681caaeda5\" returns successfully" Mar 14 00:15:54.400062 containerd[1937]: time="2026-03-14T00:15:54.400007280Z" level=info msg="StopPodSandbox for \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\"" Mar 14 00:15:54.548512 containerd[1937]: 2026-03-14 00:15:54.476 [WARNING][5895] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0", GenerateName:"calico-apiserver-5cc67d498c-", Namespace:"calico-system", SelfLink:"", UID:"de037f31-e304-4774-8e09-1ec32c3e29bf", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc67d498c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2", Pod:"calico-apiserver-5cc67d498c-jsdnf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib2c851d3f77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:54.548512 containerd[1937]: 2026-03-14 00:15:54.476 [INFO][5895] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Mar 14 00:15:54.548512 containerd[1937]: 2026-03-14 00:15:54.476 [INFO][5895] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" iface="eth0" netns="" Mar 14 00:15:54.548512 containerd[1937]: 2026-03-14 00:15:54.476 [INFO][5895] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Mar 14 00:15:54.548512 containerd[1937]: 2026-03-14 00:15:54.476 [INFO][5895] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Mar 14 00:15:54.548512 containerd[1937]: 2026-03-14 00:15:54.519 [INFO][5903] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" HandleID="k8s-pod-network.b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" Mar 14 00:15:54.548512 containerd[1937]: 2026-03-14 00:15:54.519 [INFO][5903] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:54.548512 containerd[1937]: 2026-03-14 00:15:54.519 [INFO][5903] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:54.548512 containerd[1937]: 2026-03-14 00:15:54.535 [WARNING][5903] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" HandleID="k8s-pod-network.b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" Mar 14 00:15:54.548512 containerd[1937]: 2026-03-14 00:15:54.535 [INFO][5903] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" HandleID="k8s-pod-network.b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" Mar 14 00:15:54.548512 containerd[1937]: 2026-03-14 00:15:54.540 [INFO][5903] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:54.548512 containerd[1937]: 2026-03-14 00:15:54.545 [INFO][5895] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Mar 14 00:15:54.549564 containerd[1937]: time="2026-03-14T00:15:54.548651977Z" level=info msg="TearDown network for sandbox \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\" successfully" Mar 14 00:15:54.549564 containerd[1937]: time="2026-03-14T00:15:54.548693557Z" level=info msg="StopPodSandbox for \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\" returns successfully" Mar 14 00:15:54.550095 containerd[1937]: time="2026-03-14T00:15:54.549926029Z" level=info msg="RemovePodSandbox for \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\"" Mar 14 00:15:54.550192 containerd[1937]: time="2026-03-14T00:15:54.550106989Z" level=info msg="Forcibly stopping sandbox \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\"" Mar 14 00:15:54.694074 containerd[1937]: 2026-03-14 00:15:54.620 [WARNING][5917] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0", GenerateName:"calico-apiserver-5cc67d498c-", Namespace:"calico-system", SelfLink:"", UID:"de037f31-e304-4774-8e09-1ec32c3e29bf", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc67d498c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2", Pod:"calico-apiserver-5cc67d498c-jsdnf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib2c851d3f77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:54.694074 containerd[1937]: 2026-03-14 00:15:54.621 [INFO][5917] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Mar 14 00:15:54.694074 containerd[1937]: 2026-03-14 00:15:54.621 [INFO][5917] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" iface="eth0" netns="" Mar 14 00:15:54.694074 containerd[1937]: 2026-03-14 00:15:54.621 [INFO][5917] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Mar 14 00:15:54.694074 containerd[1937]: 2026-03-14 00:15:54.621 [INFO][5917] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Mar 14 00:15:54.694074 containerd[1937]: 2026-03-14 00:15:54.669 [INFO][5924] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" HandleID="k8s-pod-network.b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" Mar 14 00:15:54.694074 containerd[1937]: 2026-03-14 00:15:54.670 [INFO][5924] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:54.694074 containerd[1937]: 2026-03-14 00:15:54.670 [INFO][5924] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:54.694074 containerd[1937]: 2026-03-14 00:15:54.684 [WARNING][5924] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" HandleID="k8s-pod-network.b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" Mar 14 00:15:54.694074 containerd[1937]: 2026-03-14 00:15:54.684 [INFO][5924] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" HandleID="k8s-pod-network.b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--jsdnf-eth0" Mar 14 00:15:54.694074 containerd[1937]: 2026-03-14 00:15:54.687 [INFO][5924] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:54.694074 containerd[1937]: 2026-03-14 00:15:54.690 [INFO][5917] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a" Mar 14 00:15:54.694986 containerd[1937]: time="2026-03-14T00:15:54.694142474Z" level=info msg="TearDown network for sandbox \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\" successfully" Mar 14 00:15:54.700104 containerd[1937]: time="2026-03-14T00:15:54.699827930Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:54.700104 containerd[1937]: time="2026-03-14T00:15:54.700000046Z" level=info msg="RemovePodSandbox \"b41f544578fb3027d4c1a6208245070323c46b597eacc20d0e79cdb1193a871a\" returns successfully" Mar 14 00:15:54.701434 containerd[1937]: time="2026-03-14T00:15:54.701341346Z" level=info msg="StopPodSandbox for \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\"" Mar 14 00:15:54.885194 containerd[1937]: 2026-03-14 00:15:54.793 [WARNING][5938] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"7b232062-acf6-4e50-a0e3-33b7e15835a4", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309", Pod:"goldmane-5b85766d88-lxpml", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali4a239523e4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:54.885194 containerd[1937]: 2026-03-14 00:15:54.795 [INFO][5938] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Mar 14 00:15:54.885194 containerd[1937]: 2026-03-14 00:15:54.795 [INFO][5938] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" iface="eth0" netns="" Mar 14 00:15:54.885194 containerd[1937]: 2026-03-14 00:15:54.797 [INFO][5938] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Mar 14 00:15:54.885194 containerd[1937]: 2026-03-14 00:15:54.797 [INFO][5938] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Mar 14 00:15:54.885194 containerd[1937]: 2026-03-14 00:15:54.858 [INFO][5945] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" HandleID="k8s-pod-network.6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Workload="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" Mar 14 00:15:54.885194 containerd[1937]: 2026-03-14 00:15:54.859 [INFO][5945] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:54.885194 containerd[1937]: 2026-03-14 00:15:54.859 [INFO][5945] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:54.885194 containerd[1937]: 2026-03-14 00:15:54.875 [WARNING][5945] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" HandleID="k8s-pod-network.6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Workload="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" Mar 14 00:15:54.885194 containerd[1937]: 2026-03-14 00:15:54.875 [INFO][5945] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" HandleID="k8s-pod-network.6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Workload="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" Mar 14 00:15:54.885194 containerd[1937]: 2026-03-14 00:15:54.877 [INFO][5945] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:54.885194 containerd[1937]: 2026-03-14 00:15:54.881 [INFO][5938] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Mar 14 00:15:54.886644 containerd[1937]: time="2026-03-14T00:15:54.884928110Z" level=info msg="TearDown network for sandbox \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\" successfully" Mar 14 00:15:54.886644 containerd[1937]: time="2026-03-14T00:15:54.885803378Z" level=info msg="StopPodSandbox for \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\" returns successfully" Mar 14 00:15:54.886773 containerd[1937]: time="2026-03-14T00:15:54.886664018Z" level=info msg="RemovePodSandbox for \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\"" Mar 14 00:15:54.886773 containerd[1937]: time="2026-03-14T00:15:54.886719422Z" level=info msg="Forcibly stopping sandbox \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\"" Mar 14 00:15:55.140621 containerd[1937]: 2026-03-14 00:15:54.998 [WARNING][5959] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"7b232062-acf6-4e50-a0e3-33b7e15835a4", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"160abf700d9df5e2b1cb444326fde6d3abdd3afa63698c00f03672f9460a0309", Pod:"goldmane-5b85766d88-lxpml", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.25.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a239523e4c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:55.140621 containerd[1937]: 2026-03-14 00:15:54.998 [INFO][5959] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Mar 14 00:15:55.140621 containerd[1937]: 2026-03-14 00:15:54.998 [INFO][5959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" iface="eth0" netns="" Mar 14 00:15:55.140621 containerd[1937]: 2026-03-14 00:15:54.998 [INFO][5959] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Mar 14 00:15:55.140621 containerd[1937]: 2026-03-14 00:15:54.998 [INFO][5959] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Mar 14 00:15:55.140621 containerd[1937]: 2026-03-14 00:15:55.110 [INFO][5972] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" HandleID="k8s-pod-network.6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Workload="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" Mar 14 00:15:55.140621 containerd[1937]: 2026-03-14 00:15:55.111 [INFO][5972] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:55.140621 containerd[1937]: 2026-03-14 00:15:55.112 [INFO][5972] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:55.140621 containerd[1937]: 2026-03-14 00:15:55.129 [WARNING][5972] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" HandleID="k8s-pod-network.6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Workload="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" Mar 14 00:15:55.140621 containerd[1937]: 2026-03-14 00:15:55.129 [INFO][5972] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" HandleID="k8s-pod-network.6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Workload="ip--172--31--26--130-k8s-goldmane--5b85766d88--lxpml-eth0" Mar 14 00:15:55.140621 containerd[1937]: 2026-03-14 00:15:55.132 [INFO][5972] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:55.140621 containerd[1937]: 2026-03-14 00:15:55.136 [INFO][5959] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf" Mar 14 00:15:55.142473 containerd[1937]: time="2026-03-14T00:15:55.141075804Z" level=info msg="TearDown network for sandbox \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\" successfully" Mar 14 00:15:55.146014 containerd[1937]: time="2026-03-14T00:15:55.145731204Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:55.146014 containerd[1937]: time="2026-03-14T00:15:55.145853472Z" level=info msg="RemovePodSandbox \"6be4d3b6027773831ae9b71add9c73f9ec6b28f74e6cfdadee6859c5c66e2aaf\" returns successfully" Mar 14 00:15:55.148159 containerd[1937]: time="2026-03-14T00:15:55.147282756Z" level=info msg="StopPodSandbox for \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\"" Mar 14 00:15:55.233848 ntpd[1905]: Listen normally on 8 vxlan.calico 192.168.25.128:123 Mar 14 00:15:55.234887 ntpd[1905]: Listen normally on 9 cali9cc61126b64 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 14 00:15:55.235903 ntpd[1905]: 14 Mar 00:15:55 ntpd[1905]: Listen normally on 8 vxlan.calico 192.168.25.128:123 Mar 14 00:15:55.235903 ntpd[1905]: 14 Mar 00:15:55 ntpd[1905]: Listen normally on 9 cali9cc61126b64 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 14 00:15:55.235903 ntpd[1905]: 14 Mar 00:15:55 ntpd[1905]: Listen normally on 10 cali4a239523e4c [fe80::ecee:eeff:feee:eeee%5]:123 Mar 14 00:15:55.235903 ntpd[1905]: 14 Mar 00:15:55 ntpd[1905]: Listen normally on 11 cali12aa16a0f2b [fe80::ecee:eeff:feee:eeee%6]:123 Mar 14 00:15:55.235903 ntpd[1905]: 14 Mar 00:15:55 ntpd[1905]: Listen normally on 12 cali9f633f38b65 [fe80::ecee:eeff:feee:eeee%7]:123 Mar 14 00:15:55.235903 ntpd[1905]: 14 Mar 00:15:55 ntpd[1905]: Listen normally on 13 cali220f7bb61e5 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 14 00:15:55.235903 ntpd[1905]: 14 Mar 00:15:55 ntpd[1905]: Listen normally on 14 cali642f362b105 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 14 00:15:55.235903 ntpd[1905]: 14 Mar 00:15:55 ntpd[1905]: Listen normally on 15 calib2c851d3f77 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 14 00:15:55.235903 ntpd[1905]: 14 Mar 00:15:55 ntpd[1905]: Listen normally on 16 cali41be2aef46c [fe80::ecee:eeff:feee:eeee%11]:123 Mar 14 00:15:55.235903 ntpd[1905]: 14 Mar 00:15:55 ntpd[1905]: Listen normally on 17 vxlan.calico [fe80::64df:69ff:fe1a:f30e%12]:123 Mar 14 00:15:55.235003 ntpd[1905]: Listen normally on 10 cali4a239523e4c 
[fe80::ecee:eeff:feee:eeee%5]:123 Mar 14 00:15:55.235086 ntpd[1905]: Listen normally on 11 cali12aa16a0f2b [fe80::ecee:eeff:feee:eeee%6]:123 Mar 14 00:15:55.235156 ntpd[1905]: Listen normally on 12 cali9f633f38b65 [fe80::ecee:eeff:feee:eeee%7]:123 Mar 14 00:15:55.235227 ntpd[1905]: Listen normally on 13 cali220f7bb61e5 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 14 00:15:55.235293 ntpd[1905]: Listen normally on 14 cali642f362b105 [fe80::ecee:eeff:feee:eeee%9]:123 Mar 14 00:15:55.235360 ntpd[1905]: Listen normally on 15 calib2c851d3f77 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 14 00:15:55.235426 ntpd[1905]: Listen normally on 16 cali41be2aef46c [fe80::ecee:eeff:feee:eeee%11]:123 Mar 14 00:15:55.235497 ntpd[1905]: Listen normally on 17 vxlan.calico [fe80::64df:69ff:fe1a:f30e%12]:123 Mar 14 00:15:55.350289 containerd[1937]: 2026-03-14 00:15:55.245 [WARNING][6004] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0", GenerateName:"calico-apiserver-5cc67d498c-", Namespace:"calico-system", SelfLink:"", UID:"6669e0b2-65fc-448c-87e3-c79fbf1e2867", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc67d498c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16", Pod:"calico-apiserver-5cc67d498c-gnbqp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali12aa16a0f2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:55.350289 containerd[1937]: 2026-03-14 00:15:55.245 [INFO][6004] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Mar 14 00:15:55.350289 containerd[1937]: 2026-03-14 00:15:55.245 [INFO][6004] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" iface="eth0" netns="" Mar 14 00:15:55.350289 containerd[1937]: 2026-03-14 00:15:55.246 [INFO][6004] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Mar 14 00:15:55.350289 containerd[1937]: 2026-03-14 00:15:55.246 [INFO][6004] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Mar 14 00:15:55.350289 containerd[1937]: 2026-03-14 00:15:55.323 [INFO][6011] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" HandleID="k8s-pod-network.f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" Mar 14 00:15:55.350289 containerd[1937]: 2026-03-14 00:15:55.323 [INFO][6011] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:55.350289 containerd[1937]: 2026-03-14 00:15:55.324 [INFO][6011] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:55.350289 containerd[1937]: 2026-03-14 00:15:55.338 [WARNING][6011] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" HandleID="k8s-pod-network.f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" Mar 14 00:15:55.350289 containerd[1937]: 2026-03-14 00:15:55.338 [INFO][6011] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" HandleID="k8s-pod-network.f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" Mar 14 00:15:55.350289 containerd[1937]: 2026-03-14 00:15:55.341 [INFO][6011] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:55.350289 containerd[1937]: 2026-03-14 00:15:55.345 [INFO][6004] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Mar 14 00:15:55.350289 containerd[1937]: time="2026-03-14T00:15:55.348706921Z" level=info msg="TearDown network for sandbox \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\" successfully" Mar 14 00:15:55.350289 containerd[1937]: time="2026-03-14T00:15:55.348744397Z" level=info msg="StopPodSandbox for \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\" returns successfully" Mar 14 00:15:55.350289 containerd[1937]: time="2026-03-14T00:15:55.349697629Z" level=info msg="RemovePodSandbox for \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\"" Mar 14 00:15:55.350289 containerd[1937]: time="2026-03-14T00:15:55.349742677Z" level=info msg="Forcibly stopping sandbox \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\"" Mar 14 00:15:55.514447 containerd[1937]: 2026-03-14 00:15:55.428 [WARNING][6026] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0", GenerateName:"calico-apiserver-5cc67d498c-", Namespace:"calico-system", SelfLink:"", UID:"6669e0b2-65fc-448c-87e3-c79fbf1e2867", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 15, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5cc67d498c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16", Pod:"calico-apiserver-5cc67d498c-gnbqp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.25.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali12aa16a0f2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:55.514447 containerd[1937]: 2026-03-14 00:15:55.429 [INFO][6026] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Mar 14 00:15:55.514447 containerd[1937]: 2026-03-14 00:15:55.429 [INFO][6026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" iface="eth0" netns="" Mar 14 00:15:55.514447 containerd[1937]: 2026-03-14 00:15:55.429 [INFO][6026] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Mar 14 00:15:55.514447 containerd[1937]: 2026-03-14 00:15:55.429 [INFO][6026] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Mar 14 00:15:55.514447 containerd[1937]: 2026-03-14 00:15:55.484 [INFO][6033] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" HandleID="k8s-pod-network.f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" Mar 14 00:15:55.514447 containerd[1937]: 2026-03-14 00:15:55.484 [INFO][6033] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:55.514447 containerd[1937]: 2026-03-14 00:15:55.484 [INFO][6033] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:55.514447 containerd[1937]: 2026-03-14 00:15:55.501 [WARNING][6033] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" HandleID="k8s-pod-network.f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" Mar 14 00:15:55.514447 containerd[1937]: 2026-03-14 00:15:55.501 [INFO][6033] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" HandleID="k8s-pod-network.f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Workload="ip--172--31--26--130-k8s-calico--apiserver--5cc67d498c--gnbqp-eth0" Mar 14 00:15:55.514447 containerd[1937]: 2026-03-14 00:15:55.505 [INFO][6033] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:55.514447 containerd[1937]: 2026-03-14 00:15:55.509 [INFO][6026] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b" Mar 14 00:15:55.515295 containerd[1937]: time="2026-03-14T00:15:55.514501514Z" level=info msg="TearDown network for sandbox \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\" successfully" Mar 14 00:15:55.520928 containerd[1937]: time="2026-03-14T00:15:55.520790642Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:55.521089 containerd[1937]: time="2026-03-14T00:15:55.520964870Z" level=info msg="RemovePodSandbox \"f2b4e2a680086eb8164358160767f2fe5d9d75db3c847d2dc9b2fa89b0d2f01b\" returns successfully" Mar 14 00:15:55.521871 containerd[1937]: time="2026-03-14T00:15:55.521799338Z" level=info msg="StopPodSandbox for \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\"" Mar 14 00:15:55.658883 containerd[1937]: 2026-03-14 00:15:55.590 [WARNING][6047] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8127331c-4b50-47c1-bbe1-89afe1cea98e", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f", Pod:"coredns-674b8bbfcf-nsj8c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41be2aef46c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:55.658883 containerd[1937]: 2026-03-14 00:15:55.591 [INFO][6047] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Mar 14 00:15:55.658883 containerd[1937]: 2026-03-14 00:15:55.591 [INFO][6047] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" iface="eth0" netns="" Mar 14 00:15:55.658883 containerd[1937]: 2026-03-14 00:15:55.591 [INFO][6047] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Mar 14 00:15:55.658883 containerd[1937]: 2026-03-14 00:15:55.591 [INFO][6047] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Mar 14 00:15:55.658883 containerd[1937]: 2026-03-14 00:15:55.634 [INFO][6056] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" HandleID="k8s-pod-network.f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" Mar 14 00:15:55.658883 containerd[1937]: 2026-03-14 00:15:55.634 [INFO][6056] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:15:55.658883 containerd[1937]: 2026-03-14 00:15:55.634 [INFO][6056] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:55.658883 containerd[1937]: 2026-03-14 00:15:55.648 [WARNING][6056] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" HandleID="k8s-pod-network.f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" Mar 14 00:15:55.658883 containerd[1937]: 2026-03-14 00:15:55.648 [INFO][6056] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" HandleID="k8s-pod-network.f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" Mar 14 00:15:55.658883 containerd[1937]: 2026-03-14 00:15:55.651 [INFO][6056] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:55.658883 containerd[1937]: 2026-03-14 00:15:55.655 [INFO][6047] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Mar 14 00:15:55.660768 containerd[1937]: time="2026-03-14T00:15:55.659071946Z" level=info msg="TearDown network for sandbox \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\" successfully" Mar 14 00:15:55.660768 containerd[1937]: time="2026-03-14T00:15:55.659118074Z" level=info msg="StopPodSandbox for \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\" returns successfully" Mar 14 00:15:55.660768 containerd[1937]: time="2026-03-14T00:15:55.660369710Z" level=info msg="RemovePodSandbox for \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\"" Mar 14 00:15:55.660768 containerd[1937]: time="2026-03-14T00:15:55.660502910Z" level=info msg="Forcibly stopping sandbox \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\"" Mar 14 00:15:55.824043 containerd[1937]: 2026-03-14 00:15:55.740 [WARNING][6070] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8127331c-4b50-47c1-bbe1-89afe1cea98e", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-130", ContainerID:"814c2dc8e297ab28c838bd958983583e82d4470648ab834c7f3e7092da677e5f", Pod:"coredns-674b8bbfcf-nsj8c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.25.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41be2aef46c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:55.824043 containerd[1937]: 2026-03-14 00:15:55.741 
[INFO][6070] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Mar 14 00:15:55.824043 containerd[1937]: 2026-03-14 00:15:55.741 [INFO][6070] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" iface="eth0" netns="" Mar 14 00:15:55.824043 containerd[1937]: 2026-03-14 00:15:55.741 [INFO][6070] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Mar 14 00:15:55.824043 containerd[1937]: 2026-03-14 00:15:55.741 [INFO][6070] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Mar 14 00:15:55.824043 containerd[1937]: 2026-03-14 00:15:55.793 [INFO][6077] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" HandleID="k8s-pod-network.f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" Mar 14 00:15:55.824043 containerd[1937]: 2026-03-14 00:15:55.793 [INFO][6077] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:55.824043 containerd[1937]: 2026-03-14 00:15:55.793 [INFO][6077] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:55.824043 containerd[1937]: 2026-03-14 00:15:55.813 [WARNING][6077] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" HandleID="k8s-pod-network.f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" Mar 14 00:15:55.824043 containerd[1937]: 2026-03-14 00:15:55.813 [INFO][6077] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" HandleID="k8s-pod-network.f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Workload="ip--172--31--26--130-k8s-coredns--674b8bbfcf--nsj8c-eth0" Mar 14 00:15:55.824043 containerd[1937]: 2026-03-14 00:15:55.816 [INFO][6077] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:55.824043 containerd[1937]: 2026-03-14 00:15:55.819 [INFO][6070] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba" Mar 14 00:15:55.824043 containerd[1937]: time="2026-03-14T00:15:55.823335051Z" level=info msg="TearDown network for sandbox \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\" successfully" Mar 14 00:15:55.829638 containerd[1937]: time="2026-03-14T00:15:55.829544499Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:55.829891 containerd[1937]: time="2026-03-14T00:15:55.829657911Z" level=info msg="RemovePodSandbox \"f9a97ac320354ceda5b24ac4ba785ab4cab6417e0a638cb3ab12df7f6e49a4ba\" returns successfully" Mar 14 00:15:56.269719 containerd[1937]: time="2026-03-14T00:15:56.269628733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:56.272541 containerd[1937]: time="2026-03-14T00:15:56.272019601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 14 00:15:56.273918 containerd[1937]: time="2026-03-14T00:15:56.273800917Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:56.280383 containerd[1937]: time="2026-03-14T00:15:56.280223521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:56.283729 containerd[1937]: time="2026-03-14T00:15:56.283663297Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 2.437751292s" Mar 14 00:15:56.284200 containerd[1937]: time="2026-03-14T00:15:56.283834045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 14 00:15:56.287557 containerd[1937]: time="2026-03-14T00:15:56.287209825Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 14 00:15:56.297489 containerd[1937]: time="2026-03-14T00:15:56.297284977Z" level=info msg="CreateContainer within sandbox \"0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 14 00:15:56.331146 containerd[1937]: time="2026-03-14T00:15:56.330009878Z" level=info msg="CreateContainer within sandbox \"0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\"" Mar 14 00:15:56.334786 containerd[1937]: time="2026-03-14T00:15:56.334616438Z" level=info msg="StartContainer for \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\"" Mar 14 00:15:56.421505 systemd[1]: Started cri-containerd-c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb.scope - libcontainer container c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb. Mar 14 00:15:56.527343 containerd[1937]: time="2026-03-14T00:15:56.527159667Z" level=info msg="StartContainer for \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\" returns successfully" Mar 14 00:15:57.800741 systemd[1]: Started sshd@9-172.31.26.130:22-68.220.241.50:35100.service - OpenSSH per-connection server daemon (68.220.241.50:35100). Mar 14 00:15:58.324436 sshd[6158]: Accepted publickey for core from 68.220.241.50 port 35100 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:15:58.328496 sshd[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:58.341055 systemd-logind[1911]: New session 10 of user core. Mar 14 00:15:58.349807 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 14 00:15:58.823311 sshd[6158]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:58.829160 systemd-logind[1911]: Session 10 logged out. 
Waiting for processes to exit. Mar 14 00:15:58.829743 systemd[1]: sshd@9-172.31.26.130:22-68.220.241.50:35100.service: Deactivated successfully. Mar 14 00:15:58.834635 systemd[1]: session-10.scope: Deactivated successfully. Mar 14 00:15:58.839656 systemd-logind[1911]: Removed session 10. Mar 14 00:16:00.806819 containerd[1937]: time="2026-03-14T00:16:00.806758760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:00.810342 containerd[1937]: time="2026-03-14T00:16:00.810273776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 14 00:16:00.813054 containerd[1937]: time="2026-03-14T00:16:00.812992988Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:00.818998 containerd[1937]: time="2026-03-14T00:16:00.818727200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:00.821612 containerd[1937]: time="2026-03-14T00:16:00.821528324Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 4.534257275s" Mar 14 00:16:00.821765 containerd[1937]: time="2026-03-14T00:16:00.821616656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference 
\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 14 00:16:00.826059 containerd[1937]: time="2026-03-14T00:16:00.825984560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:16:00.865555 containerd[1937]: time="2026-03-14T00:16:00.865101968Z" level=info msg="CreateContainer within sandbox \"71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 14 00:16:00.889416 containerd[1937]: time="2026-03-14T00:16:00.889335956Z" level=info msg="CreateContainer within sandbox \"71feb8c9be5ff74a5fd73038fdb4497ef881dc4cba10ae6b2b9827262a4e66c0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b939b42bc867b939cd41dd57587bf4e07489531f5b63df335d94007fd1db9a18\"" Mar 14 00:16:00.890581 containerd[1937]: time="2026-03-14T00:16:00.890393060Z" level=info msg="StartContainer for \"b939b42bc867b939cd41dd57587bf4e07489531f5b63df335d94007fd1db9a18\"" Mar 14 00:16:00.958272 systemd[1]: Started cri-containerd-b939b42bc867b939cd41dd57587bf4e07489531f5b63df335d94007fd1db9a18.scope - libcontainer container b939b42bc867b939cd41dd57587bf4e07489531f5b63df335d94007fd1db9a18. 
Mar 14 00:16:01.038639 containerd[1937]: time="2026-03-14T00:16:01.038448653Z" level=info msg="StartContainer for \"b939b42bc867b939cd41dd57587bf4e07489531f5b63df335d94007fd1db9a18\" returns successfully" Mar 14 00:16:02.075061 kubelet[3148]: I0314 00:16:02.074256 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-lxpml" podStartSLOduration=33.825985671 podStartE2EDuration="40.074229438s" podCreationTimestamp="2026-03-14 00:15:22 +0000 UTC" firstStartedPulling="2026-03-14 00:15:47.594575994 +0000 UTC m=+55.698979429" lastFinishedPulling="2026-03-14 00:15:53.842819761 +0000 UTC m=+61.947223196" observedRunningTime="2026-03-14 00:15:54.999802179 +0000 UTC m=+63.104205626" watchObservedRunningTime="2026-03-14 00:16:02.074229438 +0000 UTC m=+70.178632885" Mar 14 00:16:02.217487 kubelet[3148]: I0314 00:16:02.216191 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75d5fd567b-lbrvk" podStartSLOduration=25.84336007 podStartE2EDuration="38.216168079s" podCreationTimestamp="2026-03-14 00:15:24 +0000 UTC" firstStartedPulling="2026-03-14 00:15:48.450066907 +0000 UTC m=+56.554470354" lastFinishedPulling="2026-03-14 00:16:00.822874916 +0000 UTC m=+68.927278363" observedRunningTime="2026-03-14 00:16:02.07625343 +0000 UTC m=+70.180657285" watchObservedRunningTime="2026-03-14 00:16:02.216168079 +0000 UTC m=+70.320571514" Mar 14 00:16:03.541911 containerd[1937]: time="2026-03-14T00:16:03.541853397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:03.544850 containerd[1937]: time="2026-03-14T00:16:03.544789245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 14 00:16:03.548090 containerd[1937]: time="2026-03-14T00:16:03.548024014Z" level=info msg="ImageCreate event 
name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:03.551988 containerd[1937]: time="2026-03-14T00:16:03.551544118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:03.553692 containerd[1937]: time="2026-03-14T00:16:03.553505686Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.727454022s" Mar 14 00:16:03.553692 containerd[1937]: time="2026-03-14T00:16:03.553564990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 14 00:16:03.556536 containerd[1937]: time="2026-03-14T00:16:03.556472722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:16:03.565656 containerd[1937]: time="2026-03-14T00:16:03.565294582Z" level=info msg="CreateContainer within sandbox \"3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:16:03.594280 containerd[1937]: time="2026-03-14T00:16:03.592233370Z" level=info msg="CreateContainer within sandbox \"3e5832afb334736e14b78c558e71b90c4ec1c110cd39de6e945376820f93ef16\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c14e4e9aa93d2204cffff44d8cdbcfa55af1a92b5a9f5a6b07e79bedd8f57ebf\"" Mar 14 00:16:03.597067 containerd[1937]: time="2026-03-14T00:16:03.595154566Z" level=info 
msg="StartContainer for \"c14e4e9aa93d2204cffff44d8cdbcfa55af1a92b5a9f5a6b07e79bedd8f57ebf\"" Mar 14 00:16:03.667337 systemd[1]: Started cri-containerd-c14e4e9aa93d2204cffff44d8cdbcfa55af1a92b5a9f5a6b07e79bedd8f57ebf.scope - libcontainer container c14e4e9aa93d2204cffff44d8cdbcfa55af1a92b5a9f5a6b07e79bedd8f57ebf. Mar 14 00:16:03.754099 containerd[1937]: time="2026-03-14T00:16:03.754029791Z" level=info msg="StartContainer for \"c14e4e9aa93d2204cffff44d8cdbcfa55af1a92b5a9f5a6b07e79bedd8f57ebf\" returns successfully" Mar 14 00:16:03.888233 containerd[1937]: time="2026-03-14T00:16:03.888066839Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:03.892970 containerd[1937]: time="2026-03-14T00:16:03.892224971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 14 00:16:03.913195 containerd[1937]: time="2026-03-14T00:16:03.911623835Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 355.081885ms" Mar 14 00:16:03.913195 containerd[1937]: time="2026-03-14T00:16:03.911714903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 14 00:16:03.914701 containerd[1937]: time="2026-03-14T00:16:03.914296211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 14 00:16:03.924373 containerd[1937]: time="2026-03-14T00:16:03.924286775Z" level=info msg="CreateContainer within sandbox \"35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2\" 
for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:16:03.939349 systemd[1]: Started sshd@10-172.31.26.130:22-68.220.241.50:37842.service - OpenSSH per-connection server daemon (68.220.241.50:37842). Mar 14 00:16:03.975373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2194223564.mount: Deactivated successfully. Mar 14 00:16:03.991367 containerd[1937]: time="2026-03-14T00:16:03.991285956Z" level=info msg="CreateContainer within sandbox \"35f540316688fb10e9434caec976f19768c05e9e39306ab7b650f8d038634ed2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bb4b5bdbad843db4c38a58910de40fa4aad36263edffa6095a0a9334a54f4f2f\"" Mar 14 00:16:03.994418 containerd[1937]: time="2026-03-14T00:16:03.992930148Z" level=info msg="StartContainer for \"bb4b5bdbad843db4c38a58910de40fa4aad36263edffa6095a0a9334a54f4f2f\"" Mar 14 00:16:04.096360 systemd[1]: Started cri-containerd-bb4b5bdbad843db4c38a58910de40fa4aad36263edffa6095a0a9334a54f4f2f.scope - libcontainer container bb4b5bdbad843db4c38a58910de40fa4aad36263edffa6095a0a9334a54f4f2f. Mar 14 00:16:04.212706 containerd[1937]: time="2026-03-14T00:16:04.212544789Z" level=info msg="StartContainer for \"bb4b5bdbad843db4c38a58910de40fa4aad36263edffa6095a0a9334a54f4f2f\" returns successfully" Mar 14 00:16:04.533300 sshd[6301]: Accepted publickey for core from 68.220.241.50 port 37842 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:16:04.537300 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:04.558224 systemd-logind[1911]: New session 11 of user core. Mar 14 00:16:04.568263 systemd[1]: Started session-11.scope - Session 11 of User core. 
Mar 14 00:16:05.066965 kubelet[3148]: I0314 00:16:05.065585 3148 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:16:05.116089 kubelet[3148]: I0314 00:16:05.115921 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5cc67d498c-gnbqp" podStartSLOduration=29.04434501 podStartE2EDuration="44.115894293s" podCreationTimestamp="2026-03-14 00:15:21 +0000 UTC" firstStartedPulling="2026-03-14 00:15:48.484151947 +0000 UTC m=+56.588555382" lastFinishedPulling="2026-03-14 00:16:03.55570123 +0000 UTC m=+71.660104665" observedRunningTime="2026-03-14 00:16:04.075459044 +0000 UTC m=+72.179862659" watchObservedRunningTime="2026-03-14 00:16:05.115894293 +0000 UTC m=+73.220297728" Mar 14 00:16:05.188311 sshd[6301]: pam_unix(sshd:session): session closed for user core Mar 14 00:16:05.201080 systemd-logind[1911]: Session 11 logged out. Waiting for processes to exit. Mar 14 00:16:05.206126 systemd[1]: sshd@10-172.31.26.130:22-68.220.241.50:37842.service: Deactivated successfully. Mar 14 00:16:05.215775 systemd[1]: session-11.scope: Deactivated successfully. Mar 14 00:16:05.221408 systemd-logind[1911]: Removed session 11. Mar 14 00:16:05.286496 systemd[1]: Started sshd@11-172.31.26.130:22-68.220.241.50:37844.service - OpenSSH per-connection server daemon (68.220.241.50:37844). Mar 14 00:16:05.826811 sshd[6364]: Accepted publickey for core from 68.220.241.50 port 37844 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:16:05.832089 sshd[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:16:05.847613 systemd-logind[1911]: New session 12 of user core. Mar 14 00:16:05.855335 systemd[1]: Started session-12.scope - Session 12 of User core. 
Mar 14 00:16:05.991447 containerd[1937]: time="2026-03-14T00:16:05.991367234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:05.993646 containerd[1937]: time="2026-03-14T00:16:05.993339278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291"
Mar 14 00:16:05.997689 containerd[1937]: time="2026-03-14T00:16:05.997083242Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:06.005851 containerd[1937]: time="2026-03-14T00:16:06.004240582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:06.007249 containerd[1937]: time="2026-03-14T00:16:06.007027702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 2.092664519s"
Mar 14 00:16:06.007794 containerd[1937]: time="2026-03-14T00:16:06.007725718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\""
Mar 14 00:16:06.015426 containerd[1937]: time="2026-03-14T00:16:06.014720566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Mar 14 00:16:06.021343 containerd[1937]: time="2026-03-14T00:16:06.021230770Z" level=info msg="CreateContainer within sandbox \"5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 14 00:16:06.056549 containerd[1937]: time="2026-03-14T00:16:06.056489362Z" level=info msg="CreateContainer within sandbox \"5283e00bde7364e035872080f1d61a1a2b0646d156f4344e1de48d8553e8c6e6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b24dc453967620f2013c27f2f64dc7af14a2eb5aeb372b68f59159f21cbf77d0\""
Mar 14 00:16:06.061044 containerd[1937]: time="2026-03-14T00:16:06.058650322Z" level=info msg="StartContainer for \"b24dc453967620f2013c27f2f64dc7af14a2eb5aeb372b68f59159f21cbf77d0\""
Mar 14 00:16:06.077481 kubelet[3148]: I0314 00:16:06.077148 3148 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:16:06.171323 systemd[1]: Started cri-containerd-b24dc453967620f2013c27f2f64dc7af14a2eb5aeb372b68f59159f21cbf77d0.scope - libcontainer container b24dc453967620f2013c27f2f64dc7af14a2eb5aeb372b68f59159f21cbf77d0.
Mar 14 00:16:06.366063 containerd[1937]: time="2026-03-14T00:16:06.365732412Z" level=info msg="StartContainer for \"b24dc453967620f2013c27f2f64dc7af14a2eb5aeb372b68f59159f21cbf77d0\" returns successfully"
Mar 14 00:16:06.599033 sshd[6364]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:06.609336 systemd-logind[1911]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:16:06.612428 systemd[1]: sshd@11-172.31.26.130:22-68.220.241.50:37844.service: Deactivated successfully.
Mar 14 00:16:06.623013 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:16:06.626305 systemd-logind[1911]: Removed session 12.
Mar 14 00:16:06.709452 systemd[1]: Started sshd@12-172.31.26.130:22-68.220.241.50:37852.service - OpenSSH per-connection server daemon (68.220.241.50:37852).
Mar 14 00:16:07.115062 kubelet[3148]: I0314 00:16:07.114891 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-5cc67d498c-jsdnf" podStartSLOduration=30.94595286 podStartE2EDuration="46.114864563s" podCreationTimestamp="2026-03-14 00:15:21 +0000 UTC" firstStartedPulling="2026-03-14 00:15:48.744796916 +0000 UTC m=+56.849200351" lastFinishedPulling="2026-03-14 00:16:03.913708631 +0000 UTC m=+72.018112054" observedRunningTime="2026-03-14 00:16:05.120123093 +0000 UTC m=+73.224526540" watchObservedRunningTime="2026-03-14 00:16:07.114864563 +0000 UTC m=+75.219268034"
Mar 14 00:16:07.285616 sshd[6421]: Accepted publickey for core from 68.220.241.50 port 37852 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:07.299411 sshd[6421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:07.332314 systemd-logind[1911]: New session 13 of user core.
Mar 14 00:16:07.342241 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:16:07.413002 kubelet[3148]: I0314 00:16:07.412445 3148 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 14 00:16:07.413002 kubelet[3148]: I0314 00:16:07.412598 3148 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 14 00:16:07.977177 sshd[6421]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:07.992836 systemd[1]: sshd@12-172.31.26.130:22-68.220.241.50:37852.service: Deactivated successfully.
Mar 14 00:16:08.003517 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:16:08.011198 systemd-logind[1911]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:16:08.016504 systemd-logind[1911]: Removed session 13.
Mar 14 00:16:08.385807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2287303341.mount: Deactivated successfully.
Mar 14 00:16:08.427427 containerd[1937]: time="2026-03-14T00:16:08.427341446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:08.431962 containerd[1937]: time="2026-03-14T00:16:08.431864102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594"
Mar 14 00:16:08.434752 containerd[1937]: time="2026-03-14T00:16:08.434660534Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:08.446976 containerd[1937]: time="2026-03-14T00:16:08.444890414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:16:08.446976 containerd[1937]: time="2026-03-14T00:16:08.446764130Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.431964328s"
Mar 14 00:16:08.446976 containerd[1937]: time="2026-03-14T00:16:08.446823098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\""
Mar 14 00:16:08.460347 containerd[1937]: time="2026-03-14T00:16:08.460268270Z" level=info msg="CreateContainer within sandbox \"0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Mar 14 00:16:08.493003 containerd[1937]: time="2026-03-14T00:16:08.491848442Z" level=info msg="CreateContainer within sandbox \"0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\""
Mar 14 00:16:08.495855 containerd[1937]: time="2026-03-14T00:16:08.495789290Z" level=info msg="StartContainer for \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\""
Mar 14 00:16:08.625324 systemd[1]: Started cri-containerd-d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045.scope - libcontainer container d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045.
Mar 14 00:16:08.746507 containerd[1937]: time="2026-03-14T00:16:08.746320179Z" level=info msg="StartContainer for \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\" returns successfully"
Mar 14 00:16:09.107987 containerd[1937]: time="2026-03-14T00:16:09.107363797Z" level=info msg="StopContainer for \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\" with timeout 30 (s)"
Mar 14 00:16:09.107987 containerd[1937]: time="2026-03-14T00:16:09.107621233Z" level=info msg="StopContainer for \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\" with timeout 30 (s)"
Mar 14 00:16:09.108853 containerd[1937]: time="2026-03-14T00:16:09.108425917Z" level=info msg="Stop container \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\" with signal terminated"
Mar 14 00:16:09.108853 containerd[1937]: time="2026-03-14T00:16:09.108701989Z" level=info msg="Stop container \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\" with signal terminated"
Mar 14 00:16:09.148525 systemd[1]: cri-containerd-d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045.scope: Deactivated successfully.
Mar 14 00:16:09.164236 systemd[1]: cri-containerd-c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb.scope: Deactivated successfully.
Mar 14 00:16:09.166617 kubelet[3148]: I0314 00:16:09.165460 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s9wlx" podStartSLOduration=26.194634739 podStartE2EDuration="45.165438925s" podCreationTimestamp="2026-03-14 00:15:24 +0000 UTC" firstStartedPulling="2026-03-14 00:15:47.039598156 +0000 UTC m=+55.144001591" lastFinishedPulling="2026-03-14 00:16:06.01040227 +0000 UTC m=+74.114805777" observedRunningTime="2026-03-14 00:16:07.114458171 +0000 UTC m=+75.218861690" watchObservedRunningTime="2026-03-14 00:16:09.165438925 +0000 UTC m=+77.269842360"
Mar 14 00:16:09.277290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045-rootfs.mount: Deactivated successfully.
Mar 14 00:16:09.339086 containerd[1937]: time="2026-03-14T00:16:09.338347706Z" level=info msg="shim disconnected" id=d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045 namespace=k8s.io
Mar 14 00:16:09.340782 containerd[1937]: time="2026-03-14T00:16:09.338787278Z" level=info msg="shim disconnected" id=c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb namespace=k8s.io
Mar 14 00:16:09.340782 containerd[1937]: time="2026-03-14T00:16:09.339861146Z" level=warning msg="cleaning up after shim disconnected" id=c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb namespace=k8s.io
Mar 14 00:16:09.340782 containerd[1937]: time="2026-03-14T00:16:09.339902930Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:16:09.340782 containerd[1937]: time="2026-03-14T00:16:09.340145258Z" level=warning msg="cleaning up after shim disconnected" id=d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045 namespace=k8s.io
Mar 14 00:16:09.340782 containerd[1937]: time="2026-03-14T00:16:09.340185506Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:16:09.421106 containerd[1937]: time="2026-03-14T00:16:09.420860139Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:16:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:16:09.425050 containerd[1937]: time="2026-03-14T00:16:09.424975839Z" level=info msg="StopContainer for \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\" returns successfully"
Mar 14 00:16:09.430557 containerd[1937]: time="2026-03-14T00:16:09.430486731Z" level=info msg="StopContainer for \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\" returns successfully"
Mar 14 00:16:09.432291 containerd[1937]: time="2026-03-14T00:16:09.432222831Z" level=info msg="StopPodSandbox for \"0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33\""
Mar 14 00:16:09.432291 containerd[1937]: time="2026-03-14T00:16:09.432339627Z" level=info msg="Container to stop \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:16:09.432747 containerd[1937]: time="2026-03-14T00:16:09.432380403Z" level=info msg="Container to stop \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:16:09.444691 systemd[1]: cri-containerd-0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33.scope: Deactivated successfully.
Mar 14 00:16:09.485061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb-rootfs.mount: Deactivated successfully.
Mar 14 00:16:09.485308 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33-shm.mount: Deactivated successfully.
Mar 14 00:16:09.497581 containerd[1937]: time="2026-03-14T00:16:09.497296647Z" level=info msg="shim disconnected" id=0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33 namespace=k8s.io
Mar 14 00:16:09.497581 containerd[1937]: time="2026-03-14T00:16:09.497376039Z" level=warning msg="cleaning up after shim disconnected" id=0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33 namespace=k8s.io
Mar 14 00:16:09.497581 containerd[1937]: time="2026-03-14T00:16:09.497397603Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:16:09.504410 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33-rootfs.mount: Deactivated successfully.
Mar 14 00:16:09.666895 kubelet[3148]: I0314 00:16:09.666606 3148 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-544d6dd76d-4rcl5" podStartSLOduration=22.634794956 podStartE2EDuration="42.666556816s" podCreationTimestamp="2026-03-14 00:15:27 +0000 UTC" firstStartedPulling="2026-03-14 00:15:48.419514534 +0000 UTC m=+56.523917981" lastFinishedPulling="2026-03-14 00:16:08.451276406 +0000 UTC m=+76.555679841" observedRunningTime="2026-03-14 00:16:09.166580281 +0000 UTC m=+77.270983752" watchObservedRunningTime="2026-03-14 00:16:09.666556816 +0000 UTC m=+77.770960263"
Mar 14 00:16:09.673636 systemd-networkd[1851]: cali220f7bb61e5: Link DOWN
Mar 14 00:16:09.673651 systemd-networkd[1851]: cali220f7bb61e5: Lost carrier
Mar 14 00:16:09.958483 containerd[1937]: 2026-03-14 00:16:09.670 [INFO][6587] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33"
Mar 14 00:16:09.958483 containerd[1937]: 2026-03-14 00:16:09.671 [INFO][6587] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" iface="eth0" netns="/var/run/netns/cni-7da9ac99-c55b-0668-9dc1-8512ae3d3484"
Mar 14 00:16:09.958483 containerd[1937]: 2026-03-14 00:16:09.671 [INFO][6587] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" iface="eth0" netns="/var/run/netns/cni-7da9ac99-c55b-0668-9dc1-8512ae3d3484"
Mar 14 00:16:09.958483 containerd[1937]: 2026-03-14 00:16:09.687 [INFO][6587] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" after=15.999084ms iface="eth0" netns="/var/run/netns/cni-7da9ac99-c55b-0668-9dc1-8512ae3d3484"
Mar 14 00:16:09.958483 containerd[1937]: 2026-03-14 00:16:09.687 [INFO][6587] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33"
Mar 14 00:16:09.958483 containerd[1937]: 2026-03-14 00:16:09.688 [INFO][6587] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33"
Mar 14 00:16:09.958483 containerd[1937]: 2026-03-14 00:16:09.771 [INFO][6596] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" HandleID="k8s-pod-network.0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" Workload="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0"
Mar 14 00:16:09.958483 containerd[1937]: 2026-03-14 00:16:09.771 [INFO][6596] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 14 00:16:09.958483 containerd[1937]: 2026-03-14 00:16:09.772 [INFO][6596] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 14 00:16:09.958483 containerd[1937]: 2026-03-14 00:16:09.937 [INFO][6596] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" HandleID="k8s-pod-network.0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" Workload="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0"
Mar 14 00:16:09.958483 containerd[1937]: 2026-03-14 00:16:09.938 [INFO][6596] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" HandleID="k8s-pod-network.0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33" Workload="ip--172--31--26--130-k8s-whisker--544d6dd76d--4rcl5-eth0"
Mar 14 00:16:09.958483 containerd[1937]: 2026-03-14 00:16:09.946 [INFO][6596] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 14 00:16:09.958483 containerd[1937]: 2026-03-14 00:16:09.951 [INFO][6587] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33"
Mar 14 00:16:09.958483 containerd[1937]: time="2026-03-14T00:16:09.956045885Z" level=info msg="TearDown network for sandbox \"0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33\" successfully"
Mar 14 00:16:09.958483 containerd[1937]: time="2026-03-14T00:16:09.956093453Z" level=info msg="StopPodSandbox for \"0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33\" returns successfully"
Mar 14 00:16:09.962281 systemd[1]: run-netns-cni\x2d7da9ac99\x2dc55b\x2d0668\x2d9dc1\x2d8512ae3d3484.mount: Deactivated successfully.
Mar 14 00:16:10.065046 kubelet[3148]: I0314 00:16:10.064981 3148 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d7880c76-182c-44f3-99e8-6a915d275ae2-whisker-backend-key-pair\") pod \"d7880c76-182c-44f3-99e8-6a915d275ae2\" (UID: \"d7880c76-182c-44f3-99e8-6a915d275ae2\") "
Mar 14 00:16:10.065265 kubelet[3148]: I0314 00:16:10.065059 3148 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z2r8\" (UniqueName: \"kubernetes.io/projected/d7880c76-182c-44f3-99e8-6a915d275ae2-kube-api-access-6z2r8\") pod \"d7880c76-182c-44f3-99e8-6a915d275ae2\" (UID: \"d7880c76-182c-44f3-99e8-6a915d275ae2\") "
Mar 14 00:16:10.065265 kubelet[3148]: I0314 00:16:10.065110 3148 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d7880c76-182c-44f3-99e8-6a915d275ae2-nginx-config\") pod \"d7880c76-182c-44f3-99e8-6a915d275ae2\" (UID: \"d7880c76-182c-44f3-99e8-6a915d275ae2\") "
Mar 14 00:16:10.065265 kubelet[3148]: I0314 00:16:10.065171 3148 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7880c76-182c-44f3-99e8-6a915d275ae2-whisker-ca-bundle\") pod \"d7880c76-182c-44f3-99e8-6a915d275ae2\" (UID: \"d7880c76-182c-44f3-99e8-6a915d275ae2\") "
Mar 14 00:16:10.069541 kubelet[3148]: I0314 00:16:10.068992 3148 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7880c76-182c-44f3-99e8-6a915d275ae2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d7880c76-182c-44f3-99e8-6a915d275ae2" (UID: "d7880c76-182c-44f3-99e8-6a915d275ae2"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 00:16:10.069541 kubelet[3148]: I0314 00:16:10.069472 3148 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7880c76-182c-44f3-99e8-6a915d275ae2-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "d7880c76-182c-44f3-99e8-6a915d275ae2" (UID: "d7880c76-182c-44f3-99e8-6a915d275ae2"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 00:16:10.079115 systemd[1]: var-lib-kubelet-pods-d7880c76\x2d182c\x2d44f3\x2d99e8\x2d6a915d275ae2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6z2r8.mount: Deactivated successfully.
Mar 14 00:16:10.084434 kubelet[3148]: I0314 00:16:10.084352 3148 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7880c76-182c-44f3-99e8-6a915d275ae2-kube-api-access-6z2r8" (OuterVolumeSpecName: "kube-api-access-6z2r8") pod "d7880c76-182c-44f3-99e8-6a915d275ae2" (UID: "d7880c76-182c-44f3-99e8-6a915d275ae2"). InnerVolumeSpecName "kube-api-access-6z2r8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:16:10.088337 systemd[1]: var-lib-kubelet-pods-d7880c76\x2d182c\x2d44f3\x2d99e8\x2d6a915d275ae2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Mar 14 00:16:10.091050 kubelet[3148]: I0314 00:16:10.090108 3148 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7880c76-182c-44f3-99e8-6a915d275ae2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d7880c76-182c-44f3-99e8-6a915d275ae2" (UID: "d7880c76-182c-44f3-99e8-6a915d275ae2"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 14 00:16:10.116993 kubelet[3148]: I0314 00:16:10.114442 3148 scope.go:117] "RemoveContainer" containerID="d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045"
Mar 14 00:16:10.120141 containerd[1937]: time="2026-03-14T00:16:10.119423954Z" level=info msg="RemoveContainer for \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\""
Mar 14 00:16:10.136838 containerd[1937]: time="2026-03-14T00:16:10.136776326Z" level=info msg="RemoveContainer for \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\" returns successfully"
Mar 14 00:16:10.137439 kubelet[3148]: I0314 00:16:10.137389 3148 scope.go:117] "RemoveContainer" containerID="c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb"
Mar 14 00:16:10.140645 containerd[1937]: time="2026-03-14T00:16:10.140575754Z" level=info msg="RemoveContainer for \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\""
Mar 14 00:16:10.142211 systemd[1]: Removed slice kubepods-besteffort-podd7880c76_182c_44f3_99e8_6a915d275ae2.slice - libcontainer container kubepods-besteffort-podd7880c76_182c_44f3_99e8_6a915d275ae2.slice.
Mar 14 00:16:10.149960 containerd[1937]: time="2026-03-14T00:16:10.149834198Z" level=info msg="RemoveContainer for \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\" returns successfully"
Mar 14 00:16:10.150527 kubelet[3148]: I0314 00:16:10.150458 3148 scope.go:117] "RemoveContainer" containerID="d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045"
Mar 14 00:16:10.151051 containerd[1937]: time="2026-03-14T00:16:10.150905558Z" level=error msg="ContainerStatus for \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\": not found"
Mar 14 00:16:10.151336 kubelet[3148]: E0314 00:16:10.151269 3148 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\": not found" containerID="d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045"
Mar 14 00:16:10.151447 kubelet[3148]: I0314 00:16:10.151339 3148 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045"} err="failed to get container status \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\": rpc error: code = NotFound desc = an error occurred when try to find container \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\": not found"
Mar 14 00:16:10.151447 kubelet[3148]: I0314 00:16:10.151405 3148 scope.go:117] "RemoveContainer" containerID="c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb"
Mar 14 00:16:10.153073 containerd[1937]: time="2026-03-14T00:16:10.151801550Z" level=error msg="ContainerStatus for \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\": not found"
Mar 14 00:16:10.153267 kubelet[3148]: E0314 00:16:10.152173 3148 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\": not found" containerID="c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb"
Mar 14 00:16:10.153267 kubelet[3148]: I0314 00:16:10.152231 3148 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb"} err="failed to get container status \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\": not found"
Mar 14 00:16:10.153267 kubelet[3148]: I0314 00:16:10.152268 3148 scope.go:117] "RemoveContainer" containerID="d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045"
Mar 14 00:16:10.153755 containerd[1937]: time="2026-03-14T00:16:10.153494630Z" level=error msg="ContainerStatus for \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\": not found"
Mar 14 00:16:10.153885 kubelet[3148]: I0314 00:16:10.153741 3148 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045"} err="failed to get container status \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\": rpc error: code = NotFound desc = an error occurred when try to find container \"d576c9bdbad4f005a88983625f0d499dc3bf26743a90a506b981444243992045\": not found"
Mar 14 00:16:10.153885 kubelet[3148]: I0314 00:16:10.153790 3148 scope.go:117] "RemoveContainer" containerID="c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb"
Mar 14 00:16:10.155070 kubelet[3148]: I0314 00:16:10.154539 3148 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb"} err="failed to get container status \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\": not found"
Mar 14 00:16:10.155227 containerd[1937]: time="2026-03-14T00:16:10.154241258Z" level=error msg="ContainerStatus for \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7ebe0d367b107aae767548fea80d7d8955dfaf7eb51de5d074b363e102ccafb\": not found"
Mar 14 00:16:10.166716 kubelet[3148]: I0314 00:16:10.166488 3148 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d7880c76-182c-44f3-99e8-6a915d275ae2-whisker-backend-key-pair\") on node \"ip-172-31-26-130\" DevicePath \"\""
Mar 14 00:16:10.166716 kubelet[3148]: I0314 00:16:10.166589 3148 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6z2r8\" (UniqueName: \"kubernetes.io/projected/d7880c76-182c-44f3-99e8-6a915d275ae2-kube-api-access-6z2r8\") on node \"ip-172-31-26-130\" DevicePath \"\""
Mar 14 00:16:10.166716 kubelet[3148]: I0314 00:16:10.166654 3148 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d7880c76-182c-44f3-99e8-6a915d275ae2-nginx-config\") on node \"ip-172-31-26-130\" DevicePath \"\""
Mar 14 00:16:10.166716 kubelet[3148]: I0314 00:16:10.166677 3148 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7880c76-182c-44f3-99e8-6a915d275ae2-whisker-ca-bundle\") on node \"ip-172-31-26-130\" DevicePath \"\""
Mar 14 00:16:10.192235 kubelet[3148]: I0314 00:16:10.191793 3148 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7880c76-182c-44f3-99e8-6a915d275ae2" path="/var/lib/kubelet/pods/d7880c76-182c-44f3-99e8-6a915d275ae2/volumes"
Mar 14 00:16:12.233351 ntpd[1905]: Deleting interface #13 cali220f7bb61e5, fe80::ecee:eeff:feee:eeee%8#123, interface stats: received=0, sent=0, dropped=0, active_time=17 secs
Mar 14 00:16:12.233860 ntpd[1905]: 14 Mar 00:16:12 ntpd[1905]: Deleting interface #13 cali220f7bb61e5, fe80::ecee:eeff:feee:eeee%8#123, interface stats: received=0, sent=0, dropped=0, active_time=17 secs
Mar 14 00:16:13.061485 systemd[1]: Started sshd@13-172.31.26.130:22-68.220.241.50:46260.service - OpenSSH per-connection server daemon (68.220.241.50:46260).
Mar 14 00:16:13.572417 sshd[6643]: Accepted publickey for core from 68.220.241.50 port 46260 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:13.575993 sshd[6643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:13.584835 systemd-logind[1911]: New session 14 of user core.
Mar 14 00:16:13.592258 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:16:14.070045 sshd[6643]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:14.077114 systemd[1]: sshd@13-172.31.26.130:22-68.220.241.50:46260.service: Deactivated successfully.
Mar 14 00:16:14.083520 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:16:14.084912 systemd-logind[1911]: Session 14 logged out. Waiting for processes to exit.
Mar 14 00:16:14.086779 systemd-logind[1911]: Removed session 14.
Mar 14 00:16:14.172491 systemd[1]: Started sshd@14-172.31.26.130:22-68.220.241.50:46272.service - OpenSSH per-connection server daemon (68.220.241.50:46272).
Mar 14 00:16:14.686812 sshd[6656]: Accepted publickey for core from 68.220.241.50 port 46272 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:14.691257 sshd[6656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:14.700044 systemd-logind[1911]: New session 15 of user core.
Mar 14 00:16:14.709226 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 00:16:15.581763 sshd[6656]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:15.587525 systemd[1]: sshd@14-172.31.26.130:22-68.220.241.50:46272.service: Deactivated successfully.
Mar 14 00:16:15.591370 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:16:15.596782 systemd-logind[1911]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:16:15.598811 systemd-logind[1911]: Removed session 15.
Mar 14 00:16:15.683128 systemd[1]: Started sshd@15-172.31.26.130:22-68.220.241.50:46288.service - OpenSSH per-connection server daemon (68.220.241.50:46288).
Mar 14 00:16:16.181469 sshd[6694]: Accepted publickey for core from 68.220.241.50 port 46288 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:16.184605 sshd[6694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:16.199085 systemd-logind[1911]: New session 16 of user core.
Mar 14 00:16:16.204250 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:16:17.449835 sshd[6694]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:17.460581 systemd[1]: sshd@15-172.31.26.130:22-68.220.241.50:46288.service: Deactivated successfully.
Mar 14 00:16:17.468697 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:16:17.472470 systemd-logind[1911]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:16:17.476828 systemd-logind[1911]: Removed session 16.
Mar 14 00:16:17.562731 systemd[1]: Started sshd@16-172.31.26.130:22-68.220.241.50:46292.service - OpenSSH per-connection server daemon (68.220.241.50:46292).
Mar 14 00:16:18.104925 sshd[6726]: Accepted publickey for core from 68.220.241.50 port 46292 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:18.108349 sshd[6726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:18.121356 systemd-logind[1911]: New session 17 of user core.
Mar 14 00:16:18.129267 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:16:18.879420 sshd[6726]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:18.887288 systemd-logind[1911]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:16:18.889133 systemd[1]: sshd@16-172.31.26.130:22-68.220.241.50:46292.service: Deactivated successfully.
Mar 14 00:16:18.895844 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:16:18.897928 systemd-logind[1911]: Removed session 17.
Mar 14 00:16:18.979507 systemd[1]: Started sshd@17-172.31.26.130:22-68.220.241.50:46306.service - OpenSSH per-connection server daemon (68.220.241.50:46306).
Mar 14 00:16:19.529310 sshd[6740]: Accepted publickey for core from 68.220.241.50 port 46306 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:19.531154 sshd[6740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:19.539812 systemd-logind[1911]: New session 18 of user core.
Mar 14 00:16:19.550277 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:16:20.093681 sshd[6740]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:20.101405 systemd[1]: sshd@17-172.31.26.130:22-68.220.241.50:46306.service: Deactivated successfully.
Mar 14 00:16:20.106190 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:16:20.112894 systemd-logind[1911]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:16:20.115986 systemd-logind[1911]: Removed session 18.
Mar 14 00:16:25.182471 systemd[1]: Started sshd@18-172.31.26.130:22-68.220.241.50:51502.service - OpenSSH per-connection server daemon (68.220.241.50:51502).
Mar 14 00:16:25.693420 sshd[6784]: Accepted publickey for core from 68.220.241.50 port 51502 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:25.696239 sshd[6784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:25.706054 systemd-logind[1911]: New session 19 of user core.
Mar 14 00:16:25.713244 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:16:26.229877 sshd[6784]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:26.241415 systemd[1]: sshd@18-172.31.26.130:22-68.220.241.50:51502.service: Deactivated successfully.
Mar 14 00:16:26.248615 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:16:26.250750 systemd-logind[1911]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:16:26.252801 systemd-logind[1911]: Removed session 19.
Mar 14 00:16:30.301986 kubelet[3148]: I0314 00:16:30.300596 3148 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:16:31.338551 systemd[1]: Started sshd@19-172.31.26.130:22-68.220.241.50:51514.service - OpenSSH per-connection server daemon (68.220.241.50:51514).
Mar 14 00:16:31.856595 sshd[6848]: Accepted publickey for core from 68.220.241.50 port 51514 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:31.859481 sshd[6848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:31.869127 systemd-logind[1911]: New session 20 of user core.
Mar 14 00:16:31.879225 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:16:32.349284 sshd[6848]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:32.354241 systemd[1]: sshd@19-172.31.26.130:22-68.220.241.50:51514.service: Deactivated successfully.
Mar 14 00:16:32.360412 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:16:32.364089 systemd-logind[1911]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:16:32.367070 systemd-logind[1911]: Removed session 20.
Mar 14 00:16:33.647672 kubelet[3148]: I0314 00:16:33.647132 3148 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 14 00:16:37.442467 systemd[1]: Started sshd@20-172.31.26.130:22-68.220.241.50:35902.service - OpenSSH per-connection server daemon (68.220.241.50:35902).
Mar 14 00:16:37.952014 sshd[6891]: Accepted publickey for core from 68.220.241.50 port 35902 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:37.953920 sshd[6891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:37.962790 systemd-logind[1911]: New session 21 of user core.
Mar 14 00:16:37.970210 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 00:16:38.431676 sshd[6891]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:38.439187 systemd-logind[1911]: Session 21 logged out. Waiting for processes to exit.
Mar 14 00:16:38.440004 systemd[1]: sshd@20-172.31.26.130:22-68.220.241.50:35902.service: Deactivated successfully.
Mar 14 00:16:38.444873 systemd[1]: session-21.scope: Deactivated successfully.
Mar 14 00:16:38.447714 systemd-logind[1911]: Removed session 21.
Mar 14 00:16:43.525515 systemd[1]: Started sshd@21-172.31.26.130:22-68.220.241.50:49560.service - OpenSSH per-connection server daemon (68.220.241.50:49560).
Mar 14 00:16:44.045303 sshd[6904]: Accepted publickey for core from 68.220.241.50 port 49560 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:44.048121 sshd[6904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:44.058701 systemd-logind[1911]: New session 22 of user core.
Mar 14 00:16:44.063274 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 00:16:44.537578 sshd[6904]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:44.544851 systemd[1]: sshd@21-172.31.26.130:22-68.220.241.50:49560.service: Deactivated successfully.
Mar 14 00:16:44.549262 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 00:16:44.550884 systemd-logind[1911]: Session 22 logged out. Waiting for processes to exit.
Mar 14 00:16:44.555588 systemd-logind[1911]: Removed session 22.
Mar 14 00:16:49.633538 systemd[1]: Started sshd@22-172.31.26.130:22-68.220.241.50:49562.service - OpenSSH per-connection server daemon (68.220.241.50:49562).
Mar 14 00:16:50.160199 sshd[6941]: Accepted publickey for core from 68.220.241.50 port 49562 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:50.163144 sshd[6941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:50.177492 systemd-logind[1911]: New session 23 of user core.
Mar 14 00:16:50.181285 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 14 00:16:50.648665 sshd[6941]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:50.653887 systemd-logind[1911]: Session 23 logged out. Waiting for processes to exit.
Mar 14 00:16:50.654659 systemd[1]: sshd@22-172.31.26.130:22-68.220.241.50:49562.service: Deactivated successfully.
Mar 14 00:16:50.659220 systemd[1]: session-23.scope: Deactivated successfully.
Mar 14 00:16:50.665696 systemd-logind[1911]: Removed session 23.
Mar 14 00:16:55.834878 containerd[1937]: time="2026-03-14T00:16:55.834695821Z" level=info msg="StopPodSandbox for \"0d24a217da064b6d39fe42e0c76edfc6e3bcf68889ec067a4537d5522a9eba33\""
Mar 14 00:17:04.828228 kubelet[3148]: E0314 00:17:04.827877 3148 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-130?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 14 00:17:04.969411 systemd[1]: cri-containerd-1752256aa7458b7802c9bae0fc9b309c85787ae9ceffcebb9e5914161b53f352.scope: Deactivated successfully.
Mar 14 00:17:04.971397 systemd[1]: cri-containerd-1752256aa7458b7802c9bae0fc9b309c85787ae9ceffcebb9e5914161b53f352.scope: Consumed 4.926s CPU time, 22.6M memory peak, 0B memory swap peak.
Mar 14 00:17:05.019817 containerd[1937]: time="2026-03-14T00:17:05.019535875Z" level=info msg="shim disconnected" id=1752256aa7458b7802c9bae0fc9b309c85787ae9ceffcebb9e5914161b53f352 namespace=k8s.io
Mar 14 00:17:05.020762 containerd[1937]: time="2026-03-14T00:17:05.019798699Z" level=warning msg="cleaning up after shim disconnected" id=1752256aa7458b7802c9bae0fc9b309c85787ae9ceffcebb9e5914161b53f352 namespace=k8s.io
Mar 14 00:17:05.020762 containerd[1937]: time="2026-03-14T00:17:05.019848343Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:05.022920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1752256aa7458b7802c9bae0fc9b309c85787ae9ceffcebb9e5914161b53f352-rootfs.mount: Deactivated successfully.
Mar 14 00:17:05.295991 kubelet[3148]: I0314 00:17:05.294700 3148 scope.go:117] "RemoveContainer" containerID="1752256aa7458b7802c9bae0fc9b309c85787ae9ceffcebb9e5914161b53f352"
Mar 14 00:17:05.304211 containerd[1937]: time="2026-03-14T00:17:05.304124636Z" level=info msg="CreateContainer within sandbox \"47f47703000a4c2c8bf45f456dce6bd17e7bc23be629c750a11480b99f508b67\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 14 00:17:05.333927 containerd[1937]: time="2026-03-14T00:17:05.333782924Z" level=info msg="CreateContainer within sandbox \"47f47703000a4c2c8bf45f456dce6bd17e7bc23be629c750a11480b99f508b67\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ff8333ac7cea5d9ae9964f69c71b44ced1ae6c9632a91cb219f1e81943373cc2\""
Mar 14 00:17:05.336971 containerd[1937]: time="2026-03-14T00:17:05.335174672Z" level=info msg="StartContainer for \"ff8333ac7cea5d9ae9964f69c71b44ced1ae6c9632a91cb219f1e81943373cc2\""
Mar 14 00:17:05.405364 systemd[1]: Started cri-containerd-ff8333ac7cea5d9ae9964f69c71b44ced1ae6c9632a91cb219f1e81943373cc2.scope - libcontainer container ff8333ac7cea5d9ae9964f69c71b44ced1ae6c9632a91cb219f1e81943373cc2.
Mar 14 00:17:05.481144 containerd[1937]: time="2026-03-14T00:17:05.481071789Z" level=info msg="StartContainer for \"ff8333ac7cea5d9ae9964f69c71b44ced1ae6c9632a91cb219f1e81943373cc2\" returns successfully"
Mar 14 00:17:06.021095 systemd[1]: run-containerd-runc-k8s.io-ff8333ac7cea5d9ae9964f69c71b44ced1ae6c9632a91cb219f1e81943373cc2-runc.grtZGY.mount: Deactivated successfully.
Mar 14 00:17:06.146892 systemd[1]: cri-containerd-40dd9bdc747a186e71b3da118e3394be6c9b1954b5f56d60ab98612b737f251f.scope: Deactivated successfully.
Mar 14 00:17:06.149276 systemd[1]: cri-containerd-40dd9bdc747a186e71b3da118e3394be6c9b1954b5f56d60ab98612b737f251f.scope: Consumed 34.213s CPU time.
Mar 14 00:17:06.202268 containerd[1937]: time="2026-03-14T00:17:06.201932493Z" level=info msg="shim disconnected" id=40dd9bdc747a186e71b3da118e3394be6c9b1954b5f56d60ab98612b737f251f namespace=k8s.io
Mar 14 00:17:06.204254 containerd[1937]: time="2026-03-14T00:17:06.204112293Z" level=warning msg="cleaning up after shim disconnected" id=40dd9bdc747a186e71b3da118e3394be6c9b1954b5f56d60ab98612b737f251f namespace=k8s.io
Mar 14 00:17:06.204254 containerd[1937]: time="2026-03-14T00:17:06.204182709Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:06.212098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40dd9bdc747a186e71b3da118e3394be6c9b1954b5f56d60ab98612b737f251f-rootfs.mount: Deactivated successfully.
Mar 14 00:17:06.301000 kubelet[3148]: I0314 00:17:06.300526 3148 scope.go:117] "RemoveContainer" containerID="40dd9bdc747a186e71b3da118e3394be6c9b1954b5f56d60ab98612b737f251f"
Mar 14 00:17:06.304291 containerd[1937]: time="2026-03-14T00:17:06.303896817Z" level=info msg="CreateContainer within sandbox \"874ccf8d2d7cdeac615889c519f37d0131bfe8ce54bbd6aa0a3df16338c07ee1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 14 00:17:06.330256 containerd[1937]: time="2026-03-14T00:17:06.330181497Z" level=info msg="CreateContainer within sandbox \"874ccf8d2d7cdeac615889c519f37d0131bfe8ce54bbd6aa0a3df16338c07ee1\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"222533458f6a543409087162f58c420f20254dede2596d5cd9801695d108ae1c\""
Mar 14 00:17:06.333524 containerd[1937]: time="2026-03-14T00:17:06.333468201Z" level=info msg="StartContainer for \"222533458f6a543409087162f58c420f20254dede2596d5cd9801695d108ae1c\""
Mar 14 00:17:06.410279 systemd[1]: Started cri-containerd-222533458f6a543409087162f58c420f20254dede2596d5cd9801695d108ae1c.scope - libcontainer container 222533458f6a543409087162f58c420f20254dede2596d5cd9801695d108ae1c.
Mar 14 00:17:06.476800 containerd[1937]: time="2026-03-14T00:17:06.476732122Z" level=info msg="StartContainer for \"222533458f6a543409087162f58c420f20254dede2596d5cd9801695d108ae1c\" returns successfully"
Mar 14 00:17:09.403318 systemd[1]: cri-containerd-449be0ac4a09b6a365357fc2219efb0692234b6c67e80f2d01e208446de6bd1e.scope: Deactivated successfully.
Mar 14 00:17:09.405544 systemd[1]: cri-containerd-449be0ac4a09b6a365357fc2219efb0692234b6c67e80f2d01e208446de6bd1e.scope: Consumed 4.761s CPU time, 14.0M memory peak, 0B memory swap peak.
Mar 14 00:17:09.446996 containerd[1937]: time="2026-03-14T00:17:09.445118725Z" level=info msg="shim disconnected" id=449be0ac4a09b6a365357fc2219efb0692234b6c67e80f2d01e208446de6bd1e namespace=k8s.io
Mar 14 00:17:09.447533 containerd[1937]: time="2026-03-14T00:17:09.447002161Z" level=warning msg="cleaning up after shim disconnected" id=449be0ac4a09b6a365357fc2219efb0692234b6c67e80f2d01e208446de6bd1e namespace=k8s.io
Mar 14 00:17:09.447533 containerd[1937]: time="2026-03-14T00:17:09.447045757Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:09.452780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-449be0ac4a09b6a365357fc2219efb0692234b6c67e80f2d01e208446de6bd1e-rootfs.mount: Deactivated successfully.
Mar 14 00:17:10.335241 kubelet[3148]: I0314 00:17:10.335178 3148 scope.go:117] "RemoveContainer" containerID="449be0ac4a09b6a365357fc2219efb0692234b6c67e80f2d01e208446de6bd1e"
Mar 14 00:17:10.338976 containerd[1937]: time="2026-03-14T00:17:10.338741461Z" level=info msg="CreateContainer within sandbox \"d7bb5bb142f1d85aaf7f8d1c21ad906c5b8fbbd89c605f6fa89a1e0f5987c73d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 14 00:17:10.374007 containerd[1937]: time="2026-03-14T00:17:10.371332525Z" level=info msg="CreateContainer within sandbox \"d7bb5bb142f1d85aaf7f8d1c21ad906c5b8fbbd89c605f6fa89a1e0f5987c73d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b4ba957c3567b2abc3e07728e87d42c3dc8e8d65042a3962ef77b92ef7c4869b\""
Mar 14 00:17:10.374007 containerd[1937]: time="2026-03-14T00:17:10.372151297Z" level=info msg="StartContainer for \"b4ba957c3567b2abc3e07728e87d42c3dc8e8d65042a3962ef77b92ef7c4869b\""
Mar 14 00:17:10.374886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947099653.mount: Deactivated successfully.
Mar 14 00:17:10.439259 systemd[1]: Started cri-containerd-b4ba957c3567b2abc3e07728e87d42c3dc8e8d65042a3962ef77b92ef7c4869b.scope - libcontainer container b4ba957c3567b2abc3e07728e87d42c3dc8e8d65042a3962ef77b92ef7c4869b.
Mar 14 00:17:10.517980 containerd[1937]: time="2026-03-14T00:17:10.517137710Z" level=info msg="StartContainer for \"b4ba957c3567b2abc3e07728e87d42c3dc8e8d65042a3962ef77b92ef7c4869b\" returns successfully"
Mar 14 00:17:14.829686 kubelet[3148]: E0314 00:17:14.829380 3148 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-130?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 14 00:17:15.061392 systemd[1]: run-containerd-runc-k8s.io-af2fde5709097b44e30659b731daf1c92cc9e267fb4493be616f9eb48adc9937-runc.mBebll.mount: Deactivated successfully.
Mar 14 00:17:17.879315 systemd[1]: cri-containerd-222533458f6a543409087162f58c420f20254dede2596d5cd9801695d108ae1c.scope: Deactivated successfully.
Mar 14 00:17:17.921339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-222533458f6a543409087162f58c420f20254dede2596d5cd9801695d108ae1c-rootfs.mount: Deactivated successfully.
Mar 14 00:17:17.930132 containerd[1937]: time="2026-03-14T00:17:17.930054083Z" level=info msg="shim disconnected" id=222533458f6a543409087162f58c420f20254dede2596d5cd9801695d108ae1c namespace=k8s.io
Mar 14 00:17:17.930132 containerd[1937]: time="2026-03-14T00:17:17.930128747Z" level=warning msg="cleaning up after shim disconnected" id=222533458f6a543409087162f58c420f20254dede2596d5cd9801695d108ae1c namespace=k8s.io
Mar 14 00:17:17.930132 containerd[1937]: time="2026-03-14T00:17:17.930151343Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:18.367964 kubelet[3148]: I0314 00:17:18.367900 3148 scope.go:117] "RemoveContainer" containerID="40dd9bdc747a186e71b3da118e3394be6c9b1954b5f56d60ab98612b737f251f"
Mar 14 00:17:18.369507 kubelet[3148]: I0314 00:17:18.369097 3148 scope.go:117] "RemoveContainer" containerID="222533458f6a543409087162f58c420f20254dede2596d5cd9801695d108ae1c"
Mar 14 00:17:18.369507 kubelet[3148]: E0314 00:17:18.369387 3148 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6bf85f8dd-6k8kl_tigera-operator(d9bfb8b6-e9ea-4e5f-924a-50d4d5c98c25)\"" pod="tigera-operator/tigera-operator-6bf85f8dd-6k8kl" podUID="d9bfb8b6-e9ea-4e5f-924a-50d4d5c98c25"
Mar 14 00:17:18.370227 containerd[1937]: time="2026-03-14T00:17:18.370178661Z" level=info msg="RemoveContainer for \"40dd9bdc747a186e71b3da118e3394be6c9b1954b5f56d60ab98612b737f251f\""
Mar 14 00:17:18.378197 containerd[1937]: time="2026-03-14T00:17:18.377456721Z" level=info msg="RemoveContainer for \"40dd9bdc747a186e71b3da118e3394be6c9b1954b5f56d60ab98612b737f251f\" returns successfully"
Mar 14 00:17:24.831346 kubelet[3148]: E0314 00:17:24.831036 3148 controller.go:195] "Failed to update lease" err="Put \"https://172.31.26.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-130?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"