Mar 14 00:13:01.284074 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Mar 14 00:13:01.284120 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Mar 13 22:32:52 -00 2026
Mar 14 00:13:01.284145 kernel: KASLR disabled due to lack of seed
Mar 14 00:13:01.284201 kernel: efi: EFI v2.7 by EDK II
Mar 14 00:13:01.284223 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Mar 14 00:13:01.284239 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:13:01.284257 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Mar 14 00:13:01.284274 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Mar 14 00:13:01.284290 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Mar 14 00:13:01.284306 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Mar 14 00:13:01.284328 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Mar 14 00:13:01.284345 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Mar 14 00:13:01.284361 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Mar 14 00:13:01.284377 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Mar 14 00:13:01.284396 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Mar 14 00:13:01.284417 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Mar 14 00:13:01.284435 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Mar 14 00:13:01.284471 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Mar 14 00:13:01.284490 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Mar 14 00:13:01.284508 kernel: printk: bootconsole [uart0] enabled
Mar 14 00:13:01.284525 kernel: NUMA: Failed to initialise from firmware
Mar 14 00:13:01.284543 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 14 00:13:01.284559 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Mar 14 00:13:01.284576 kernel: Zone ranges:
Mar 14 00:13:01.284593 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Mar 14 00:13:01.284610 kernel: DMA32 empty
Mar 14 00:13:01.284633 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Mar 14 00:13:01.284650 kernel: Movable zone start for each node
Mar 14 00:13:01.284667 kernel: Early memory node ranges
Mar 14 00:13:01.284683 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Mar 14 00:13:01.284700 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Mar 14 00:13:01.284716 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Mar 14 00:13:01.284733 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Mar 14 00:13:01.284750 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Mar 14 00:13:01.284767 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Mar 14 00:13:01.284783 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Mar 14 00:13:01.284800 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Mar 14 00:13:01.284817 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Mar 14 00:13:01.284837 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Mar 14 00:13:01.284855 kernel: psci: probing for conduit method from ACPI.
Mar 14 00:13:01.284878 kernel: psci: PSCIv1.0 detected in firmware.
Mar 14 00:13:01.284896 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 14 00:13:01.284914 kernel: psci: Trusted OS migration not required
Mar 14 00:13:01.284935 kernel: psci: SMC Calling Convention v1.1
Mar 14 00:13:01.284954 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Mar 14 00:13:01.284971 kernel: percpu: Embedded 30 pages/cpu s85736 r8192 d28952 u122880
Mar 14 00:13:01.284989 kernel: pcpu-alloc: s85736 r8192 d28952 u122880 alloc=30*4096
Mar 14 00:13:01.285007 kernel: pcpu-alloc: [0] 0 [0] 1
Mar 14 00:13:01.285025 kernel: Detected PIPT I-cache on CPU0
Mar 14 00:13:01.285043 kernel: CPU features: detected: GIC system register CPU interface
Mar 14 00:13:01.285062 kernel: CPU features: detected: Spectre-v2
Mar 14 00:13:01.285081 kernel: CPU features: detected: Spectre-v3a
Mar 14 00:13:01.285100 kernel: CPU features: detected: Spectre-BHB
Mar 14 00:13:01.285120 kernel: CPU features: detected: ARM erratum 1742098
Mar 14 00:13:01.285145 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Mar 14 00:13:01.288223 kernel: alternatives: applying boot alternatives
Mar 14 00:13:01.288268 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:13:01.288289 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:13:01.288307 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:13:01.288326 kernel: Fallback order for Node 0: 0
Mar 14 00:13:01.288343 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Mar 14 00:13:01.288362 kernel: Policy zone: Normal
Mar 14 00:13:01.288380 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:13:01.288398 kernel: software IO TLB: area num 2.
Mar 14 00:13:01.288415 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Mar 14 00:13:01.288461 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8116K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Mar 14 00:13:01.288487 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 14 00:13:01.288505 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:13:01.288524 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:13:01.288543 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 14 00:13:01.288561 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:13:01.288579 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:13:01.288597 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:13:01.288616 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 14 00:13:01.288634 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 14 00:13:01.288652 kernel: GICv3: 96 SPIs implemented
Mar 14 00:13:01.288675 kernel: GICv3: 0 Extended SPIs implemented
Mar 14 00:13:01.288694 kernel: Root IRQ handler: gic_handle_irq
Mar 14 00:13:01.288712 kernel: GICv3: GICv3 features: 16 PPIs
Mar 14 00:13:01.288730 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Mar 14 00:13:01.288747 kernel: ITS [mem 0x10080000-0x1009ffff]
Mar 14 00:13:01.288765 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 14 00:13:01.288784 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Mar 14 00:13:01.288802 kernel: GICv3: using LPI property table @0x00000004000d0000
Mar 14 00:13:01.288819 kernel: ITS: Using hypervisor restricted LPI range [128]
Mar 14 00:13:01.288837 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Mar 14 00:13:01.288855 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:13:01.288873 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Mar 14 00:13:01.288896 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Mar 14 00:13:01.288915 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Mar 14 00:13:01.288933 kernel: Console: colour dummy device 80x25
Mar 14 00:13:01.288951 kernel: printk: console [tty1] enabled
Mar 14 00:13:01.288969 kernel: ACPI: Core revision 20230628
Mar 14 00:13:01.288988 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Mar 14 00:13:01.289006 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:13:01.289024 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:13:01.289042 kernel: landlock: Up and running.
Mar 14 00:13:01.289064 kernel: SELinux: Initializing.
Mar 14 00:13:01.289083 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:13:01.289101 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:13:01.289120 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:13:01.289139 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 14 00:13:01.289156 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:13:01.289225 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:13:01.289245 kernel: Platform MSI: ITS@0x10080000 domain created
Mar 14 00:13:01.289263 kernel: PCI/MSI: ITS@0x10080000 domain created
Mar 14 00:13:01.289287 kernel: Remapping and enabling EFI services.
Mar 14 00:13:01.289305 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:13:01.289324 kernel: Detected PIPT I-cache on CPU1
Mar 14 00:13:01.289342 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Mar 14 00:13:01.289360 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Mar 14 00:13:01.289378 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Mar 14 00:13:01.289397 kernel: smp: Brought up 1 node, 2 CPUs
Mar 14 00:13:01.289416 kernel: SMP: Total of 2 processors activated.
Mar 14 00:13:01.289434 kernel: CPU features: detected: 32-bit EL0 Support
Mar 14 00:13:01.289457 kernel: CPU features: detected: 32-bit EL1 Support
Mar 14 00:13:01.289475 kernel: CPU features: detected: CRC32 instructions
Mar 14 00:13:01.289494 kernel: CPU: All CPU(s) started at EL1
Mar 14 00:13:01.289527 kernel: alternatives: applying system-wide alternatives
Mar 14 00:13:01.289551 kernel: devtmpfs: initialized
Mar 14 00:13:01.289571 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:13:01.289591 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 14 00:13:01.289610 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:13:01.289630 kernel: SMBIOS 3.0.0 present.
Mar 14 00:13:01.289655 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Mar 14 00:13:01.289674 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:13:01.289695 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 14 00:13:01.289716 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 14 00:13:01.289735 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 14 00:13:01.289754 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:13:01.289774 kernel: audit: type=2000 audit(0.289:1): state=initialized audit_enabled=0 res=1
Mar 14 00:13:01.289793 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:13:01.289819 kernel: cpuidle: using governor menu
Mar 14 00:13:01.289838 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 14 00:13:01.289858 kernel: ASID allocator initialised with 65536 entries
Mar 14 00:13:01.289878 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:13:01.289898 kernel: Serial: AMBA PL011 UART driver
Mar 14 00:13:01.289917 kernel: Modules: 17488 pages in range for non-PLT usage
Mar 14 00:13:01.289936 kernel: Modules: 509008 pages in range for PLT usage
Mar 14 00:13:01.289959 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:13:01.289979 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:13:01.290003 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 14 00:13:01.290022 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 14 00:13:01.290041 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:13:01.290060 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:13:01.290079 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 14 00:13:01.290098 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 14 00:13:01.290119 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:13:01.290138 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:13:01.290157 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:13:01.292239 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:13:01.292260 kernel: ACPI: Interpreter enabled
Mar 14 00:13:01.292280 kernel: ACPI: Using GIC for interrupt routing
Mar 14 00:13:01.292299 kernel: ACPI: MCFG table detected, 1 entries
Mar 14 00:13:01.292318 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Mar 14 00:13:01.292646 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:13:01.292871 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 14 00:13:01.293090 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 14 00:13:01.295907 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Mar 14 00:13:01.296127 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Mar 14 00:13:01.296153 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Mar 14 00:13:01.296197 kernel: acpiphp: Slot [1] registered
Mar 14 00:13:01.296218 kernel: acpiphp: Slot [2] registered
Mar 14 00:13:01.296237 kernel: acpiphp: Slot [3] registered
Mar 14 00:13:01.296256 kernel: acpiphp: Slot [4] registered
Mar 14 00:13:01.296275 kernel: acpiphp: Slot [5] registered
Mar 14 00:13:01.296303 kernel: acpiphp: Slot [6] registered
Mar 14 00:13:01.296322 kernel: acpiphp: Slot [7] registered
Mar 14 00:13:01.296342 kernel: acpiphp: Slot [8] registered
Mar 14 00:13:01.296360 kernel: acpiphp: Slot [9] registered
Mar 14 00:13:01.296379 kernel: acpiphp: Slot [10] registered
Mar 14 00:13:01.296398 kernel: acpiphp: Slot [11] registered
Mar 14 00:13:01.296417 kernel: acpiphp: Slot [12] registered
Mar 14 00:13:01.296436 kernel: acpiphp: Slot [13] registered
Mar 14 00:13:01.296474 kernel: acpiphp: Slot [14] registered
Mar 14 00:13:01.296496 kernel: acpiphp: Slot [15] registered
Mar 14 00:13:01.296523 kernel: acpiphp: Slot [16] registered
Mar 14 00:13:01.296543 kernel: acpiphp: Slot [17] registered
Mar 14 00:13:01.296562 kernel: acpiphp: Slot [18] registered
Mar 14 00:13:01.296582 kernel: acpiphp: Slot [19] registered
Mar 14 00:13:01.296602 kernel: acpiphp: Slot [20] registered
Mar 14 00:13:01.296623 kernel: acpiphp: Slot [21] registered
Mar 14 00:13:01.296641 kernel: acpiphp: Slot [22] registered
Mar 14 00:13:01.296660 kernel: acpiphp: Slot [23] registered
Mar 14 00:13:01.296679 kernel: acpiphp: Slot [24] registered
Mar 14 00:13:01.296702 kernel: acpiphp: Slot [25] registered
Mar 14 00:13:01.296721 kernel: acpiphp: Slot [26] registered
Mar 14 00:13:01.296740 kernel: acpiphp: Slot [27] registered
Mar 14 00:13:01.296759 kernel: acpiphp: Slot [28] registered
Mar 14 00:13:01.296778 kernel: acpiphp: Slot [29] registered
Mar 14 00:13:01.296797 kernel: acpiphp: Slot [30] registered
Mar 14 00:13:01.296816 kernel: acpiphp: Slot [31] registered
Mar 14 00:13:01.296835 kernel: PCI host bridge to bus 0000:00
Mar 14 00:13:01.297093 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Mar 14 00:13:01.297348 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 14 00:13:01.297543 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Mar 14 00:13:01.297731 kernel: pci_bus 0000:00: root bus resource [bus 00]
Mar 14 00:13:01.297970 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Mar 14 00:13:01.299786 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Mar 14 00:13:01.300064 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Mar 14 00:13:01.300349 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Mar 14 00:13:01.300592 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Mar 14 00:13:01.300886 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 14 00:13:01.301113 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Mar 14 00:13:01.301368 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Mar 14 00:13:01.301577 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Mar 14 00:13:01.301787 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Mar 14 00:13:01.302046 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Mar 14 00:13:01.302384 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Mar 14 00:13:01.302576 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 14 00:13:01.302773 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Mar 14 00:13:01.302800 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 14 00:13:01.302820 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 14 00:13:01.302839 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 14 00:13:01.302858 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 14 00:13:01.302888 kernel: iommu: Default domain type: Translated
Mar 14 00:13:01.302907 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 14 00:13:01.302926 kernel: efivars: Registered efivars operations
Mar 14 00:13:01.302945 kernel: vgaarb: loaded
Mar 14 00:13:01.302964 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 14 00:13:01.302983 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:13:01.303002 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:13:01.303021 kernel: pnp: PnP ACPI init
Mar 14 00:13:01.303320 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Mar 14 00:13:01.303357 kernel: pnp: PnP ACPI: found 1 devices
Mar 14 00:13:01.303377 kernel: NET: Registered PF_INET protocol family
Mar 14 00:13:01.303397 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:13:01.303416 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:13:01.303436 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:13:01.303455 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:13:01.303475 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 14 00:13:01.303495 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 14 00:13:01.303518 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:13:01.303539 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:13:01.303560 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:13:01.303579 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:13:01.303598 kernel: kvm [1]: HYP mode not available
Mar 14 00:13:01.303617 kernel: Initialise system trusted keyrings
Mar 14 00:13:01.303636 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 14 00:13:01.303657 kernel: Key type asymmetric registered
Mar 14 00:13:01.303676 kernel: Asymmetric key parser 'x509' registered
Mar 14 00:13:01.303699 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 14 00:13:01.303719 kernel: io scheduler mq-deadline registered
Mar 14 00:13:01.303738 kernel: io scheduler kyber registered
Mar 14 00:13:01.303756 kernel: io scheduler bfq registered
Mar 14 00:13:01.303989 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Mar 14 00:13:01.304020 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 14 00:13:01.304039 kernel: ACPI: button: Power Button [PWRB]
Mar 14 00:13:01.304058 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Mar 14 00:13:01.304083 kernel: ACPI: button: Sleep Button [SLPB]
Mar 14 00:13:01.304103 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:13:01.304123 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Mar 14 00:13:01.304423 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Mar 14 00:13:01.304473 kernel: printk: console [ttyS0] disabled
Mar 14 00:13:01.304497 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Mar 14 00:13:01.304516 kernel: printk: console [ttyS0] enabled
Mar 14 00:13:01.304536 kernel: printk: bootconsole [uart0] disabled
Mar 14 00:13:01.304556 kernel: thunder_xcv, ver 1.0
Mar 14 00:13:01.304574 kernel: thunder_bgx, ver 1.0
Mar 14 00:13:01.304602 kernel: nicpf, ver 1.0
Mar 14 00:13:01.304620 kernel: nicvf, ver 1.0
Mar 14 00:13:01.304847 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 14 00:13:01.305047 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-03-14T00:13:00 UTC (1773447180)
Mar 14 00:13:01.305073 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 14 00:13:01.305092 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Mar 14 00:13:01.305111 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 14 00:13:01.305136 kernel: watchdog: Hard watchdog permanently disabled
Mar 14 00:13:01.305155 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:13:01.305198 kernel: Segment Routing with IPv6
Mar 14 00:13:01.305218 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:13:01.305236 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:13:01.305255 kernel: Key type dns_resolver registered
Mar 14 00:13:01.305273 kernel: registered taskstats version 1
Mar 14 00:13:01.305292 kernel: Loading compiled-in X.509 certificates
Mar 14 00:13:01.305311 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 16e13a4d63c54048487d2b18c824fa4694264505'
Mar 14 00:13:01.305329 kernel: Key type .fscrypt registered
Mar 14 00:13:01.305354 kernel: Key type fscrypt-provisioning registered
Mar 14 00:13:01.305372 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:13:01.305391 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:13:01.305410 kernel: ima: No architecture policies found
Mar 14 00:13:01.305429 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 14 00:13:01.305447 kernel: clk: Disabling unused clocks
Mar 14 00:13:01.305466 kernel: Freeing unused kernel memory: 39424K
Mar 14 00:13:01.305484 kernel: Run /init as init process
Mar 14 00:13:01.305502 kernel: with arguments:
Mar 14 00:13:01.305525 kernel: /init
Mar 14 00:13:01.305544 kernel: with environment:
Mar 14 00:13:01.305562 kernel: HOME=/
Mar 14 00:13:01.305580 kernel: TERM=linux
Mar 14 00:13:01.305603 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:13:01.305626 systemd[1]: Detected virtualization amazon.
Mar 14 00:13:01.305647 systemd[1]: Detected architecture arm64.
Mar 14 00:13:01.305671 systemd[1]: Running in initrd.
Mar 14 00:13:01.305692 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:13:01.305712 systemd[1]: Hostname set to .
Mar 14 00:13:01.305733 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:13:01.305753 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:13:01.305773 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:13:01.305793 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:13:01.305815 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:13:01.305839 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:13:01.305861 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:13:01.305882 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:13:01.305905 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:13:01.305926 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:13:01.305947 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:13:01.305968 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:13:01.305993 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:13:01.306014 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:13:01.306034 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:13:01.306054 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:13:01.306075 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:13:01.306095 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:13:01.306116 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:13:01.306137 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:13:01.306157 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:13:01.306204 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:13:01.306226 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:13:01.306247 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:13:01.306267 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:13:01.306288 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:13:01.306309 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:13:01.306329 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:13:01.306350 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:13:01.306374 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:13:01.306395 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:01.306416 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:13:01.306436 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:13:01.306457 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:13:01.306479 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:13:01.306540 systemd-journald[252]: Collecting audit messages is disabled.
Mar 14 00:13:01.306585 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:01.306607 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:01.306633 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:13:01.306652 kernel: Bridge firewalling registered
Mar 14 00:13:01.306672 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:13:01.306692 systemd-journald[252]: Journal started
Mar 14 00:13:01.306730 systemd-journald[252]: Runtime Journal (/run/log/journal/ec247f86c0467cd3498b9dbcf17d823f) is 8.0M, max 75.3M, 67.3M free.
Mar 14 00:13:01.252230 systemd-modules-load[253]: Inserted module 'overlay'
Mar 14 00:13:01.303600 systemd-modules-load[253]: Inserted module 'br_netfilter'
Mar 14 00:13:01.315730 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:13:01.320963 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:13:01.329461 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:13:01.339236 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:13:01.349520 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:13:01.378831 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:13:01.384204 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:13:01.397760 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:01.409421 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:13:01.415770 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:13:01.425528 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:13:01.450424 dracut-cmdline[285]: dracut-dracut-053
Mar 14 00:13:01.458784 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=704dcf876dede90264a8630d1e6c631c8df8e652c7e2ae2e5d334e632916c980
Mar 14 00:13:01.534821 systemd-resolved[288]: Positive Trust Anchors:
Mar 14 00:13:01.534859 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:13:01.534924 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:13:01.641213 kernel: SCSI subsystem initialized
Mar 14 00:13:01.649198 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:13:01.663217 kernel: iscsi: registered transport (tcp)
Mar 14 00:13:01.685993 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:13:01.686070 kernel: QLogic iSCSI HBA Driver
Mar 14 00:13:01.765226 kernel: random: crng init done
Mar 14 00:13:01.765961 systemd-resolved[288]: Defaulting to hostname 'linux'.
Mar 14 00:13:01.770467 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:13:01.780743 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:13:01.804730 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:13:01.817560 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:13:01.864655 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:13:01.864735 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:13:01.866754 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:13:01.937241 kernel: raid6: neonx8 gen() 6653 MB/s
Mar 14 00:13:01.955223 kernel: raid6: neonx4 gen() 6394 MB/s
Mar 14 00:13:01.973227 kernel: raid6: neonx2 gen() 5361 MB/s
Mar 14 00:13:01.991221 kernel: raid6: neonx1 gen() 3924 MB/s
Mar 14 00:13:02.009229 kernel: raid6: int64x8 gen() 3773 MB/s
Mar 14 00:13:02.026223 kernel: raid6: int64x4 gen() 3678 MB/s
Mar 14 00:13:02.044226 kernel: raid6: int64x2 gen() 3568 MB/s
Mar 14 00:13:02.062471 kernel: raid6: int64x1 gen() 2736 MB/s
Mar 14 00:13:02.062554 kernel: raid6: using algorithm neonx8 gen() 6653 MB/s
Mar 14 00:13:02.081830 kernel: raid6: .... xor() 4864 MB/s, rmw enabled
Mar 14 00:13:02.081928 kernel: raid6: using neon recovery algorithm
Mar 14 00:13:02.090216 kernel: xor: measuring software checksum speed
Mar 14 00:13:02.092919 kernel: 8regs : 9903 MB/sec
Mar 14 00:13:02.092989 kernel: 32regs : 11898 MB/sec
Mar 14 00:13:02.095846 kernel: arm64_neon : 8914 MB/sec
Mar 14 00:13:02.095919 kernel: xor: using function: 32regs (11898 MB/sec)
Mar 14 00:13:02.184224 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:13:02.205078 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:13:02.221440 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:13:02.254854 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Mar 14 00:13:02.263878 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:13:02.279543 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:13:02.320580 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Mar 14 00:13:02.387244 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:13:02.399497 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:13:02.530078 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:13:02.545430 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:13:02.600384 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:13:02.607772 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:13:02.613985 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:13:02.617123 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:13:02.637014 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:13:02.687345 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:13:02.748081 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 14 00:13:02.748152 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Mar 14 00:13:02.756839 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:13:02.764959 kernel: ena 0000:00:05.0: ENA device version: 0.10
Mar 14 00:13:02.768545 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Mar 14 00:13:02.757118 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:02.765216 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:02.768580 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:13:02.768888 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:02.771987 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:02.799189 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:75:65:02:9c:63
Mar 14 00:13:02.803082 (udev-worker)[525]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:13:02.807731 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:02.844047 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Mar 14 00:13:02.844128 kernel: nvme nvme0: pci function 0000:00:04.0
Mar 14 00:13:02.855360 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:02.868202 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Mar 14 00:13:02.870560 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:02.886408 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:13:02.886479 kernel: GPT:9289727 != 33554431
Mar 14 00:13:02.886505 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:13:02.887751 kernel: GPT:9289727 != 33554431
Mar 14 00:13:02.887808 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:13:02.889054 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:02.902744 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:02.992228 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (516)
Mar 14 00:13:03.004775 kernel: BTRFS: device fsid df62721e-ebc0-40bc-8956-1227b067a773 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (518)
Mar 14 00:13:03.071863 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Mar 14 00:13:03.121595 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Mar 14 00:13:03.139964 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 14 00:13:03.154275 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Mar 14 00:13:03.154878 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Mar 14 00:13:03.171438 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:13:03.186984 disk-uuid[663]: Primary Header is updated.
Mar 14 00:13:03.186984 disk-uuid[663]: Secondary Entries is updated.
Mar 14 00:13:03.186984 disk-uuid[663]: Secondary Header is updated.
Mar 14 00:13:03.201216 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:03.211212 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:04.221250 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Mar 14 00:13:04.221322 disk-uuid[664]: The operation has completed successfully.
Mar 14 00:13:04.422307 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:13:04.422544 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:13:04.473483 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:13:04.498585 sh[922]: Success
Mar 14 00:13:04.530750 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 14 00:13:04.618682 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:13:04.632425 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:13:04.649245 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:13:04.681567 kernel: BTRFS info (device dm-0): first mount of filesystem df62721e-ebc0-40bc-8956-1227b067a773
Mar 14 00:13:04.681643 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:04.681670 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:13:04.683188 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:13:04.684556 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:13:04.818208 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 14 00:13:04.832477 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:13:04.837228 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:13:04.852653 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:13:04.854547 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:13:04.895354 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:04.895447 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:04.897545 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:13:04.917256 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:13:04.938345 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:13:04.944987 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:04.957101 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:13:04.972620 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:13:05.094982 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:13:05.115529 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:13:05.164531 systemd-networkd[1115]: lo: Link UP
Mar 14 00:13:05.164554 systemd-networkd[1115]: lo: Gained carrier
Mar 14 00:13:05.170578 systemd-networkd[1115]: Enumeration completed
Mar 14 00:13:05.170767 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:13:05.173548 systemd[1]: Reached target network.target - Network.
Mar 14 00:13:05.173824 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:05.173831 systemd-networkd[1115]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:13:05.182603 systemd-networkd[1115]: eth0: Link UP
Mar 14 00:13:05.182612 systemd-networkd[1115]: eth0: Gained carrier
Mar 14 00:13:05.182632 systemd-networkd[1115]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:05.214308 systemd-networkd[1115]: eth0: DHCPv4 address 172.31.18.130/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 14 00:13:05.444263 ignition[1035]: Ignition 2.19.0
Mar 14 00:13:05.444932 ignition[1035]: Stage: fetch-offline
Mar 14 00:13:05.446880 ignition[1035]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:05.446909 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:05.447790 ignition[1035]: Ignition finished successfully
Mar 14 00:13:05.460573 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:13:05.473540 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 14 00:13:05.514889 ignition[1125]: Ignition 2.19.0
Mar 14 00:13:05.516935 ignition[1125]: Stage: fetch
Mar 14 00:13:05.519277 ignition[1125]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:05.519328 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:05.521653 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:05.536927 ignition[1125]: PUT result: OK
Mar 14 00:13:05.540561 ignition[1125]: parsed url from cmdline: ""
Mar 14 00:13:05.540580 ignition[1125]: no config URL provided
Mar 14 00:13:05.540596 ignition[1125]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:13:05.540624 ignition[1125]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:13:05.540659 ignition[1125]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:05.543254 ignition[1125]: PUT result: OK
Mar 14 00:13:05.543352 ignition[1125]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Mar 14 00:13:05.548677 ignition[1125]: GET result: OK
Mar 14 00:13:05.548909 ignition[1125]: parsing config with SHA512: 4038425d0167766e6f2c9bb8b09ea89f868323a5b3dbdd018a2d2803b274275595752cec2acf26fb5d2691fd1faaf54ce472516e67819b400e7e12ea8200e294
Mar 14 00:13:05.567147 unknown[1125]: fetched base config from "system"
Mar 14 00:13:05.567226 unknown[1125]: fetched base config from "system"
Mar 14 00:13:05.567244 unknown[1125]: fetched user config from "aws"
Mar 14 00:13:05.570392 ignition[1125]: fetch: fetch complete
Mar 14 00:13:05.570407 ignition[1125]: fetch: fetch passed
Mar 14 00:13:05.572809 ignition[1125]: Ignition finished successfully
Mar 14 00:13:05.583380 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 14 00:13:05.602549 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:13:05.629749 ignition[1132]: Ignition 2.19.0
Mar 14 00:13:05.629787 ignition[1132]: Stage: kargs
Mar 14 00:13:05.631946 ignition[1132]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:05.631977 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:05.633418 ignition[1132]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:05.638568 ignition[1132]: PUT result: OK
Mar 14 00:13:05.649503 ignition[1132]: kargs: kargs passed
Mar 14 00:13:05.649706 ignition[1132]: Ignition finished successfully
Mar 14 00:13:05.657267 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:13:05.669668 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:13:05.707955 ignition[1138]: Ignition 2.19.0
Mar 14 00:13:05.708014 ignition[1138]: Stage: disks
Mar 14 00:13:05.710157 ignition[1138]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:05.710233 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:05.710503 ignition[1138]: PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:05.713830 ignition[1138]: PUT result: OK
Mar 14 00:13:05.727292 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:13:05.722456 ignition[1138]: disks: disks passed
Mar 14 00:13:05.733055 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:13:05.722589 ignition[1138]: Ignition finished successfully
Mar 14 00:13:05.736211 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:13:05.739409 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:13:05.742711 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:13:05.765131 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:13:05.779624 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:13:05.830872 systemd-fsck[1146]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 14 00:13:05.838968 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:13:05.859373 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:13:05.961223 kernel: EXT4-fs (nvme0n1p9): mounted filesystem af566013-4e57-4e7f-9689-a2e15898536d r/w with ordered data mode. Quota mode: none.
Mar 14 00:13:05.963053 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:13:05.968750 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:13:05.987373 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:13:05.997432 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:13:06.005007 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:13:06.005119 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:13:06.005207 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:13:06.030304 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1165)
Mar 14 00:13:06.035955 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:06.036036 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:06.036082 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:13:06.046895 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:13:06.061254 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:13:06.056367 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:13:06.073639 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:13:06.469282 initrd-setup-root[1189]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:13:06.504048 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:13:06.527350 initrd-setup-root[1203]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:13:06.538739 initrd-setup-root[1210]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:13:06.781536 systemd-networkd[1115]: eth0: Gained IPv6LL
Mar 14 00:13:06.941610 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:13:06.953431 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:13:06.973492 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:13:06.995298 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:13:06.998797 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:07.036764 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:13:07.048456 ignition[1278]: INFO : Ignition 2.19.0
Mar 14 00:13:07.050658 ignition[1278]: INFO : Stage: mount
Mar 14 00:13:07.050658 ignition[1278]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:07.050658 ignition[1278]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:07.058122 ignition[1278]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:07.061729 ignition[1278]: INFO : PUT result: OK
Mar 14 00:13:07.069278 ignition[1278]: INFO : mount: mount passed
Mar 14 00:13:07.071448 ignition[1278]: INFO : Ignition finished successfully
Mar 14 00:13:07.074145 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:13:07.092406 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:13:07.107117 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:13:07.140204 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1290)
Mar 14 00:13:07.145541 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 46234e4d-1d66-4ce6-8ed2-e270b1beee70
Mar 14 00:13:07.145613 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Mar 14 00:13:07.147020 kernel: BTRFS info (device nvme0n1p6): using free space tree
Mar 14 00:13:07.152214 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Mar 14 00:13:07.157134 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:13:07.198341 ignition[1307]: INFO : Ignition 2.19.0
Mar 14 00:13:07.198341 ignition[1307]: INFO : Stage: files
Mar 14 00:13:07.202952 ignition[1307]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:07.202952 ignition[1307]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:07.202952 ignition[1307]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:07.202952 ignition[1307]: INFO : PUT result: OK
Mar 14 00:13:07.216323 ignition[1307]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:13:07.231429 ignition[1307]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:13:07.231429 ignition[1307]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:13:07.271334 ignition[1307]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:13:07.274959 ignition[1307]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:13:07.278724 unknown[1307]: wrote ssh authorized keys file for user: core
Mar 14 00:13:07.281621 ignition[1307]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:13:07.294584 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 14 00:13:07.294584 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Mar 14 00:13:07.379791 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:13:07.539799 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Mar 14 00:13:07.548001 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:13:07.548001 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:13:07.548001 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:13:07.548001 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:13:07.548001 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:13:07.548001 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:13:07.548001 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:13:07.548001 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:13:07.548001 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:13:07.548001 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:13:07.548001 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 14 00:13:07.548001 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 14 00:13:07.548001 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 14 00:13:07.548001 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-arm64.raw: attempt #1
Mar 14 00:13:08.048186 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 14 00:13:08.492886 ignition[1307]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-arm64.raw"
Mar 14 00:13:08.492886 ignition[1307]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 14 00:13:08.502950 ignition[1307]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:13:08.502950 ignition[1307]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:13:08.502950 ignition[1307]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 14 00:13:08.502950 ignition[1307]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:13:08.502950 ignition[1307]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:13:08.502950 ignition[1307]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:13:08.502950 ignition[1307]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:13:08.502950 ignition[1307]: INFO : files: files passed
Mar 14 00:13:08.502950 ignition[1307]: INFO : Ignition finished successfully
Mar 14 00:13:08.524269 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:13:08.544001 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:13:08.558273 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:13:08.573770 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:13:08.576326 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:13:08.610205 initrd-setup-root-after-ignition[1335]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:08.610205 initrd-setup-root-after-ignition[1335]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:08.618446 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:08.626317 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:13:08.630110 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:13:08.653367 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:13:08.708386 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:13:08.710317 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:13:08.717788 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:13:08.725206 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:13:08.727814 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:13:08.740478 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:13:08.779196 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:13:08.790654 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:13:08.824845 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:13:08.831637 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:13:08.834958 systemd[1]: Stopped target timers.target - Timer Units.
Mar 14 00:13:08.838592 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 14 00:13:08.839195 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:13:08.851789 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 14 00:13:08.854997 systemd[1]: Stopped target basic.target - Basic System.
Mar 14 00:13:08.861036 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 14 00:13:08.866095 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:13:08.874828 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 14 00:13:08.878241 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 14 00:13:08.884219 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:13:08.890025 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 14 00:13:08.895756 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 14 00:13:08.899001 systemd[1]: Stopped target swap.target - Swaps.
Mar 14 00:13:08.902984 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 14 00:13:08.903337 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:13:08.908724 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:13:08.914104 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:13:08.920229 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 14 00:13:08.922392 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:13:08.922929 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:13:08.923324 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:13:08.931525 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:13:08.931886 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:13:08.935486 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:13:08.935933 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:13:08.954336 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:13:08.964812 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:13:08.972676 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:13:08.973748 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:13:08.988550 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:13:08.990398 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:13:09.010442 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:13:09.010685 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:13:09.050046 ignition[1359]: INFO : Ignition 2.19.0
Mar 14 00:13:09.052917 ignition[1359]: INFO : Stage: umount
Mar 14 00:13:09.052917 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:09.052917 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Mar 14 00:13:09.067401 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Mar 14 00:13:09.067401 ignition[1359]: INFO : PUT result: OK
Mar 14 00:13:09.061046 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:13:09.072366 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:13:09.074721 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:13:09.090834 ignition[1359]: INFO : umount: umount passed
Mar 14 00:13:09.090834 ignition[1359]: INFO : Ignition finished successfully
Mar 14 00:13:09.095015 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:13:09.095787 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:13:09.107006 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:13:09.107124 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:13:09.110047 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:13:09.110195 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:13:09.112803 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 14 00:13:09.112909 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 14 00:13:09.115603 systemd[1]: Stopped target network.target - Network.
Mar 14 00:13:09.117804 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:13:09.117928 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:13:09.120921 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:13:09.123219 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:13:09.127823 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:13:09.131097 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:13:09.133431 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:13:09.135917 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:13:09.136013 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:13:09.138919 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:13:09.139010 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:13:09.141697 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:13:09.141819 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:13:09.144443 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:13:09.144560 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:13:09.147387 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:13:09.147497 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:13:09.150857 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:13:09.167241 systemd-networkd[1115]: eth0: DHCPv6 lease lost
Mar 14 00:13:09.174308 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:13:09.177496 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:13:09.177739 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:13:09.182501 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:13:09.182643 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:13:09.218464 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:13:09.227348 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:13:09.227494 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:13:09.233787 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:13:09.249140 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:13:09.249457 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:13:09.290991 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:13:09.291603 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:13:09.303501 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:13:09.303704 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:13:09.313517 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:13:09.313623 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:13:09.323567 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:13:09.323718 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:13:09.328899 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:13:09.329024 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:13:09.332599 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:13:09.332736 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:09.359490 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:13:09.365571 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:13:09.366319 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:13:09.377197 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:13:09.377537 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:13:09.385496 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:13:09.385624 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:13:09.388760 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 14 00:13:09.388898 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:13:09.392451 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:13:09.392590 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:13:09.395784 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:13:09.395910 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:13:09.399235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:13:09.399372 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:09.403423 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:13:09.403740 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:13:09.439356 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:13:09.443626 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:13:09.448479 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:13:09.463673 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:13:09.489579 systemd[1]: Switching root.
Mar 14 00:13:09.544768 systemd-journald[252]: Journal stopped
Mar 14 00:13:12.271658 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:13:12.271818 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:13:12.271867 kernel: SELinux: policy capability open_perms=1
Mar 14 00:13:12.271910 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:13:12.271953 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:13:12.271986 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:13:12.272018 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:13:12.272051 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:13:12.272083 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:13:12.272115 kernel: audit: type=1403 audit(1773447190.111:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:13:12.275245 systemd[1]: Successfully loaded SELinux policy in 70.205ms.
Mar 14 00:13:12.275356 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.384ms.
Mar 14 00:13:12.275397 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:13:12.275432 systemd[1]: Detected virtualization amazon.
Mar 14 00:13:12.275466 systemd[1]: Detected architecture arm64.
Mar 14 00:13:12.275498 systemd[1]: Detected first boot.
Mar 14 00:13:12.275533 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:13:12.275567 zram_generator::config[1401]: No configuration found.
Mar 14 00:13:12.275620 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:13:12.275655 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:13:12.275694 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:13:12.275727 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:13:12.275764 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:13:12.275800 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:13:12.275834 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:13:12.275868 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:13:12.275902 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:13:12.275936 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:13:12.275974 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:13:12.276008 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:13:12.276045 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:13:12.276078 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:13:12.276110 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:13:12.276140 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:13:12.279053 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:13:12.279117 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:13:12.279151 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:13:12.279238 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:13:12.279279 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:13:12.279313 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:13:12.279348 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:13:12.279379 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:13:12.279410 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:13:12.279441 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:13:12.279472 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:13:12.279512 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:13:12.279543 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:13:12.279578 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:13:12.279608 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:13:12.279639 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:13:12.279672 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:13:12.279716 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:13:12.279747 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:13:12.279780 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:13:12.279815 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:13:12.279849 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:13:12.279879 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:13:12.279913 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:13:12.279948 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:13:12.279980 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:13:12.280015 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:13:12.280050 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:13:12.280086 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:13:12.280118 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:13:12.280148 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:13:12.280867 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:13:12.282827 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:13:12.282882 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:13:12.282915 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:13:12.282950 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:13:12.282985 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:13:12.283029 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:13:12.283063 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:13:12.283095 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:13:12.283129 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:13:12.283183 kernel: fuse: init (API version 7.39)
Mar 14 00:13:12.286223 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:13:12.286318 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:13:12.286362 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:13:12.286394 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:13:12.286437 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 00:13:12.286469 kernel: loop: module loaded
Mar 14 00:13:12.286502 systemd[1]: Stopped verity-setup.service.
Mar 14 00:13:12.286535 kernel: ACPI: bus type drm_connector registered
Mar 14 00:13:12.286565 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:13:12.286596 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:13:12.286630 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:13:12.286664 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:13:12.286703 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:13:12.286737 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:13:12.286769 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:13:12.286800 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:13:12.286831 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:13:12.286864 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:13:12.286969 systemd-journald[1486]: Collecting audit messages is disabled.
Mar 14 00:13:12.287032 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:13:12.287066 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:13:12.287097 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:13:12.287130 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:13:12.287193 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:13:12.287231 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:13:12.287271 systemd-journald[1486]: Journal started
Mar 14 00:13:12.287326 systemd-journald[1486]: Runtime Journal (/run/log/journal/ec247f86c0467cd3498b9dbcf17d823f) is 8.0M, max 75.3M, 67.3M free.
Mar 14 00:13:11.517058 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:13:11.591218 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Mar 14 00:13:11.592111 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 00:13:12.293525 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:13:12.303238 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:13:12.305491 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:13:12.309470 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:13:12.309828 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:13:12.314874 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:13:12.318766 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:13:12.322871 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:13:12.355967 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:13:12.367422 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:13:12.378720 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:13:12.386418 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:13:12.386488 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:13:12.394943 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:13:12.413687 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:13:12.423499 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:13:12.426675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:13:12.445725 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:13:12.456005 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:13:12.459295 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:13:12.464646 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:13:12.468512 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:13:12.471517 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:13:12.480547 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:13:12.490512 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:13:12.500901 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:13:12.505567 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:13:12.509316 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:13:12.558853 systemd-journald[1486]: Time spent on flushing to /var/log/journal/ec247f86c0467cd3498b9dbcf17d823f is 177.514ms for 900 entries.
Mar 14 00:13:12.558853 systemd-journald[1486]: System Journal (/var/log/journal/ec247f86c0467cd3498b9dbcf17d823f) is 8.0M, max 195.6M, 187.6M free.
Mar 14 00:13:12.767483 systemd-journald[1486]: Received client request to flush runtime journal.
Mar 14 00:13:12.767579 kernel: loop0: detected capacity change from 0 to 114432
Mar 14 00:13:12.767644 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:13:12.585261 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:13:12.600645 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:13:12.604187 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:13:12.607517 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:13:12.617585 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:13:12.678630 udevadm[1537]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 14 00:13:12.719714 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:13:12.754477 systemd-tmpfiles[1531]: ACLs are not supported, ignoring.
Mar 14 00:13:12.754505 systemd-tmpfiles[1531]: ACLs are not supported, ignoring.
Mar 14 00:13:12.774740 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:13:12.788584 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:13:12.791915 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:13:12.796496 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:13:12.814884 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:13:12.840793 kernel: loop1: detected capacity change from 0 to 114328
Mar 14 00:13:12.908816 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:13:12.924643 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:13:12.977006 systemd-tmpfiles[1552]: ACLs are not supported, ignoring.
Mar 14 00:13:12.977052 systemd-tmpfiles[1552]: ACLs are not supported, ignoring.
Mar 14 00:13:12.979214 kernel: loop2: detected capacity change from 0 to 197488
Mar 14 00:13:12.990299 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:13:13.048279 kernel: loop3: detected capacity change from 0 to 52536
Mar 14 00:13:13.176203 kernel: loop4: detected capacity change from 0 to 114432
Mar 14 00:13:13.207214 kernel: loop5: detected capacity change from 0 to 114328
Mar 14 00:13:13.225341 kernel: loop6: detected capacity change from 0 to 197488
Mar 14 00:13:13.264207 kernel: loop7: detected capacity change from 0 to 52536
Mar 14 00:13:13.279609 (sd-merge)[1558]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Mar 14 00:13:13.280882 (sd-merge)[1558]: Merged extensions into '/usr'.
Mar 14 00:13:13.291680 systemd[1]: Reloading requested from client PID 1530 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:13:13.291711 systemd[1]: Reloading...
Mar 14 00:13:13.426216 zram_generator::config[1580]: No configuration found.
Mar 14 00:13:13.834139 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:13:13.966330 systemd[1]: Reloading finished in 673 ms.
Mar 14 00:13:14.025046 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:13:14.029625 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:13:14.050594 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:13:14.058473 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:13:14.086405 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:13:14.100229 systemd[1]: Reloading requested from client PID 1636 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:13:14.100269 systemd[1]: Reloading...
Mar 14 00:13:14.126037 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 00:13:14.126832 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 00:13:14.135456 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 14 00:13:14.137764 systemd-tmpfiles[1637]: ACLs are not supported, ignoring.
Mar 14 00:13:14.137999 systemd-tmpfiles[1637]: ACLs are not supported, ignoring.
Mar 14 00:13:14.154446 systemd-tmpfiles[1637]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:13:14.154478 systemd-tmpfiles[1637]: Skipping /boot
Mar 14 00:13:14.219520 systemd-tmpfiles[1637]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:13:14.219556 systemd-tmpfiles[1637]: Skipping /boot
Mar 14 00:13:14.282084 systemd-udevd[1638]: Using default interface naming scheme 'v255'.
Mar 14 00:13:14.345240 zram_generator::config[1663]: No configuration found.
Mar 14 00:13:14.446080 ldconfig[1525]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 00:13:14.571419 (udev-worker)[1699]: Network interface NamePolicy= disabled on kernel command line.
Mar 14 00:13:14.792312 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:13:14.799218 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1693)
Mar 14 00:13:14.955798 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 14 00:13:14.956501 systemd[1]: Reloading finished in 855 ms.
Mar 14 00:13:15.018826 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:13:15.025531 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 14 00:13:15.059272 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:13:15.188773 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 14 00:13:15.208210 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Mar 14 00:13:15.212236 systemd[1]: Finished ensure-sysext.service.
Mar 14 00:13:15.225582 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 14 00:13:15.238724 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 14 00:13:15.242457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:13:15.257261 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 14 00:13:15.274546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:13:15.293619 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:13:15.307562 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:13:15.316665 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:13:15.320755 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:13:15.325721 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 14 00:13:15.334624 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 14 00:13:15.345579 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:13:15.357660 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:13:15.365382 lvm[1838]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:13:15.360353 systemd[1]: Reached target time-set.target - System Time Set.
Mar 14 00:13:15.368868 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 14 00:13:15.375501 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:15.380806 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:13:15.383425 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:13:15.387089 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:13:15.387483 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:13:15.419209 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 14 00:13:15.440252 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 14 00:13:15.459656 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:13:15.460896 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:13:15.464041 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 14 00:13:15.471137 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:13:15.475780 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:13:15.476336 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:13:15.480851 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:13:15.519578 augenrules[1870]: No rules
Mar 14 00:13:15.526264 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:13:15.562402 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 14 00:13:15.568292 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 14 00:13:15.574667 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:13:15.587907 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 14 00:13:15.599754 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 14 00:13:15.626239 lvm[1880]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:13:15.673053 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 14 00:13:15.683691 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 14 00:13:15.690892 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 14 00:13:15.723249 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 14 00:13:15.726661 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 14 00:13:15.758292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:15.832054 systemd-networkd[1852]: lo: Link UP
Mar 14 00:13:15.832086 systemd-networkd[1852]: lo: Gained carrier
Mar 14 00:13:15.835208 systemd-networkd[1852]: Enumeration completed
Mar 14 00:13:15.835424 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:13:15.837562 systemd-networkd[1852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:15.837571 systemd-networkd[1852]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:13:15.842841 systemd-networkd[1852]: eth0: Link UP
Mar 14 00:13:15.843412 systemd-networkd[1852]: eth0: Gained carrier
Mar 14 00:13:15.843450 systemd-networkd[1852]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:15.845555 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 14 00:13:15.852218 systemd-resolved[1853]: Positive Trust Anchors:
Mar 14 00:13:15.852257 systemd-resolved[1853]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:13:15.852322 systemd-resolved[1853]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:13:15.855357 systemd-networkd[1852]: eth0: DHCPv4 address 172.31.18.130/20, gateway 172.31.16.1 acquired from 172.31.16.1
Mar 14 00:13:15.870786 systemd-resolved[1853]: Defaulting to hostname 'linux'.
Mar 14 00:13:15.874689 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:13:15.877963 systemd[1]: Reached target network.target - Network.
Mar 14 00:13:15.880445 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:13:15.883733 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:13:15.886630 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 14 00:13:15.889716 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 14 00:13:15.893454 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 14 00:13:15.896714 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 14 00:13:15.899937 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 14 00:13:15.903211 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 14 00:13:15.903275 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:13:15.905569 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:13:15.909286 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 14 00:13:15.914771 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 14 00:13:15.930081 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 14 00:13:15.933850 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 14 00:13:15.936730 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:13:15.939077 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:13:15.941715 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:13:15.941774 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:13:15.953411 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 14 00:13:15.973688 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 14 00:13:15.979680 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 14 00:13:15.995650 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 14 00:13:16.006495 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 14 00:13:16.011591 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 14 00:13:16.023610 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 14 00:13:16.028201 jq[1901]: false Mar 14 00:13:16.045455 systemd[1]: Started ntpd.service - Network Time Service. Mar 14 00:13:16.056131 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 14 00:13:16.062703 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 14 00:13:16.068601 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 14 00:13:16.078577 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 14 00:13:16.087651 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 14 00:13:16.092084 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 14 00:13:16.094122 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 14 00:13:16.096083 systemd[1]: Starting update-engine.service - Update Engine... Mar 14 00:13:16.103696 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 14 00:13:16.117010 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 14 00:13:16.117424 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 14 00:13:16.193100 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 14 00:13:16.197294 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 14 00:13:16.220513 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 14 00:13:16.218914 dbus-daemon[1900]: [system] SELinux support is enabled Mar 14 00:13:16.227396 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 14 00:13:16.227482 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 14 00:13:16.231217 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 14 00:13:16.231263 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 14 00:13:16.266457 extend-filesystems[1902]: Found loop4 Mar 14 00:13:16.266457 extend-filesystems[1902]: Found loop5 Mar 14 00:13:16.266457 extend-filesystems[1902]: Found loop6 Mar 14 00:13:16.266457 extend-filesystems[1902]: Found loop7 Mar 14 00:13:16.266457 extend-filesystems[1902]: Found nvme0n1 Mar 14 00:13:16.266457 extend-filesystems[1902]: Found nvme0n1p1 Mar 14 00:13:16.266457 extend-filesystems[1902]: Found nvme0n1p2 Mar 14 00:13:16.266457 extend-filesystems[1902]: Found nvme0n1p3 Mar 14 00:13:16.266457 extend-filesystems[1902]: Found usr Mar 14 00:13:16.266457 extend-filesystems[1902]: Found nvme0n1p4 Mar 14 00:13:16.266457 extend-filesystems[1902]: Found nvme0n1p6 Mar 14 00:13:16.266457 extend-filesystems[1902]: Found nvme0n1p7 Mar 14 00:13:16.266457 extend-filesystems[1902]: Found nvme0n1p9 Mar 14 00:13:16.266457 extend-filesystems[1902]: Checking size of /dev/nvme0n1p9 Mar 14 00:13:16.277944 systemd[1]: motdgen.service: Deactivated successfully. 
Mar 14 00:13:16.321487 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:57:55 UTC 2026 (1): Starting Mar 14 00:13:16.321487 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 14 00:13:16.321487 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: ---------------------------------------------------- Mar 14 00:13:16.321487 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: ntp-4 is maintained by Network Time Foundation, Mar 14 00:13:16.321487 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 14 00:13:16.321487 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: corporation. Support and training for ntp-4 are Mar 14 00:13:16.321487 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: available at https://www.nwtime.org/support Mar 14 00:13:16.321487 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: ---------------------------------------------------- Mar 14 00:13:16.340428 jq[1915]: true Mar 14 00:13:16.294791 dbus-daemon[1900]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1852 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Mar 14 00:13:16.278440 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 14 00:13:16.349505 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: proto: precision = 0.096 usec (-23) Mar 14 00:13:16.349505 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: basedate set to 2026-03-01 Mar 14 00:13:16.349505 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: gps base set to 2026-03-01 (week 2408) Mar 14 00:13:16.299019 ntpd[1904]: ntpd 4.2.8p17@1.4004-o Fri Mar 13 21:57:55 UTC 2026 (1): Starting Mar 14 00:13:16.350118 tar[1927]: linux-arm64/LICENSE Mar 14 00:13:16.350118 tar[1927]: linux-arm64/helm Mar 14 00:13:16.337677 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Mar 14 00:13:16.299069 ntpd[1904]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Mar 14 00:13:16.299090 ntpd[1904]: ---------------------------------------------------- Mar 14 00:13:16.299109 ntpd[1904]: ntp-4 is maintained by Network Time Foundation, Mar 14 00:13:16.361453 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: Listen and drop on 0 v6wildcard [::]:123 Mar 14 00:13:16.361453 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 14 00:13:16.361453 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: Listen normally on 2 lo 127.0.0.1:123 Mar 14 00:13:16.361453 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: Listen normally on 3 eth0 172.31.18.130:123 Mar 14 00:13:16.361453 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: Listen normally on 4 lo [::1]:123 Mar 14 00:13:16.361453 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: bind(21) AF_INET6 fe80::475:65ff:fe02:9c63%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:13:16.361453 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: unable to create socket on eth0 (5) for fe80::475:65ff:fe02:9c63%2#123 Mar 14 00:13:16.361453 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: failed to init interface for address fe80::475:65ff:fe02:9c63%2 Mar 14 00:13:16.361453 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: Listening on routing socket on fd #21 for interface updates Mar 14 00:13:16.299128 ntpd[1904]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Mar 14 00:13:16.299146 ntpd[1904]: corporation. 
Support and training for ntp-4 are Mar 14 00:13:16.302540 ntpd[1904]: available at https://www.nwtime.org/support Mar 14 00:13:16.302576 ntpd[1904]: ---------------------------------------------------- Mar 14 00:13:16.325857 ntpd[1904]: proto: precision = 0.096 usec (-23) Mar 14 00:13:16.329571 ntpd[1904]: basedate set to 2026-03-01 Mar 14 00:13:16.329609 ntpd[1904]: gps base set to 2026-03-01 (week 2408) Mar 14 00:13:16.352776 ntpd[1904]: Listen and drop on 0 v6wildcard [::]:123 Mar 14 00:13:16.352872 ntpd[1904]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Mar 14 00:13:16.357259 ntpd[1904]: Listen normally on 2 lo 127.0.0.1:123 Mar 14 00:13:16.357380 ntpd[1904]: Listen normally on 3 eth0 172.31.18.130:123 Mar 14 00:13:16.357452 ntpd[1904]: Listen normally on 4 lo [::1]:123 Mar 14 00:13:16.357539 ntpd[1904]: bind(21) AF_INET6 fe80::475:65ff:fe02:9c63%2#123 flags 0x11 failed: Cannot assign requested address Mar 14 00:13:16.357581 ntpd[1904]: unable to create socket on eth0 (5) for fe80::475:65ff:fe02:9c63%2#123 Mar 14 00:13:16.357609 ntpd[1904]: failed to init interface for address fe80::475:65ff:fe02:9c63%2 Mar 14 00:13:16.357673 ntpd[1904]: Listening on routing socket on fd #21 for interface updates Mar 14 00:13:16.370896 (ntainerd)[1934]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:13:16.410690 systemd[1]: Finished setup-oem.service - Setup OEM. 
Mar 14 00:13:16.416260 ntpd[1904]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:13:16.417324 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:13:16.417324 ntpd[1904]: 14 Mar 00:13:16 ntpd[1904]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:13:16.416328 ntpd[1904]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Mar 14 00:13:16.441004 extend-filesystems[1902]: Resized partition /dev/nvme0n1p9 Mar 14 00:13:16.451803 jq[1940]: true Mar 14 00:13:16.468350 extend-filesystems[1951]: resize2fs 1.47.1 (20-May-2024) Mar 14 00:13:16.491807 coreos-metadata[1899]: Mar 14 00:13:16.471 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 14 00:13:16.491807 coreos-metadata[1899]: Mar 14 00:13:16.478 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Mar 14 00:13:16.501364 coreos-metadata[1899]: Mar 14 00:13:16.492 INFO Fetch successful Mar 14 00:13:16.501364 coreos-metadata[1899]: Mar 14 00:13:16.492 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Mar 14 00:13:16.501364 coreos-metadata[1899]: Mar 14 00:13:16.499 INFO Fetch successful Mar 14 00:13:16.501364 coreos-metadata[1899]: Mar 14 00:13:16.499 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Mar 14 00:13:16.501713 update_engine[1914]: I20260314 00:13:16.500502 1914 main.cc:92] Flatcar Update Engine starting Mar 14 00:13:16.510000 coreos-metadata[1899]: Mar 14 00:13:16.502 INFO Fetch successful Mar 14 00:13:16.510000 coreos-metadata[1899]: Mar 14 00:13:16.502 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Mar 14 00:13:16.510000 coreos-metadata[1899]: Mar 14 00:13:16.509 INFO Fetch successful Mar 14 00:13:16.510000 coreos-metadata[1899]: Mar 14 00:13:16.509 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Mar 14 00:13:16.511211 kernel: EXT4-fs (nvme0n1p9): 
resizing filesystem from 553472 to 3587067 blocks Mar 14 00:13:16.517078 coreos-metadata[1899]: Mar 14 00:13:16.515 INFO Fetch failed with 404: resource not found Mar 14 00:13:16.517078 coreos-metadata[1899]: Mar 14 00:13:16.515 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Mar 14 00:13:16.520272 coreos-metadata[1899]: Mar 14 00:13:16.519 INFO Fetch successful Mar 14 00:13:16.520272 coreos-metadata[1899]: Mar 14 00:13:16.519 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Mar 14 00:13:16.528427 coreos-metadata[1899]: Mar 14 00:13:16.527 INFO Fetch successful Mar 14 00:13:16.528427 coreos-metadata[1899]: Mar 14 00:13:16.527 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Mar 14 00:13:16.534825 systemd[1]: Started update-engine.service - Update Engine. Mar 14 00:13:16.543508 coreos-metadata[1899]: Mar 14 00:13:16.534 INFO Fetch successful Mar 14 00:13:16.543508 coreos-metadata[1899]: Mar 14 00:13:16.534 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Mar 14 00:13:16.549807 update_engine[1914]: I20260314 00:13:16.549506 1914 update_check_scheduler.cc:74] Next update check in 11m11s Mar 14 00:13:16.555214 coreos-metadata[1899]: Mar 14 00:13:16.553 INFO Fetch successful Mar 14 00:13:16.555214 coreos-metadata[1899]: Mar 14 00:13:16.553 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Mar 14 00:13:16.561695 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 14 00:13:16.575396 coreos-metadata[1899]: Mar 14 00:13:16.575 INFO Fetch successful Mar 14 00:13:16.583944 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Mar 14 00:13:16.654628 systemd-logind[1913]: Watching system buttons on /dev/input/event0 (Power Button) Mar 14 00:13:16.654674 systemd-logind[1913]: Watching system buttons on /dev/input/event1 (Sleep Button) Mar 14 00:13:16.655865 systemd-logind[1913]: New seat seat0. Mar 14 00:13:16.658454 systemd[1]: Started systemd-logind.service - User Login Management. Mar 14 00:13:16.757206 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Mar 14 00:13:16.781338 extend-filesystems[1951]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Mar 14 00:13:16.781338 extend-filesystems[1951]: old_desc_blocks = 1, new_desc_blocks = 2 Mar 14 00:13:16.781338 extend-filesystems[1951]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Mar 14 00:13:16.802393 extend-filesystems[1902]: Resized filesystem in /dev/nvme0n1p9 Mar 14 00:13:16.799117 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 14 00:13:16.799537 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 14 00:13:16.811244 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 14 00:13:16.814777 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 14 00:13:16.833293 bash[1981]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:13:16.839772 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:13:16.859997 systemd[1]: Starting sshkeys.service... Mar 14 00:13:16.917210 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1703) Mar 14 00:13:16.960494 locksmithd[1955]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:13:16.978564 dbus-daemon[1900]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 14 00:13:16.978857 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Mar 14 00:13:16.983657 dbus-daemon[1900]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1939 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 14 00:13:16.994061 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 14 00:13:17.023970 systemd-networkd[1852]: eth0: Gained IPv6LL Mar 14 00:13:17.043153 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 14 00:13:17.051255 systemd[1]: Starting polkit.service - Authorization Manager... Mar 14 00:13:17.055814 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 14 00:13:17.066079 systemd[1]: Reached target network-online.target - Network is Online. Mar 14 00:13:17.079925 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Mar 14 00:13:17.089748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:13:17.099874 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:13:17.174503 polkitd[2023]: Started polkitd version 121 Mar 14 00:13:17.244480 amazon-ssm-agent[2026]: Initializing new seelog logger Mar 14 00:13:17.263824 amazon-ssm-agent[2026]: New Seelog Logger Creation Complete Mar 14 00:13:17.263824 amazon-ssm-agent[2026]: 2026/03/14 00:13:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:13:17.263824 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:13:17.263824 amazon-ssm-agent[2026]: 2026/03/14 00:13:17 processing appconfig overrides Mar 14 00:13:17.263824 amazon-ssm-agent[2026]: 2026/03/14 00:13:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:13:17.263824 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Mar 14 00:13:17.263824 amazon-ssm-agent[2026]: 2026/03/14 00:13:17 processing appconfig overrides Mar 14 00:13:17.263824 amazon-ssm-agent[2026]: 2026/03/14 00:13:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:13:17.263824 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:13:17.263824 amazon-ssm-agent[2026]: 2026/03/14 00:13:17 processing appconfig overrides Mar 14 00:13:17.263824 amazon-ssm-agent[2026]: 2026-03-14 00:13:17 INFO Proxy environment variables: Mar 14 00:13:17.263824 amazon-ssm-agent[2026]: 2026/03/14 00:13:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:13:17.263824 amazon-ssm-agent[2026]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Mar 14 00:13:17.263824 amazon-ssm-agent[2026]: 2026/03/14 00:13:17 processing appconfig overrides Mar 14 00:13:17.262756 polkitd[2023]: Loading rules from directory /etc/polkit-1/rules.d Mar 14 00:13:17.262889 polkitd[2023]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 14 00:13:17.281116 polkitd[2023]: Finished loading, compiling and executing 2 rules Mar 14 00:13:17.291413 dbus-daemon[1900]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 14 00:13:17.291733 systemd[1]: Started polkit.service - Authorization Manager. Mar 14 00:13:17.296652 polkitd[2023]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 14 00:13:17.331420 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 14 00:13:17.353732 amazon-ssm-agent[2026]: 2026-03-14 00:13:17 INFO https_proxy: Mar 14 00:13:17.377150 systemd-hostnamed[1939]: Hostname set to (transient) Mar 14 00:13:17.378996 systemd-resolved[1853]: System hostname changed to 'ip-172-31-18-130'. 
Mar 14 00:13:17.451332 amazon-ssm-agent[2026]: 2026-03-14 00:13:17 INFO http_proxy: Mar 14 00:13:17.520563 coreos-metadata[2013]: Mar 14 00:13:17.519 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Mar 14 00:13:17.523454 coreos-metadata[2013]: Mar 14 00:13:17.523 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Mar 14 00:13:17.527205 coreos-metadata[2013]: Mar 14 00:13:17.526 INFO Fetch successful Mar 14 00:13:17.527205 coreos-metadata[2013]: Mar 14 00:13:17.526 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 14 00:13:17.530472 coreos-metadata[2013]: Mar 14 00:13:17.530 INFO Fetch successful Mar 14 00:13:17.537343 unknown[2013]: wrote ssh authorized keys file for user: core Mar 14 00:13:17.552033 amazon-ssm-agent[2026]: 2026-03-14 00:13:17 INFO no_proxy: Mar 14 00:13:17.578220 containerd[1934]: time="2026-03-14T00:13:17.575083992Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:13:17.640294 update-ssh-keys[2095]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:13:17.643298 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 14 00:13:17.650341 amazon-ssm-agent[2026]: 2026-03-14 00:13:17 INFO Checking if agent identity type OnPrem can be assumed Mar 14 00:13:17.682034 systemd[1]: Finished sshkeys.service. Mar 14 00:13:17.749044 amazon-ssm-agent[2026]: 2026-03-14 00:13:17 INFO Checking if agent identity type EC2 can be assumed Mar 14 00:13:17.786587 containerd[1934]: time="2026-03-14T00:13:17.786482401Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:17.817914 containerd[1934]: time="2026-03-14T00:13:17.817517306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:13:17.817914 containerd[1934]: time="2026-03-14T00:13:17.817603250Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:13:17.817914 containerd[1934]: time="2026-03-14T00:13:17.817643774Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 14 00:13:17.818128 containerd[1934]: time="2026-03-14T00:13:17.817995182Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 14 00:13:17.818128 containerd[1934]: time="2026-03-14T00:13:17.818040650Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:17.818596 containerd[1934]: time="2026-03-14T00:13:17.818221658Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:13:17.818596 containerd[1934]: time="2026-03-14T00:13:17.818264870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:17.818692 containerd[1934]: time="2026-03-14T00:13:17.818651426Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:13:17.818743 containerd[1934]: time="2026-03-14T00:13:17.818689874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Mar 14 00:13:17.818743 containerd[1934]: time="2026-03-14T00:13:17.818727830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:13:17.818826 containerd[1934]: time="2026-03-14T00:13:17.818753114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:17.819024 containerd[1934]: time="2026-03-14T00:13:17.818955374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:17.841009 containerd[1934]: time="2026-03-14T00:13:17.840913898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:13:17.845921 containerd[1934]: time="2026-03-14T00:13:17.844375310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:13:17.845921 containerd[1934]: time="2026-03-14T00:13:17.844449410Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 14 00:13:17.845921 containerd[1934]: time="2026-03-14T00:13:17.844731026Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 14 00:13:17.845921 containerd[1934]: time="2026-03-14T00:13:17.844847882Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:13:17.848343 amazon-ssm-agent[2026]: 2026-03-14 00:13:17 INFO Agent will take identity from EC2 Mar 14 00:13:17.872212 containerd[1934]: time="2026-03-14T00:13:17.869393594Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Mar 14 00:13:17.872212 containerd[1934]: time="2026-03-14T00:13:17.869524322Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 14 00:13:17.872212 containerd[1934]: time="2026-03-14T00:13:17.869653430Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:13:17.872212 containerd[1934]: time="2026-03-14T00:13:17.869702150Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:13:17.872212 containerd[1934]: time="2026-03-14T00:13:17.869782010Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 14 00:13:17.872212 containerd[1934]: time="2026-03-14T00:13:17.870209798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:13:17.877204 containerd[1934]: time="2026-03-14T00:13:17.875920610Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:13:17.877204 containerd[1934]: time="2026-03-14T00:13:17.876328502Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:13:17.877204 containerd[1934]: time="2026-03-14T00:13:17.876393254Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 14 00:13:17.877204 containerd[1934]: time="2026-03-14T00:13:17.876429854Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 14 00:13:17.877204 containerd[1934]: time="2026-03-14T00:13:17.876463430Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Mar 14 00:13:17.877204 containerd[1934]: time="2026-03-14T00:13:17.876496058Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 14 00:13:17.877204 containerd[1934]: time="2026-03-14T00:13:17.876538346Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:13:17.877204 containerd[1934]: time="2026-03-14T00:13:17.876572294Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:13:17.877204 containerd[1934]: time="2026-03-14T00:13:17.876605942Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 14 00:13:17.877204 containerd[1934]: time="2026-03-14T00:13:17.876636890Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:13:17.877204 containerd[1934]: time="2026-03-14T00:13:17.876669770Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:13:17.877204 containerd[1934]: time="2026-03-14T00:13:17.876702686Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:13:17.877204 containerd[1934]: time="2026-03-14T00:13:17.876747806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 14 00:13:17.877204 containerd[1934]: time="2026-03-14T00:13:17.876783962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 14 00:13:17.877923 containerd[1934]: time="2026-03-14T00:13:17.876815858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Mar 14 00:13:17.877923 containerd[1934]: time="2026-03-14T00:13:17.876849362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 14 00:13:17.877923 containerd[1934]: time="2026-03-14T00:13:17.876879218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:13:17.877923 containerd[1934]: time="2026-03-14T00:13:17.876911906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:13:17.877923 containerd[1934]: time="2026-03-14T00:13:17.876948386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:13:17.877923 containerd[1934]: time="2026-03-14T00:13:17.876982454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 14 00:13:17.877923 containerd[1934]: time="2026-03-14T00:13:17.877013486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:13:17.877923 containerd[1934]: time="2026-03-14T00:13:17.877067198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:13:17.877923 containerd[1934]: time="2026-03-14T00:13:17.877101470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:13:17.877923 containerd[1934]: time="2026-03-14T00:13:17.877132154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 14 00:13:17.885834 containerd[1934]: time="2026-03-14T00:13:17.885742670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 14 00:13:17.885971 containerd[1934]: time="2026-03-14T00:13:17.885845522Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Mar 14 00:13:17.885971 containerd[1934]: time="2026-03-14T00:13:17.885928406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 14 00:13:17.887808 containerd[1934]: time="2026-03-14T00:13:17.885964838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:13:17.887808 containerd[1934]: time="2026-03-14T00:13:17.885994550Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:13:17.887808 containerd[1934]: time="2026-03-14T00:13:17.886305482Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:13:17.887808 containerd[1934]: time="2026-03-14T00:13:17.886667210Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:13:17.887808 containerd[1934]: time="2026-03-14T00:13:17.886706534Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 14 00:13:17.887808 containerd[1934]: time="2026-03-14T00:13:17.886737950Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:13:17.887808 containerd[1934]: time="2026-03-14T00:13:17.886785602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 14 00:13:17.887808 containerd[1934]: time="2026-03-14T00:13:17.886819490Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 14 00:13:17.887808 containerd[1934]: time="2026-03-14T00:13:17.886844114Z" level=info msg="NRI interface is disabled by configuration." 
Mar 14 00:13:17.887808 containerd[1934]: time="2026-03-14T00:13:17.886871414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 14 00:13:17.893242 containerd[1934]: time="2026-03-14T00:13:17.890392850Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} 
MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 14 00:13:17.893242 containerd[1934]: time="2026-03-14T00:13:17.890562350Z" level=info msg="Connect containerd service" Mar 14 00:13:17.893242 containerd[1934]: time="2026-03-14T00:13:17.890655986Z" level=info msg="using legacy CRI server" Mar 14 00:13:17.893242 containerd[1934]: time="2026-03-14T00:13:17.890677442Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 14 00:13:17.895408 containerd[1934]: time="2026-03-14T00:13:17.895296278Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 14 00:13:17.912204 containerd[1934]: time="2026-03-14T00:13:17.909980270Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:13:17.912204 containerd[1934]: time="2026-03-14T00:13:17.910472786Z" level=info msg="Start subscribing containerd event" Mar 14 00:13:17.912204 containerd[1934]: time="2026-03-14T00:13:17.910555418Z" level=info msg="Start recovering state" Mar 14 00:13:17.912204 containerd[1934]: 
time="2026-03-14T00:13:17.910722878Z" level=info msg="Start event monitor" Mar 14 00:13:17.912204 containerd[1934]: time="2026-03-14T00:13:17.910750670Z" level=info msg="Start snapshots syncer" Mar 14 00:13:17.912204 containerd[1934]: time="2026-03-14T00:13:17.910774682Z" level=info msg="Start cni network conf syncer for default" Mar 14 00:13:17.912204 containerd[1934]: time="2026-03-14T00:13:17.910796498Z" level=info msg="Start streaming server" Mar 14 00:13:17.921352 containerd[1934]: time="2026-03-14T00:13:17.921273242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 14 00:13:17.921481 containerd[1934]: time="2026-03-14T00:13:17.921424058Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 14 00:13:17.921679 systemd[1]: Started containerd.service - containerd container runtime. Mar 14 00:13:17.936617 containerd[1934]: time="2026-03-14T00:13:17.936540578Z" level=info msg="containerd successfully booted in 0.381247s" Mar 14 00:13:17.947776 amazon-ssm-agent[2026]: 2026-03-14 00:13:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:13:18.047395 amazon-ssm-agent[2026]: 2026-03-14 00:13:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:13:18.150194 amazon-ssm-agent[2026]: 2026-03-14 00:13:17 INFO [amazon-ssm-agent] using named pipe channel for IPC Mar 14 00:13:18.247537 amazon-ssm-agent[2026]: 2026-03-14 00:13:17 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Mar 14 00:13:18.349326 amazon-ssm-agent[2026]: 2026-03-14 00:13:17 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Mar 14 00:13:18.449583 amazon-ssm-agent[2026]: 2026-03-14 00:13:17 INFO [amazon-ssm-agent] Starting Core Agent Mar 14 00:13:18.504483 sshd_keygen[1950]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:13:18.550618 amazon-ssm-agent[2026]: 2026-03-14 00:13:17 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Mar 14 00:13:18.599827 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:13:18.619792 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 14 00:13:18.632864 systemd[1]: Started sshd@0-172.31.18.130:22-68.220.241.50:58934.service - OpenSSH per-connection server daemon (68.220.241.50:58934). Mar 14 00:13:18.652301 amazon-ssm-agent[2026]: 2026-03-14 00:13:17 INFO [Registrar] Starting registrar module Mar 14 00:13:18.693677 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:13:18.694115 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:13:18.706825 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:13:18.752633 amazon-ssm-agent[2026]: 2026-03-14 00:13:17 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Mar 14 00:13:18.768260 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:13:18.789043 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:13:18.805841 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 14 00:13:18.809744 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:13:19.059725 tar[1927]: linux-arm64/README.md Mar 14 00:13:19.101690 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 14 00:13:19.227643 sshd[2132]: Accepted publickey for core from 68.220.241.50 port 58934 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:13:19.234511 sshd[2132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:13:19.269257 systemd-logind[1913]: New session 1 of user core. Mar 14 00:13:19.270745 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:13:19.290963 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Mar 14 00:13:19.304048 ntpd[1904]: Listen normally on 6 eth0 [fe80::475:65ff:fe02:9c63%2]:123 Mar 14 00:13:19.308573 ntpd[1904]: 14 Mar 00:13:19 ntpd[1904]: Listen normally on 6 eth0 [fe80::475:65ff:fe02:9c63%2]:123 Mar 14 00:13:19.336587 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 14 00:13:19.361780 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 14 00:13:19.383648 (systemd)[2146]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:13:19.513538 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:13:19.520334 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:13:19.552947 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:13:19.710694 systemd[2146]: Queued start job for default target default.target. Mar 14 00:13:19.721549 systemd[2146]: Created slice app.slice - User Application Slice. Mar 14 00:13:19.721625 systemd[2146]: Reached target paths.target - Paths. Mar 14 00:13:19.721660 systemd[2146]: Reached target timers.target - Timers. Mar 14 00:13:19.726492 systemd[2146]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:13:19.776153 systemd[2146]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:13:19.776501 systemd[2146]: Reached target sockets.target - Sockets. Mar 14 00:13:19.776555 systemd[2146]: Reached target basic.target - Basic System. Mar 14 00:13:19.776647 systemd[2146]: Reached target default.target - Main User Target. Mar 14 00:13:19.776716 systemd[2146]: Startup finished in 360ms. Mar 14 00:13:19.776777 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:13:19.788551 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 14 00:13:19.791990 systemd[1]: Startup finished in 1.310s (kernel) + 9.276s (initrd) + 9.751s (userspace) = 20.338s. Mar 14 00:13:19.867987 amazon-ssm-agent[2026]: 2026-03-14 00:13:19 INFO [EC2Identity] EC2 registration was successful. Mar 14 00:13:19.899220 amazon-ssm-agent[2026]: 2026-03-14 00:13:19 INFO [CredentialRefresher] credentialRefresher has started Mar 14 00:13:19.899220 amazon-ssm-agent[2026]: 2026-03-14 00:13:19 INFO [CredentialRefresher] Starting credentials refresher loop Mar 14 00:13:19.899220 amazon-ssm-agent[2026]: 2026-03-14 00:13:19 INFO EC2RoleProvider Successfully connected with instance profile role credentials Mar 14 00:13:19.968965 amazon-ssm-agent[2026]: 2026-03-14 00:13:19 INFO [CredentialRefresher] Next credential rotation will be in 31.933320002733332 minutes Mar 14 00:13:20.177658 systemd[1]: Started sshd@1-172.31.18.130:22-68.220.241.50:58938.service - OpenSSH per-connection server daemon (68.220.241.50:58938). Mar 14 00:13:20.490455 kubelet[2158]: E0314 00:13:20.490366 2158 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:13:20.495323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:13:20.495826 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:13:20.496878 systemd[1]: kubelet.service: Consumed 1.351s CPU time. Mar 14 00:13:20.680211 sshd[2172]: Accepted publickey for core from 68.220.241.50 port 58938 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:13:20.682459 sshd[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:13:20.692316 systemd-logind[1913]: New session 2 of user core. 
Mar 14 00:13:20.698491 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 14 00:13:20.929476 amazon-ssm-agent[2026]: 2026-03-14 00:13:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Mar 14 00:13:21.030415 amazon-ssm-agent[2026]: 2026-03-14 00:13:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2178) started Mar 14 00:13:21.037506 sshd[2172]: pam_unix(sshd:session): session closed for user core Mar 14 00:13:21.044370 systemd[1]: sshd@1-172.31.18.130:22-68.220.241.50:58938.service: Deactivated successfully. Mar 14 00:13:21.048192 systemd[1]: session-2.scope: Deactivated successfully. Mar 14 00:13:21.053938 systemd-logind[1913]: Session 2 logged out. Waiting for processes to exit. Mar 14 00:13:21.057631 systemd-logind[1913]: Removed session 2. Mar 14 00:13:21.134197 amazon-ssm-agent[2026]: 2026-03-14 00:13:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Mar 14 00:13:21.140911 systemd[1]: Started sshd@2-172.31.18.130:22-68.220.241.50:58950.service - OpenSSH per-connection server daemon (68.220.241.50:58950). Mar 14 00:13:21.649061 sshd[2188]: Accepted publickey for core from 68.220.241.50 port 58950 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:13:21.651863 sshd[2188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:13:21.661315 systemd-logind[1913]: New session 3 of user core. Mar 14 00:13:21.671552 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 14 00:13:21.994902 sshd[2188]: pam_unix(sshd:session): session closed for user core Mar 14 00:13:22.002705 systemd-logind[1913]: Session 3 logged out. Waiting for processes to exit. Mar 14 00:13:22.004285 systemd[1]: sshd@2-172.31.18.130:22-68.220.241.50:58950.service: Deactivated successfully. 
Mar 14 00:13:22.008971 systemd[1]: session-3.scope: Deactivated successfully. Mar 14 00:13:22.010993 systemd-logind[1913]: Removed session 3. Mar 14 00:13:22.108680 systemd[1]: Started sshd@3-172.31.18.130:22-68.220.241.50:54314.service - OpenSSH per-connection server daemon (68.220.241.50:54314). Mar 14 00:13:22.640197 sshd[2198]: Accepted publickey for core from 68.220.241.50 port 54314 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:13:22.643432 sshd[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:13:22.654334 systemd-logind[1913]: New session 4 of user core. Mar 14 00:13:22.665549 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 14 00:13:23.023790 sshd[2198]: pam_unix(sshd:session): session closed for user core Mar 14 00:13:23.031927 systemd[1]: sshd@3-172.31.18.130:22-68.220.241.50:54314.service: Deactivated successfully. Mar 14 00:13:23.036444 systemd[1]: session-4.scope: Deactivated successfully. Mar 14 00:13:23.038222 systemd-logind[1913]: Session 4 logged out. Waiting for processes to exit. Mar 14 00:13:23.041008 systemd-logind[1913]: Removed session 4. Mar 14 00:13:23.125724 systemd[1]: Started sshd@4-172.31.18.130:22-68.220.241.50:54316.service - OpenSSH per-connection server daemon (68.220.241.50:54316). Mar 14 00:13:23.632051 systemd-resolved[1853]: Clock change detected. Flushing caches. Mar 14 00:13:23.996135 sshd[2206]: Accepted publickey for core from 68.220.241.50 port 54316 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:13:23.998881 sshd[2206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:13:24.008864 systemd-logind[1913]: New session 5 of user core. Mar 14 00:13:24.015783 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 14 00:13:24.342156 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 14 00:13:24.343573 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:13:24.360348 sudo[2209]: pam_unix(sudo:session): session closed for user root Mar 14 00:13:24.444849 sshd[2206]: pam_unix(sshd:session): session closed for user core Mar 14 00:13:24.451627 systemd-logind[1913]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:13:24.452212 systemd[1]: sshd@4-172.31.18.130:22-68.220.241.50:54316.service: Deactivated successfully. Mar 14 00:13:24.456406 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:13:24.461500 systemd-logind[1913]: Removed session 5. Mar 14 00:13:24.553058 systemd[1]: Started sshd@5-172.31.18.130:22-68.220.241.50:54332.service - OpenSSH per-connection server daemon (68.220.241.50:54332). Mar 14 00:13:25.087316 sshd[2214]: Accepted publickey for core from 68.220.241.50 port 54332 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:13:25.089530 sshd[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:13:25.100546 systemd-logind[1913]: New session 6 of user core. Mar 14 00:13:25.110800 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 14 00:13:25.386854 sudo[2218]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 14 00:13:25.387663 sudo[2218]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:13:25.394739 sudo[2218]: pam_unix(sudo:session): session closed for user root Mar 14 00:13:25.407624 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 14 00:13:25.409063 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:13:25.441773 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 14 00:13:25.446238 auditctl[2221]: No rules Mar 14 00:13:25.447488 systemd[1]: audit-rules.service: Deactivated successfully. Mar 14 00:13:25.447961 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 14 00:13:25.456179 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:13:25.515565 augenrules[2239]: No rules Mar 14 00:13:25.519208 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:13:25.522217 sudo[2217]: pam_unix(sudo:session): session closed for user root Mar 14 00:13:25.605914 sshd[2214]: pam_unix(sshd:session): session closed for user core Mar 14 00:13:25.614042 systemd[1]: sshd@5-172.31.18.130:22-68.220.241.50:54332.service: Deactivated successfully. Mar 14 00:13:25.619346 systemd[1]: session-6.scope: Deactivated successfully. Mar 14 00:13:25.620932 systemd-logind[1913]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:13:25.623682 systemd-logind[1913]: Removed session 6. Mar 14 00:13:25.701009 systemd[1]: Started sshd@6-172.31.18.130:22-68.220.241.50:54334.service - OpenSSH per-connection server daemon (68.220.241.50:54334). 
Mar 14 00:13:26.191473 sshd[2247]: Accepted publickey for core from 68.220.241.50 port 54334 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:13:26.193468 sshd[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:13:26.202410 systemd-logind[1913]: New session 7 of user core. Mar 14 00:13:26.208734 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 14 00:13:26.469600 sudo[2250]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 14 00:13:26.470387 sudo[2250]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:13:27.118976 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 14 00:13:27.121659 (dockerd)[2266]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 14 00:13:27.668795 dockerd[2266]: time="2026-03-14T00:13:27.668705436Z" level=info msg="Starting up" Mar 14 00:13:27.939576 dockerd[2266]: time="2026-03-14T00:13:27.938658169Z" level=info msg="Loading containers: start." Mar 14 00:13:28.180511 kernel: Initializing XFRM netlink socket Mar 14 00:13:28.239016 (udev-worker)[2289]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:13:28.344557 systemd-networkd[1852]: docker0: Link UP Mar 14 00:13:28.380830 dockerd[2266]: time="2026-03-14T00:13:28.380285711Z" level=info msg="Loading containers: done." Mar 14 00:13:28.408467 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck816698836-merged.mount: Deactivated successfully. 
Mar 14 00:13:28.420162 dockerd[2266]: time="2026-03-14T00:13:28.420077927Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 14 00:13:28.420597 dockerd[2266]: time="2026-03-14T00:13:28.420256763Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 14 00:13:28.420597 dockerd[2266]: time="2026-03-14T00:13:28.420496691Z" level=info msg="Daemon has completed initialization" Mar 14 00:13:28.504837 dockerd[2266]: time="2026-03-14T00:13:28.502344468Z" level=info msg="API listen on /run/docker.sock" Mar 14 00:13:28.504361 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 14 00:13:29.322893 containerd[1934]: time="2026-03-14T00:13:29.322825884Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 14 00:13:30.092938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3859038625.mount: Deactivated successfully. Mar 14 00:13:30.859032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 14 00:13:30.869209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:13:31.585836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:13:31.595984 (kubelet)[2474]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:13:31.694014 kubelet[2474]: E0314 00:13:31.693828 2474 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:13:31.703106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:13:31.704227 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:13:32.151400 containerd[1934]: time="2026-03-14T00:13:32.151330622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:32.154338 containerd[1934]: time="2026-03-14T00:13:32.153777170Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=24701796" Mar 14 00:13:32.154338 containerd[1934]: time="2026-03-14T00:13:32.154270178Z" level=info msg="ImageCreate event name:\"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:32.162756 containerd[1934]: time="2026-03-14T00:13:32.162281138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:32.167205 containerd[1934]: time="2026-03-14T00:13:32.166765058Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo 
digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"24698395\" in 2.843866874s" Mar 14 00:13:32.167205 containerd[1934]: time="2026-03-14T00:13:32.166833830Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:713a7d5fc5ed8383c9ffe550e487150c9818d05f0c4c012688fbb27885fcc7bf\"" Mar 14 00:13:32.168059 containerd[1934]: time="2026-03-14T00:13:32.167990702Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 14 00:13:34.307574 containerd[1934]: time="2026-03-14T00:13:34.307509005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:34.312110 containerd[1934]: time="2026-03-14T00:13:34.312047705Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=19063039" Mar 14 00:13:34.312552 containerd[1934]: time="2026-03-14T00:13:34.312498509Z" level=info msg="ImageCreate event name:\"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:34.321031 containerd[1934]: time="2026-03-14T00:13:34.320969429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:34.325145 containerd[1934]: time="2026-03-14T00:13:34.325080269Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"20675140\" 
in 2.157021239s" Mar 14 00:13:34.325379 containerd[1934]: time="2026-03-14T00:13:34.325344269Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:6137f51959af5f0a4da7fb6c0bd868f615a534c02d42e303ad6fb31345ee4854\"" Mar 14 00:13:34.326466 containerd[1934]: time="2026-03-14T00:13:34.326388581Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 14 00:13:36.083361 containerd[1934]: time="2026-03-14T00:13:36.083290577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:36.085631 containerd[1934]: time="2026-03-14T00:13:36.085558637Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=13797901" Mar 14 00:13:36.087729 containerd[1934]: time="2026-03-14T00:13:36.086677505Z" level=info msg="ImageCreate event name:\"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:36.093247 containerd[1934]: time="2026-03-14T00:13:36.093182321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:36.096066 containerd[1934]: time="2026-03-14T00:13:36.095998265Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"15410020\" in 1.769500892s" Mar 14 00:13:36.096291 containerd[1934]: time="2026-03-14T00:13:36.096256169Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference 
\"sha256:6ad431b09accba3ccc8ac6df4b239aa11c7adf8ee0a477b9f0b54cf9f083f8c6\"" Mar 14 00:13:36.097174 containerd[1934]: time="2026-03-14T00:13:36.097103705Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 14 00:13:37.494572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4055639586.mount: Deactivated successfully. Mar 14 00:13:37.866422 containerd[1934]: time="2026-03-14T00:13:37.866324842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:37.868375 containerd[1934]: time="2026-03-14T00:13:37.868064302Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=22329583" Mar 14 00:13:37.869646 containerd[1934]: time="2026-03-14T00:13:37.869554918Z" level=info msg="ImageCreate event name:\"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:37.873229 containerd[1934]: time="2026-03-14T00:13:37.873144322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:37.875037 containerd[1934]: time="2026-03-14T00:13:37.874809850Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"22328602\" in 1.777638477s" Mar 14 00:13:37.875037 containerd[1934]: time="2026-03-14T00:13:37.874867834Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:df7dcaf93e84e5dfbe96b2f86588b38a8959748d9c84b2e0532e2b5ae1bc5884\"" Mar 14 00:13:37.875954 
containerd[1934]: time="2026-03-14T00:13:37.875776174Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 14 00:13:38.452842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1022247994.mount: Deactivated successfully. Mar 14 00:13:39.895941 containerd[1934]: time="2026-03-14T00:13:39.895882344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:39.898590 containerd[1934]: time="2026-03-14T00:13:39.898525344Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=21172211" Mar 14 00:13:39.899152 containerd[1934]: time="2026-03-14T00:13:39.899113260Z" level=info msg="ImageCreate event name:\"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:39.910959 containerd[1934]: time="2026-03-14T00:13:39.910872948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:39.915047 containerd[1934]: time="2026-03-14T00:13:39.914979816Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"21168808\" in 2.039148886s" Mar 14 00:13:39.915313 containerd[1934]: time="2026-03-14T00:13:39.915278820Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf\"" Mar 14 00:13:39.915997 containerd[1934]: time="2026-03-14T00:13:39.915929652Z" 
level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 14 00:13:40.442849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3317048897.mount: Deactivated successfully. Mar 14 00:13:40.449314 containerd[1934]: time="2026-03-14T00:13:40.449206571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:40.451649 containerd[1934]: time="2026-03-14T00:13:40.451591163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Mar 14 00:13:40.452854 containerd[1934]: time="2026-03-14T00:13:40.452780999Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:40.457466 containerd[1934]: time="2026-03-14T00:13:40.457145267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:40.459215 containerd[1934]: time="2026-03-14T00:13:40.458981111Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 542.861691ms" Mar 14 00:13:40.459215 containerd[1934]: time="2026-03-14T00:13:40.459040667Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Mar 14 00:13:40.460747 containerd[1934]: time="2026-03-14T00:13:40.460691999Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 14 00:13:41.000565 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1667550730.mount: Deactivated successfully. Mar 14 00:13:41.859843 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 14 00:13:41.871649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:13:42.215812 containerd[1934]: time="2026-03-14T00:13:42.212288688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:42.215812 containerd[1934]: time="2026-03-14T00:13:42.215159100Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=21738165" Mar 14 00:13:42.217936 containerd[1934]: time="2026-03-14T00:13:42.217474152Z" level=info msg="ImageCreate event name:\"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:42.229489 containerd[1934]: time="2026-03-14T00:13:42.229386756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:13:42.233991 containerd[1934]: time="2026-03-14T00:13:42.233903376Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"21749640\" in 1.773144693s" Mar 14 00:13:42.233991 containerd[1934]: time="2026-03-14T00:13:42.233979516Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:271e49a0ebc56647476845128fcd2a73bb138beeca3878cc3bf52b4ff1172a57\"" Mar 14 00:13:42.271767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:13:42.276655 (kubelet)[2627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:13:42.373461 kubelet[2627]: E0314 00:13:42.372672 2627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:13:42.378411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:13:42.379854 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:13:46.262219 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:13:46.279200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:13:46.340257 systemd[1]: Reloading requested from client PID 2661 ('systemctl') (unit session-7.scope)... Mar 14 00:13:46.340524 systemd[1]: Reloading... Mar 14 00:13:46.612855 zram_generator::config[2703]: No configuration found. Mar 14 00:13:46.890335 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:13:47.085939 systemd[1]: Reloading finished in 744 ms. Mar 14 00:13:47.195732 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 14 00:13:47.195961 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 14 00:13:47.197615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:13:47.207032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:13:47.595749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:13:47.605301 (kubelet)[2765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:13:47.678667 kubelet[2765]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:13:47.744051 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Mar 14 00:13:48.658092 kubelet[2765]: I0314 00:13:48.657999 2765 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 14 00:13:48.658516 kubelet[2765]: I0314 00:13:48.658291 2765 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:13:48.662749 kubelet[2765]: I0314 00:13:48.661895 2765 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:13:48.662749 kubelet[2765]: I0314 00:13:48.662162 2765 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 14 00:13:48.662749 kubelet[2765]: I0314 00:13:48.662696 2765 server.go:951] "Client rotation is on, will bootstrap in background" Mar 14 00:13:48.673899 kubelet[2765]: I0314 00:13:48.673209 2765 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:13:48.674723 kubelet[2765]: E0314 00:13:48.674669 2765 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.18.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:13:48.680196 kubelet[2765]: E0314 00:13:48.680070 2765 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:13:48.680873 kubelet[2765]: I0314 00:13:48.680254 2765 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 14 00:13:48.686054 kubelet[2765]: I0314 00:13:48.686008 2765 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 14 00:13:48.687980 kubelet[2765]: I0314 00:13:48.687872 2765 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:13:48.688282 kubelet[2765]: I0314 00:13:48.687978 2765 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:13:48.688282 kubelet[2765]: I0314 00:13:48.688280 2765 topology_manager.go:143] "Creating topology manager with none policy" Mar 14 
00:13:48.688584 kubelet[2765]: I0314 00:13:48.688303 2765 container_manager_linux.go:308] "Creating device plugin manager" Mar 14 00:13:48.688584 kubelet[2765]: I0314 00:13:48.688512 2765 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 14 00:13:48.690965 kubelet[2765]: I0314 00:13:48.690889 2765 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 14 00:13:48.691398 kubelet[2765]: I0314 00:13:48.691362 2765 kubelet.go:482] "Attempting to sync node with API server" Mar 14 00:13:48.691530 kubelet[2765]: I0314 00:13:48.691416 2765 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:13:48.691530 kubelet[2765]: I0314 00:13:48.691492 2765 kubelet.go:394] "Adding apiserver pod source" Mar 14 00:13:48.691530 kubelet[2765]: I0314 00:13:48.691515 2765 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:13:48.699520 kubelet[2765]: I0314 00:13:48.698837 2765 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:13:48.701474 kubelet[2765]: I0314 00:13:48.701020 2765 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:13:48.701474 kubelet[2765]: I0314 00:13:48.701101 2765 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 14 00:13:48.701474 kubelet[2765]: W0314 00:13:48.701189 2765 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 14 00:13:48.706537 kubelet[2765]: I0314 00:13:48.706482 2765 server.go:1257] "Started kubelet" Mar 14 00:13:48.710395 kubelet[2765]: I0314 00:13:48.710312 2765 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:13:48.715485 kubelet[2765]: I0314 00:13:48.714640 2765 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:13:48.719058 kubelet[2765]: I0314 00:13:48.718194 2765 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:13:48.719058 kubelet[2765]: I0314 00:13:48.718337 2765 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 14 00:13:48.719058 kubelet[2765]: I0314 00:13:48.718904 2765 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:13:48.722821 kubelet[2765]: E0314 00:13:48.719184 2765 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.130:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.130:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-130.189c8cdbbef22dc0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-130,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-130,},FirstTimestamp:2026-03-14 00:13:48.706401728 +0000 UTC m=+1.093944378,LastTimestamp:2026-03-14 00:13:48.706401728 +0000 UTC m=+1.093944378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-130,}" Mar 14 00:13:48.731262 kubelet[2765]: I0314 00:13:48.731177 2765 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 14 00:13:48.733139 kubelet[2765]: I0314 00:13:48.733064 2765 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:13:48.737946 kubelet[2765]: E0314 00:13:48.737042 2765 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:13:48.737946 kubelet[2765]: E0314 00:13:48.737233 2765 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-18-130\" not found" Mar 14 00:13:48.737946 kubelet[2765]: I0314 00:13:48.737310 2765 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 14 00:13:48.737946 kubelet[2765]: I0314 00:13:48.737925 2765 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 00:13:48.738263 kubelet[2765]: I0314 00:13:48.738057 2765 reconciler.go:29] "Reconciler: start to sync state" Mar 14 00:13:48.739971 kubelet[2765]: I0314 00:13:48.739913 2765 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:13:48.740139 kubelet[2765]: I0314 00:13:48.740111 2765 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:13:48.742701 kubelet[2765]: I0314 00:13:48.742649 2765 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:13:48.747151 kubelet[2765]: E0314 00:13:48.747091 2765 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-130?timeout=10s\": dial tcp 172.31.18.130:6443: connect: connection refused" interval="200ms" Mar 14 00:13:48.777549 kubelet[2765]: I0314 00:13:48.777511 2765 cpu_manager.go:225] "Starting" policy="none" Mar 14 00:13:48.778411 kubelet[2765]: I0314 00:13:48.778377 2765 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 14 
00:13:48.778637 kubelet[2765]: I0314 00:13:48.778600 2765 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 14 00:13:48.782782 kubelet[2765]: I0314 00:13:48.782744 2765 policy_none.go:50] "Start" Mar 14 00:13:48.783346 kubelet[2765]: I0314 00:13:48.782951 2765 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 14 00:13:48.783346 kubelet[2765]: I0314 00:13:48.782982 2765 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 14 00:13:48.785646 kubelet[2765]: I0314 00:13:48.784611 2765 policy_none.go:44] "Start" Mar 14 00:13:48.798336 kubelet[2765]: I0314 00:13:48.798269 2765 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 14 00:13:48.798553 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 14 00:13:48.802254 kubelet[2765]: I0314 00:13:48.802191 2765 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 14 00:13:48.802254 kubelet[2765]: I0314 00:13:48.802262 2765 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 14 00:13:48.802494 kubelet[2765]: I0314 00:13:48.802302 2765 kubelet.go:2501] "Starting kubelet main sync loop" Mar 14 00:13:48.802494 kubelet[2765]: E0314 00:13:48.802401 2765 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:13:48.821149 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 14 00:13:48.829625 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 14 00:13:48.837480 kubelet[2765]: E0314 00:13:48.837405 2765 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ip-172-31-18-130\" not found" Mar 14 00:13:48.840331 kubelet[2765]: E0314 00:13:48.840257 2765 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:13:48.840676 kubelet[2765]: I0314 00:13:48.840630 2765 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 14 00:13:48.840754 kubelet[2765]: I0314 00:13:48.840663 2765 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:13:48.846059 kubelet[2765]: I0314 00:13:48.844251 2765 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 14 00:13:48.848521 kubelet[2765]: E0314 00:13:48.847259 2765 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:13:48.848521 kubelet[2765]: E0314 00:13:48.847933 2765 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-130\" not found" Mar 14 00:13:48.928238 systemd[1]: Created slice kubepods-burstable-pod1f447e88b4d088ddc40850d478b27c7f.slice - libcontainer container kubepods-burstable-pod1f447e88b4d088ddc40850d478b27c7f.slice. 
Mar 14 00:13:48.939353 kubelet[2765]: I0314 00:13:48.938892 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f447e88b4d088ddc40850d478b27c7f-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-130\" (UID: \"1f447e88b4d088ddc40850d478b27c7f\") " pod="kube-system/kube-apiserver-ip-172-31-18-130" Mar 14 00:13:48.939353 kubelet[2765]: I0314 00:13:48.938956 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe098f5658ae420a9be0f0103fb5366f-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"fe098f5658ae420a9be0f0103fb5366f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130" Mar 14 00:13:48.939353 kubelet[2765]: I0314 00:13:48.938990 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe098f5658ae420a9be0f0103fb5366f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"fe098f5658ae420a9be0f0103fb5366f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130" Mar 14 00:13:48.939353 kubelet[2765]: I0314 00:13:48.939043 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe098f5658ae420a9be0f0103fb5366f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"fe098f5658ae420a9be0f0103fb5366f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130" Mar 14 00:13:48.939353 kubelet[2765]: I0314 00:13:48.939080 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe098f5658ae420a9be0f0103fb5366f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"fe098f5658ae420a9be0f0103fb5366f\") " 
pod="kube-system/kube-controller-manager-ip-172-31-18-130" Mar 14 00:13:48.939825 kubelet[2765]: I0314 00:13:48.939114 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fbf409a191539a08d52fca33e6b52517-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-130\" (UID: \"fbf409a191539a08d52fca33e6b52517\") " pod="kube-system/kube-scheduler-ip-172-31-18-130" Mar 14 00:13:48.939825 kubelet[2765]: I0314 00:13:48.939145 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f447e88b4d088ddc40850d478b27c7f-ca-certs\") pod \"kube-apiserver-ip-172-31-18-130\" (UID: \"1f447e88b4d088ddc40850d478b27c7f\") " pod="kube-system/kube-apiserver-ip-172-31-18-130" Mar 14 00:13:48.939825 kubelet[2765]: I0314 00:13:48.939179 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f447e88b4d088ddc40850d478b27c7f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-130\" (UID: \"1f447e88b4d088ddc40850d478b27c7f\") " pod="kube-system/kube-apiserver-ip-172-31-18-130" Mar 14 00:13:48.939825 kubelet[2765]: I0314 00:13:48.939215 2765 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe098f5658ae420a9be0f0103fb5366f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"fe098f5658ae420a9be0f0103fb5366f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130" Mar 14 00:13:48.944627 kubelet[2765]: I0314 00:13:48.943868 2765 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-130" Mar 14 00:13:48.944627 kubelet[2765]: E0314 00:13:48.944572 2765 kubelet_node_status.go:106] "Unable to register node with API 
server" err="Post \"https://172.31.18.130:6443/api/v1/nodes\": dial tcp 172.31.18.130:6443: connect: connection refused" node="ip-172-31-18-130" Mar 14 00:13:48.946621 kubelet[2765]: E0314 00:13:48.946557 2765 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Mar 14 00:13:48.949050 kubelet[2765]: E0314 00:13:48.948593 2765 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-130?timeout=10s\": dial tcp 172.31.18.130:6443: connect: connection refused" interval="400ms" Mar 14 00:13:48.951904 systemd[1]: Created slice kubepods-burstable-podfe098f5658ae420a9be0f0103fb5366f.slice - libcontainer container kubepods-burstable-podfe098f5658ae420a9be0f0103fb5366f.slice. Mar 14 00:13:48.957655 kubelet[2765]: E0314 00:13:48.957590 2765 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Mar 14 00:13:48.964228 systemd[1]: Created slice kubepods-burstable-podfbf409a191539a08d52fca33e6b52517.slice - libcontainer container kubepods-burstable-podfbf409a191539a08d52fca33e6b52517.slice. 
Mar 14 00:13:48.968690 kubelet[2765]: E0314 00:13:48.968633 2765 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Mar 14 00:13:49.147712 kubelet[2765]: I0314 00:13:49.147652 2765 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-130" Mar 14 00:13:49.148257 kubelet[2765]: E0314 00:13:49.148206 2765 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.18.130:6443/api/v1/nodes\": dial tcp 172.31.18.130:6443: connect: connection refused" node="ip-172-31-18-130" Mar 14 00:13:49.251516 containerd[1934]: time="2026-03-14T00:13:49.251335075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-130,Uid:1f447e88b4d088ddc40850d478b27c7f,Namespace:kube-system,Attempt:0,}" Mar 14 00:13:49.261787 containerd[1934]: time="2026-03-14T00:13:49.261359143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-130,Uid:fe098f5658ae420a9be0f0103fb5366f,Namespace:kube-system,Attempt:0,}" Mar 14 00:13:49.272932 containerd[1934]: time="2026-03-14T00:13:49.272545495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-130,Uid:fbf409a191539a08d52fca33e6b52517,Namespace:kube-system,Attempt:0,}" Mar 14 00:13:49.350105 kubelet[2765]: E0314 00:13:49.350005 2765 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-130?timeout=10s\": dial tcp 172.31.18.130:6443: connect: connection refused" interval="800ms" Mar 14 00:13:49.550910 kubelet[2765]: I0314 00:13:49.550733 2765 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-130" Mar 14 00:13:49.552093 kubelet[2765]: E0314 00:13:49.552015 2765 kubelet_node_status.go:106] "Unable to register node with API server" 
err="Post \"https://172.31.18.130:6443/api/v1/nodes\": dial tcp 172.31.18.130:6443: connect: connection refused" node="ip-172-31-18-130" Mar 14 00:13:49.739818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount558383729.mount: Deactivated successfully. Mar 14 00:13:49.749482 containerd[1934]: time="2026-03-14T00:13:49.748083765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:13:49.750157 containerd[1934]: time="2026-03-14T00:13:49.750109149Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:13:49.751332 containerd[1934]: time="2026-03-14T00:13:49.751229481Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:13:49.756347 containerd[1934]: time="2026-03-14T00:13:49.756274329Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Mar 14 00:13:49.763282 containerd[1934]: time="2026-03-14T00:13:49.763211457Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:13:49.766348 containerd[1934]: time="2026-03-14T00:13:49.766290321Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 14 00:13:49.770564 containerd[1934]: time="2026-03-14T00:13:49.770504541Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 508.976954ms" Mar 14 00:13:49.773869 containerd[1934]: time="2026-03-14T00:13:49.773810577Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:13:49.779076 containerd[1934]: time="2026-03-14T00:13:49.779008329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 14 00:13:49.782185 containerd[1934]: time="2026-03-14T00:13:49.782116677Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 509.46365ms" Mar 14 00:13:49.786104 containerd[1934]: time="2026-03-14T00:13:49.785661549Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 534.180662ms" Mar 14 00:13:50.026537 containerd[1934]: time="2026-03-14T00:13:50.026334847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:50.026537 containerd[1934]: time="2026-03-14T00:13:50.026457511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:50.027091 containerd[1934]: time="2026-03-14T00:13:50.026769883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:50.027091 containerd[1934]: time="2026-03-14T00:13:50.027000379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:50.032664 containerd[1934]: time="2026-03-14T00:13:50.032493619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:50.032895 containerd[1934]: time="2026-03-14T00:13:50.032601139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:50.033097 containerd[1934]: time="2026-03-14T00:13:50.033018835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:50.033538 containerd[1934]: time="2026-03-14T00:13:50.033361027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:50.036581 containerd[1934]: time="2026-03-14T00:13:50.035965039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:13:50.036581 containerd[1934]: time="2026-03-14T00:13:50.036108955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:13:50.036581 containerd[1934]: time="2026-03-14T00:13:50.036140899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:50.036581 containerd[1934]: time="2026-03-14T00:13:50.036327535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:13:50.087842 systemd[1]: Started cri-containerd-d76f540614f4bac28ca4c2c7a9a935d885efb5e2453824c919bca03e3d9f6cc6.scope - libcontainer container d76f540614f4bac28ca4c2c7a9a935d885efb5e2453824c919bca03e3d9f6cc6. Mar 14 00:13:50.105792 systemd[1]: Started cri-containerd-bd0f1a0aa9ed3b943f26b72fb6a6d7be6a9828863ddf154b76a914f3d15f2002.scope - libcontainer container bd0f1a0aa9ed3b943f26b72fb6a6d7be6a9828863ddf154b76a914f3d15f2002. Mar 14 00:13:50.109895 systemd[1]: Started cri-containerd-da0d0197b828f2db236bf476e3dbbd4144157123c50cbba5e93be2467d47b896.scope - libcontainer container da0d0197b828f2db236bf476e3dbbd4144157123c50cbba5e93be2467d47b896. Mar 14 00:13:50.151737 kubelet[2765]: E0314 00:13:50.151646 2765 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-130?timeout=10s\": dial tcp 172.31.18.130:6443: connect: connection refused" interval="1.6s" Mar 14 00:13:50.209370 containerd[1934]: time="2026-03-14T00:13:50.209292560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-130,Uid:1f447e88b4d088ddc40850d478b27c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd0f1a0aa9ed3b943f26b72fb6a6d7be6a9828863ddf154b76a914f3d15f2002\"" Mar 14 00:13:50.231238 containerd[1934]: time="2026-03-14T00:13:50.231163688Z" level=info msg="CreateContainer within sandbox \"bd0f1a0aa9ed3b943f26b72fb6a6d7be6a9828863ddf154b76a914f3d15f2002\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:13:50.236331 containerd[1934]: time="2026-03-14T00:13:50.236261372Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-130,Uid:fbf409a191539a08d52fca33e6b52517,Namespace:kube-system,Attempt:0,} returns sandbox id \"d76f540614f4bac28ca4c2c7a9a935d885efb5e2453824c919bca03e3d9f6cc6\"" Mar 14 00:13:50.248056 containerd[1934]: time="2026-03-14T00:13:50.247997420Z" level=info msg="CreateContainer within sandbox \"d76f540614f4bac28ca4c2c7a9a935d885efb5e2453824c919bca03e3d9f6cc6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:13:50.259794 containerd[1934]: time="2026-03-14T00:13:50.259706000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-130,Uid:fe098f5658ae420a9be0f0103fb5366f,Namespace:kube-system,Attempt:0,} returns sandbox id \"da0d0197b828f2db236bf476e3dbbd4144157123c50cbba5e93be2467d47b896\"" Mar 14 00:13:50.267661 containerd[1934]: time="2026-03-14T00:13:50.267530108Z" level=info msg="CreateContainer within sandbox \"da0d0197b828f2db236bf476e3dbbd4144157123c50cbba5e93be2467d47b896\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:13:50.271493 containerd[1934]: time="2026-03-14T00:13:50.271264076Z" level=info msg="CreateContainer within sandbox \"bd0f1a0aa9ed3b943f26b72fb6a6d7be6a9828863ddf154b76a914f3d15f2002\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2dedaa1660074a7bbff5c9763431b1dd5342a83b9fc581545d67648528b1a7e6\"" Mar 14 00:13:50.272680 containerd[1934]: time="2026-03-14T00:13:50.272500172Z" level=info msg="StartContainer for \"2dedaa1660074a7bbff5c9763431b1dd5342a83b9fc581545d67648528b1a7e6\"" Mar 14 00:13:50.275486 containerd[1934]: time="2026-03-14T00:13:50.274530908Z" level=info msg="CreateContainer within sandbox \"d76f540614f4bac28ca4c2c7a9a935d885efb5e2453824c919bca03e3d9f6cc6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f15529d790a54875a9511117797dc6fba044928126632977689333e2667881eb\"" Mar 14 00:13:50.276338 containerd[1934]: 
time="2026-03-14T00:13:50.276275924Z" level=info msg="StartContainer for \"f15529d790a54875a9511117797dc6fba044928126632977689333e2667881eb\"" Mar 14 00:13:50.299738 containerd[1934]: time="2026-03-14T00:13:50.298239404Z" level=info msg="CreateContainer within sandbox \"da0d0197b828f2db236bf476e3dbbd4144157123c50cbba5e93be2467d47b896\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1db24d5b3c9c60b5337de72e17a015a90450d95bdad048e2404ba7f0e2554368\"" Mar 14 00:13:50.301753 containerd[1934]: time="2026-03-14T00:13:50.301685744Z" level=info msg="StartContainer for \"1db24d5b3c9c60b5337de72e17a015a90450d95bdad048e2404ba7f0e2554368\"" Mar 14 00:13:50.334072 systemd[1]: Started cri-containerd-2dedaa1660074a7bbff5c9763431b1dd5342a83b9fc581545d67648528b1a7e6.scope - libcontainer container 2dedaa1660074a7bbff5c9763431b1dd5342a83b9fc581545d67648528b1a7e6. Mar 14 00:13:50.358875 systemd[1]: Started cri-containerd-f15529d790a54875a9511117797dc6fba044928126632977689333e2667881eb.scope - libcontainer container f15529d790a54875a9511117797dc6fba044928126632977689333e2667881eb. Mar 14 00:13:50.359740 kubelet[2765]: I0314 00:13:50.359679 2765 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-130" Mar 14 00:13:50.360771 kubelet[2765]: E0314 00:13:50.360206 2765 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.31.18.130:6443/api/v1/nodes\": dial tcp 172.31.18.130:6443: connect: connection refused" node="ip-172-31-18-130" Mar 14 00:13:50.410744 systemd[1]: Started cri-containerd-1db24d5b3c9c60b5337de72e17a015a90450d95bdad048e2404ba7f0e2554368.scope - libcontainer container 1db24d5b3c9c60b5337de72e17a015a90450d95bdad048e2404ba7f0e2554368. 
Mar 14 00:13:50.465361 containerd[1934]: time="2026-03-14T00:13:50.465175341Z" level=info msg="StartContainer for \"2dedaa1660074a7bbff5c9763431b1dd5342a83b9fc581545d67648528b1a7e6\" returns successfully" Mar 14 00:13:50.543565 containerd[1934]: time="2026-03-14T00:13:50.543003273Z" level=info msg="StartContainer for \"f15529d790a54875a9511117797dc6fba044928126632977689333e2667881eb\" returns successfully" Mar 14 00:13:50.577078 containerd[1934]: time="2026-03-14T00:13:50.576931293Z" level=info msg="StartContainer for \"1db24d5b3c9c60b5337de72e17a015a90450d95bdad048e2404ba7f0e2554368\" returns successfully" Mar 14 00:13:50.842513 kubelet[2765]: E0314 00:13:50.841029 2765 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Mar 14 00:13:50.844316 kubelet[2765]: E0314 00:13:50.844245 2765 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Mar 14 00:13:50.851374 kubelet[2765]: E0314 00:13:50.851302 2765 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Mar 14 00:13:51.853893 kubelet[2765]: E0314 00:13:51.853835 2765 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Mar 14 00:13:51.855478 kubelet[2765]: E0314 00:13:51.854635 2765 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Mar 14 00:13:51.855478 kubelet[2765]: E0314 00:13:51.855379 2765 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Mar 14 
00:13:51.965662 kubelet[2765]: I0314 00:13:51.965218 2765 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-130" Mar 14 00:13:53.426222 kubelet[2765]: E0314 00:13:53.426173 2765 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Mar 14 00:13:53.537783 kubelet[2765]: E0314 00:13:53.537642 2765 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-130.189c8cdbbef22dc0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-130,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-130,},FirstTimestamp:2026-03-14 00:13:48.706401728 +0000 UTC m=+1.093944378,LastTimestamp:2026-03-14 00:13:48.706401728 +0000 UTC m=+1.093944378,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-130,}" Mar 14 00:13:53.592642 kubelet[2765]: I0314 00:13:53.592545 2765 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-18-130" Mar 14 00:13:53.619606 kubelet[2765]: E0314 00:13:53.619390 2765 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-130.189c8cdbc0c532f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-130,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-18-130,},FirstTimestamp:2026-03-14 00:13:48.737008376 +0000 UTC m=+1.124551002,LastTimestamp:2026-03-14 00:13:48.737008376 +0000 UTC 
m=+1.124551002,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-130,}" Mar 14 00:13:53.650472 kubelet[2765]: I0314 00:13:53.647744 2765 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-130" Mar 14 00:13:53.669652 kubelet[2765]: E0314 00:13:53.669233 2765 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-130\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-130" Mar 14 00:13:53.669652 kubelet[2765]: I0314 00:13:53.669283 2765 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-130" Mar 14 00:13:53.684170 kubelet[2765]: E0314 00:13:53.683987 2765 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-130\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-130" Mar 14 00:13:53.684170 kubelet[2765]: I0314 00:13:53.684056 2765 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-130" Mar 14 00:13:53.700749 kubelet[2765]: I0314 00:13:53.700666 2765 apiserver.go:52] "Watching apiserver" Mar 14 00:13:53.710831 kubelet[2765]: E0314 00:13:53.710683 2765 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-130\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-130" Mar 14 00:13:53.738518 kubelet[2765]: I0314 00:13:53.738422 2765 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:13:54.968146 kubelet[2765]: I0314 00:13:54.968015 2765 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-130" Mar 14 
00:13:55.332359 kubelet[2765]: I0314 00:13:55.332041 2765 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-130" Mar 14 00:13:56.112006 systemd[1]: Reloading requested from client PID 3053 ('systemctl') (unit session-7.scope)... Mar 14 00:13:56.112054 systemd[1]: Reloading... Mar 14 00:13:56.335488 zram_generator::config[3099]: No configuration found. Mar 14 00:13:56.601347 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:13:56.844256 systemd[1]: Reloading finished in 731 ms. Mar 14 00:13:56.931911 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:13:56.945094 systemd[1]: kubelet.service: Deactivated successfully. Mar 14 00:13:56.946525 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:13:56.946611 systemd[1]: kubelet.service: Consumed 1.944s CPU time, 123.5M memory peak, 0B memory swap peak. Mar 14 00:13:56.958083 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:13:57.368194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:13:57.393699 (kubelet)[3153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:13:57.502772 kubelet[3153]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 14 00:13:57.529540 kubelet[3153]: I0314 00:13:57.528392 3153 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 14 00:13:57.529540 kubelet[3153]: I0314 00:13:57.528516 3153 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:13:57.529540 kubelet[3153]: I0314 00:13:57.528564 3153 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:13:57.529540 kubelet[3153]: I0314 00:13:57.528578 3153 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 00:13:57.530698 kubelet[3153]: I0314 00:13:57.530383 3153 server.go:951] "Client rotation is on, will bootstrap in background" Mar 14 00:13:57.534446 kubelet[3153]: I0314 00:13:57.534374 3153 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 14 00:13:57.539077 kubelet[3153]: I0314 00:13:57.539013 3153 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:13:57.548664 kubelet[3153]: E0314 00:13:57.548588 3153 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:13:57.548850 kubelet[3153]: I0314 00:13:57.548717 3153 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 14 00:13:57.554985 kubelet[3153]: I0314 00:13:57.554737 3153 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 14 00:13:57.555328 kubelet[3153]: I0314 00:13:57.555274 3153 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:13:57.555828 kubelet[3153]: I0314 00:13:57.555325 3153 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:13:57.556681 kubelet[3153]: I0314 00:13:57.555831 3153 topology_manager.go:143] "Creating topology manager with none policy" Mar 14 
00:13:57.556681 kubelet[3153]: I0314 00:13:57.555856 3153 container_manager_linux.go:308] "Creating device plugin manager" Mar 14 00:13:57.556681 kubelet[3153]: I0314 00:13:57.555897 3153 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 14 00:13:57.556681 kubelet[3153]: I0314 00:13:57.556303 3153 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 14 00:13:57.556681 kubelet[3153]: I0314 00:13:57.556679 3153 kubelet.go:482] "Attempting to sync node with API server" Mar 14 00:13:57.560398 kubelet[3153]: I0314 00:13:57.558089 3153 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:13:57.560398 kubelet[3153]: I0314 00:13:57.558203 3153 kubelet.go:394] "Adding apiserver pod source" Mar 14 00:13:57.560398 kubelet[3153]: I0314 00:13:57.559474 3153 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:13:57.568187 kubelet[3153]: I0314 00:13:57.567073 3153 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:13:57.579097 kubelet[3153]: I0314 00:13:57.578926 3153 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:13:57.579270 kubelet[3153]: I0314 00:13:57.579222 3153 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 14 00:13:57.613900 kubelet[3153]: I0314 00:13:57.610337 3153 server.go:1257] "Started kubelet" Mar 14 00:13:57.620334 kubelet[3153]: I0314 00:13:57.619173 3153 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:13:57.620334 kubelet[3153]: I0314 00:13:57.619300 3153 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 14 00:13:57.624740 kubelet[3153]: I0314 00:13:57.623625 
3153 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:13:57.628159 kubelet[3153]: I0314 00:13:57.628079 3153 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:13:57.634466 kubelet[3153]: I0314 00:13:57.633482 3153 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:13:57.636210 kubelet[3153]: I0314 00:13:57.636175 3153 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 14 00:13:57.640751 kubelet[3153]: I0314 00:13:57.639982 3153 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:13:57.650955 kubelet[3153]: I0314 00:13:57.650924 3153 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 14 00:13:57.653277 kubelet[3153]: I0314 00:13:57.653221 3153 reconciler.go:29] "Reconciler: start to sync state" Mar 14 00:13:57.653696 kubelet[3153]: I0314 00:13:57.653615 3153 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 00:13:57.667092 kubelet[3153]: I0314 00:13:57.666666 3153 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:13:57.667092 kubelet[3153]: I0314 00:13:57.666947 3153 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:13:57.676221 kubelet[3153]: E0314 00:13:57.675259 3153 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:13:57.679743 kubelet[3153]: I0314 00:13:57.679663 3153 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:13:57.712816 kubelet[3153]: I0314 00:13:57.712047 3153 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 14 00:13:57.719248 kubelet[3153]: I0314 00:13:57.717282 3153 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 14 00:13:57.719248 kubelet[3153]: I0314 00:13:57.717337 3153 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 14 00:13:57.719248 kubelet[3153]: I0314 00:13:57.717373 3153 kubelet.go:2501] "Starting kubelet main sync loop" Mar 14 00:13:57.719248 kubelet[3153]: E0314 00:13:57.717492 3153 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:13:57.822398 kubelet[3153]: E0314 00:13:57.822348 3153 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 14 00:13:57.836869 kubelet[3153]: I0314 00:13:57.836799 3153 cpu_manager.go:225] "Starting" policy="none" Mar 14 00:13:57.836869 kubelet[3153]: I0314 00:13:57.836834 3153 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 14 00:13:57.836869 kubelet[3153]: I0314 00:13:57.836872 3153 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 14 00:13:57.837242 kubelet[3153]: I0314 00:13:57.837109 3153 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Mar 14 00:13:57.837242 kubelet[3153]: I0314 00:13:57.837144 3153 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Mar 14 00:13:57.837242 kubelet[3153]: I0314 00:13:57.837194 3153 policy_none.go:50] "Start" Mar 14 00:13:57.837242 kubelet[3153]: I0314 00:13:57.837210 3153 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 14 00:13:57.837242 kubelet[3153]: I0314 00:13:57.837230 3153 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 14 00:13:57.838395 kubelet[3153]: I0314 
00:13:57.837414 3153 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 14 00:13:57.838947 kubelet[3153]: I0314 00:13:57.838900 3153 policy_none.go:44] "Start" Mar 14 00:13:57.855075 kubelet[3153]: E0314 00:13:57.855024 3153 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:13:57.855390 kubelet[3153]: I0314 00:13:57.855353 3153 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 14 00:13:57.855517 kubelet[3153]: I0314 00:13:57.855386 3153 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:13:57.858663 kubelet[3153]: E0314 00:13:57.858590 3153 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:13:57.861170 kubelet[3153]: I0314 00:13:57.861024 3153 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 14 00:13:57.971709 kubelet[3153]: I0314 00:13:57.971551 3153 kubelet_node_status.go:74] "Attempting to register node" node="ip-172-31-18-130" Mar 14 00:13:57.990599 kubelet[3153]: I0314 00:13:57.990535 3153 kubelet_node_status.go:123] "Node was previously registered" node="ip-172-31-18-130" Mar 14 00:13:57.990766 kubelet[3153]: I0314 00:13:57.990666 3153 kubelet_node_status.go:77] "Successfully registered node" node="ip-172-31-18-130" Mar 14 00:13:58.026166 kubelet[3153]: I0314 00:13:58.024193 3153 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-130" Mar 14 00:13:58.026166 kubelet[3153]: I0314 00:13:58.024370 3153 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-130" Mar 14 00:13:58.026166 kubelet[3153]: I0314 00:13:58.024977 3153 kubelet.go:3340] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ip-172-31-18-130" Mar 14 00:13:58.040045 kubelet[3153]: E0314 00:13:58.039977 3153 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-130\" already exists" pod="kube-system/kube-scheduler-ip-172-31-18-130" Mar 14 00:13:58.044571 kubelet[3153]: E0314 00:13:58.044512 3153 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-130\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-130" Mar 14 00:13:58.155782 kubelet[3153]: I0314 00:13:58.155650 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f447e88b4d088ddc40850d478b27c7f-ca-certs\") pod \"kube-apiserver-ip-172-31-18-130\" (UID: \"1f447e88b4d088ddc40850d478b27c7f\") " pod="kube-system/kube-apiserver-ip-172-31-18-130" Mar 14 00:13:58.155967 kubelet[3153]: I0314 00:13:58.155853 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f447e88b4d088ddc40850d478b27c7f-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-130\" (UID: \"1f447e88b4d088ddc40850d478b27c7f\") " pod="kube-system/kube-apiserver-ip-172-31-18-130" Mar 14 00:13:58.156041 kubelet[3153]: I0314 00:13:58.155972 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe098f5658ae420a9be0f0103fb5366f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"fe098f5658ae420a9be0f0103fb5366f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130" Mar 14 00:13:58.156103 kubelet[3153]: I0314 00:13:58.156048 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/fe098f5658ae420a9be0f0103fb5366f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"fe098f5658ae420a9be0f0103fb5366f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130" Mar 14 00:13:58.156163 kubelet[3153]: I0314 00:13:58.156133 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe098f5658ae420a9be0f0103fb5366f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"fe098f5658ae420a9be0f0103fb5366f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130" Mar 14 00:13:58.156890 kubelet[3153]: I0314 00:13:58.156172 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fbf409a191539a08d52fca33e6b52517-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-130\" (UID: \"fbf409a191539a08d52fca33e6b52517\") " pod="kube-system/kube-scheduler-ip-172-31-18-130" Mar 14 00:13:58.156890 kubelet[3153]: I0314 00:13:58.156305 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f447e88b4d088ddc40850d478b27c7f-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-130\" (UID: \"1f447e88b4d088ddc40850d478b27c7f\") " pod="kube-system/kube-apiserver-ip-172-31-18-130" Mar 14 00:13:58.156890 kubelet[3153]: I0314 00:13:58.156394 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe098f5658ae420a9be0f0103fb5366f-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"fe098f5658ae420a9be0f0103fb5366f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130" Mar 14 00:13:58.156890 kubelet[3153]: I0314 00:13:58.156517 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe098f5658ae420a9be0f0103fb5366f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"fe098f5658ae420a9be0f0103fb5366f\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130" Mar 14 00:13:58.562640 kubelet[3153]: I0314 00:13:58.562272 3153 apiserver.go:52] "Watching apiserver" Mar 14 00:13:58.654551 kubelet[3153]: I0314 00:13:58.654476 3153 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:13:58.758914 kubelet[3153]: I0314 00:13:58.758854 3153 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-130" Mar 14 00:13:58.772073 kubelet[3153]: E0314 00:13:58.772005 3153 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-130\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-130" Mar 14 00:13:58.822107 kubelet[3153]: I0314 00:13:58.821886 3153 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-130" podStartSLOduration=4.821866914 podStartE2EDuration="4.821866914s" podCreationTimestamp="2026-03-14 00:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:58.80409657 +0000 UTC m=+1.398276992" watchObservedRunningTime="2026-03-14 00:13:58.821866914 +0000 UTC m=+1.416047252" Mar 14 00:13:58.856188 kubelet[3153]: I0314 00:13:58.856085 3153 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-130" podStartSLOduration=3.856069314 podStartE2EDuration="3.856069314s" podCreationTimestamp="2026-03-14 00:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:58.822313794 +0000 UTC m=+1.416494156" 
watchObservedRunningTime="2026-03-14 00:13:58.856069314 +0000 UTC m=+1.450249664" Mar 14 00:13:59.011720 kubelet[3153]: I0314 00:13:59.010698 3153 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-130" podStartSLOduration=1.010679967 podStartE2EDuration="1.010679967s" podCreationTimestamp="2026-03-14 00:13:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:13:58.857531994 +0000 UTC m=+1.451712368" watchObservedRunningTime="2026-03-14 00:13:59.010679967 +0000 UTC m=+1.604860305" Mar 14 00:14:01.676492 kubelet[3153]: I0314 00:14:01.676194 3153 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:14:01.677291 containerd[1934]: time="2026-03-14T00:14:01.677116196Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 14 00:14:01.678847 kubelet[3153]: I0314 00:14:01.678494 3153 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:14:01.782654 update_engine[1914]: I20260314 00:14:01.782514 1914 update_attempter.cc:509] Updating boot flags... Mar 14 00:14:01.879597 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3218) Mar 14 00:14:02.222629 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3219) Mar 14 00:14:02.730875 systemd[1]: Created slice kubepods-besteffort-podd21d7442_1b4a_4c06_b2be_73493ebcbd42.slice - libcontainer container kubepods-besteffort-podd21d7442_1b4a_4c06_b2be_73493ebcbd42.slice. 
Mar 14 00:14:02.789461 kubelet[3153]: I0314 00:14:02.789387 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d21d7442-1b4a-4c06-b2be-73493ebcbd42-kube-proxy\") pod \"kube-proxy-wdzbx\" (UID: \"d21d7442-1b4a-4c06-b2be-73493ebcbd42\") " pod="kube-system/kube-proxy-wdzbx" Mar 14 00:14:02.790053 kubelet[3153]: I0314 00:14:02.789478 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d21d7442-1b4a-4c06-b2be-73493ebcbd42-xtables-lock\") pod \"kube-proxy-wdzbx\" (UID: \"d21d7442-1b4a-4c06-b2be-73493ebcbd42\") " pod="kube-system/kube-proxy-wdzbx" Mar 14 00:14:02.790053 kubelet[3153]: I0314 00:14:02.789514 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d21d7442-1b4a-4c06-b2be-73493ebcbd42-lib-modules\") pod \"kube-proxy-wdzbx\" (UID: \"d21d7442-1b4a-4c06-b2be-73493ebcbd42\") " pod="kube-system/kube-proxy-wdzbx" Mar 14 00:14:02.790053 kubelet[3153]: I0314 00:14:02.789552 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmvsc\" (UniqueName: \"kubernetes.io/projected/d21d7442-1b4a-4c06-b2be-73493ebcbd42-kube-api-access-wmvsc\") pod \"kube-proxy-wdzbx\" (UID: \"d21d7442-1b4a-4c06-b2be-73493ebcbd42\") " pod="kube-system/kube-proxy-wdzbx" Mar 14 00:14:02.890046 systemd[1]: Created slice kubepods-besteffort-pod725d0b19_c966_43a4_9e56_5aec24257e7c.slice - libcontainer container kubepods-besteffort-pod725d0b19_c966_43a4_9e56_5aec24257e7c.slice. 
Mar 14 00:14:02.895489 kubelet[3153]: I0314 00:14:02.892834 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl9nj\" (UniqueName: \"kubernetes.io/projected/725d0b19-c966-43a4-9e56-5aec24257e7c-kube-api-access-pl9nj\") pod \"tigera-operator-6cf4cccc57-qcdxv\" (UID: \"725d0b19-c966-43a4-9e56-5aec24257e7c\") " pod="tigera-operator/tigera-operator-6cf4cccc57-qcdxv" Mar 14 00:14:02.895489 kubelet[3153]: I0314 00:14:02.892907 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/725d0b19-c966-43a4-9e56-5aec24257e7c-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-qcdxv\" (UID: \"725d0b19-c966-43a4-9e56-5aec24257e7c\") " pod="tigera-operator/tigera-operator-6cf4cccc57-qcdxv" Mar 14 00:14:03.045957 containerd[1934]: time="2026-03-14T00:14:03.045324535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wdzbx,Uid:d21d7442-1b4a-4c06-b2be-73493ebcbd42,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:03.087635 containerd[1934]: time="2026-03-14T00:14:03.087362551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:03.088805 containerd[1934]: time="2026-03-14T00:14:03.088414567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:03.088805 containerd[1934]: time="2026-03-14T00:14:03.088488715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:03.088805 containerd[1934]: time="2026-03-14T00:14:03.088668271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:03.139834 systemd[1]: Started cri-containerd-08a219ee6112196278eed73962d9a0735840e1899e1e5a44e2600483e885c717.scope - libcontainer container 08a219ee6112196278eed73962d9a0735840e1899e1e5a44e2600483e885c717. Mar 14 00:14:03.195971 containerd[1934]: time="2026-03-14T00:14:03.195861476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wdzbx,Uid:d21d7442-1b4a-4c06-b2be-73493ebcbd42,Namespace:kube-system,Attempt:0,} returns sandbox id \"08a219ee6112196278eed73962d9a0735840e1899e1e5a44e2600483e885c717\"" Mar 14 00:14:03.203965 containerd[1934]: time="2026-03-14T00:14:03.203750732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-qcdxv,Uid:725d0b19-c966-43a4-9e56-5aec24257e7c,Namespace:tigera-operator,Attempt:0,}" Mar 14 00:14:03.209355 containerd[1934]: time="2026-03-14T00:14:03.209290856Z" level=info msg="CreateContainer within sandbox \"08a219ee6112196278eed73962d9a0735840e1899e1e5a44e2600483e885c717\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:14:03.247118 containerd[1934]: time="2026-03-14T00:14:03.246407000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:03.247118 containerd[1934]: time="2026-03-14T00:14:03.246916496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:03.247592 containerd[1934]: time="2026-03-14T00:14:03.246968612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:03.248661 containerd[1934]: time="2026-03-14T00:14:03.247238540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:03.251914 containerd[1934]: time="2026-03-14T00:14:03.251253620Z" level=info msg="CreateContainer within sandbox \"08a219ee6112196278eed73962d9a0735840e1899e1e5a44e2600483e885c717\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"224c9d5f1a6551abca552e2b30ce9f4c1f84e07747952a2afe59117fa58b4a3e\"" Mar 14 00:14:03.254365 containerd[1934]: time="2026-03-14T00:14:03.253124624Z" level=info msg="StartContainer for \"224c9d5f1a6551abca552e2b30ce9f4c1f84e07747952a2afe59117fa58b4a3e\"" Mar 14 00:14:03.288781 systemd[1]: Started cri-containerd-e3e4a5398604fca96ca49dac194f74608ae429625cc8e43b43916769cad21c87.scope - libcontainer container e3e4a5398604fca96ca49dac194f74608ae429625cc8e43b43916769cad21c87. Mar 14 00:14:03.333744 systemd[1]: Started cri-containerd-224c9d5f1a6551abca552e2b30ce9f4c1f84e07747952a2afe59117fa58b4a3e.scope - libcontainer container 224c9d5f1a6551abca552e2b30ce9f4c1f84e07747952a2afe59117fa58b4a3e. Mar 14 00:14:03.402773 containerd[1934]: time="2026-03-14T00:14:03.402679521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-qcdxv,Uid:725d0b19-c966-43a4-9e56-5aec24257e7c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e3e4a5398604fca96ca49dac194f74608ae429625cc8e43b43916769cad21c87\"" Mar 14 00:14:03.408197 containerd[1934]: time="2026-03-14T00:14:03.407866989Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 14 00:14:03.417777 containerd[1934]: time="2026-03-14T00:14:03.417711237Z" level=info msg="StartContainer for \"224c9d5f1a6551abca552e2b30ce9f4c1f84e07747952a2afe59117fa58b4a3e\" returns successfully" Mar 14 00:14:04.950123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1984767737.mount: Deactivated successfully. 
Mar 14 00:14:05.606150 kubelet[3153]: I0314 00:14:05.606034 3153 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-wdzbx" podStartSLOduration=3.605998956 podStartE2EDuration="3.605998956s" podCreationTimestamp="2026-03-14 00:14:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:03.793743191 +0000 UTC m=+6.387923553" watchObservedRunningTime="2026-03-14 00:14:05.605998956 +0000 UTC m=+8.200179282" Mar 14 00:14:06.191727 containerd[1934]: time="2026-03-14T00:14:06.190413023Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:06.193971 containerd[1934]: time="2026-03-14T00:14:06.193912067Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=25071565" Mar 14 00:14:06.196975 containerd[1934]: time="2026-03-14T00:14:06.196890911Z" level=info msg="ImageCreate event name:\"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:06.201367 containerd[1934]: time="2026-03-14T00:14:06.201306875Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:06.203381 containerd[1934]: time="2026-03-14T00:14:06.203294243Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"25067560\" in 2.795337914s" Mar 14 00:14:06.203722 containerd[1934]: time="2026-03-14T00:14:06.203496023Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:b2fef69c2456aa0a6f6dcb63425a69d11dc35a73b1883b250e4d92f5a697fefe\"" Mar 14 00:14:06.212660 containerd[1934]: time="2026-03-14T00:14:06.212581799Z" level=info msg="CreateContainer within sandbox \"e3e4a5398604fca96ca49dac194f74608ae429625cc8e43b43916769cad21c87\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 14 00:14:06.230425 containerd[1934]: time="2026-03-14T00:14:06.230245931Z" level=info msg="CreateContainer within sandbox \"e3e4a5398604fca96ca49dac194f74608ae429625cc8e43b43916769cad21c87\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"369ba1cafe629032a6a3eee129d60a65fa8e6a8fb9baca78dbd1b5fe1ed61fc7\"" Mar 14 00:14:06.233494 containerd[1934]: time="2026-03-14T00:14:06.232709951Z" level=info msg="StartContainer for \"369ba1cafe629032a6a3eee129d60a65fa8e6a8fb9baca78dbd1b5fe1ed61fc7\"" Mar 14 00:14:06.292951 systemd[1]: run-containerd-runc-k8s.io-369ba1cafe629032a6a3eee129d60a65fa8e6a8fb9baca78dbd1b5fe1ed61fc7-runc.9rBtRp.mount: Deactivated successfully. Mar 14 00:14:06.303782 systemd[1]: Started cri-containerd-369ba1cafe629032a6a3eee129d60a65fa8e6a8fb9baca78dbd1b5fe1ed61fc7.scope - libcontainer container 369ba1cafe629032a6a3eee129d60a65fa8e6a8fb9baca78dbd1b5fe1ed61fc7. 
Mar 14 00:14:06.357645 containerd[1934]: time="2026-03-14T00:14:06.357585972Z" level=info msg="StartContainer for \"369ba1cafe629032a6a3eee129d60a65fa8e6a8fb9baca78dbd1b5fe1ed61fc7\" returns successfully" Mar 14 00:14:10.819742 kubelet[3153]: I0314 00:14:10.819607 3153 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-qcdxv" podStartSLOduration=6.020333976 podStartE2EDuration="8.819584562s" podCreationTimestamp="2026-03-14 00:14:02 +0000 UTC" firstStartedPulling="2026-03-14 00:14:03.406868817 +0000 UTC m=+6.001049155" lastFinishedPulling="2026-03-14 00:14:06.206119415 +0000 UTC m=+8.800299741" observedRunningTime="2026-03-14 00:14:06.807848054 +0000 UTC m=+9.402028416" watchObservedRunningTime="2026-03-14 00:14:10.819584562 +0000 UTC m=+13.413764900" Mar 14 00:14:15.248235 sudo[2250]: pam_unix(sudo:session): session closed for user root Mar 14 00:14:15.328752 sshd[2247]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:15.338641 systemd[1]: sshd@6-172.31.18.130:22-68.220.241.50:54334.service: Deactivated successfully. Mar 14 00:14:15.347852 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:14:15.349466 systemd[1]: session-7.scope: Consumed 7.949s CPU time, 150.1M memory peak, 0B memory swap peak. Mar 14 00:14:15.352033 systemd-logind[1913]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:14:15.356539 systemd-logind[1913]: Removed session 7. Mar 14 00:14:24.440259 systemd[1]: Created slice kubepods-besteffort-pod28fb7e0c_1325_4b34_9cf3_3473cd58aa26.slice - libcontainer container kubepods-besteffort-pod28fb7e0c_1325_4b34_9cf3_3473cd58aa26.slice. 
Mar 14 00:14:24.445501 kubelet[3153]: I0314 00:14:24.443223 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26rz5\" (UniqueName: \"kubernetes.io/projected/28fb7e0c-1325-4b34-9cf3-3473cd58aa26-kube-api-access-26rz5\") pod \"calico-typha-668c7897c-vx2qr\" (UID: \"28fb7e0c-1325-4b34-9cf3-3473cd58aa26\") " pod="calico-system/calico-typha-668c7897c-vx2qr" Mar 14 00:14:24.445501 kubelet[3153]: I0314 00:14:24.443294 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/28fb7e0c-1325-4b34-9cf3-3473cd58aa26-typha-certs\") pod \"calico-typha-668c7897c-vx2qr\" (UID: \"28fb7e0c-1325-4b34-9cf3-3473cd58aa26\") " pod="calico-system/calico-typha-668c7897c-vx2qr" Mar 14 00:14:24.445501 kubelet[3153]: I0314 00:14:24.443336 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28fb7e0c-1325-4b34-9cf3-3473cd58aa26-tigera-ca-bundle\") pod \"calico-typha-668c7897c-vx2qr\" (UID: \"28fb7e0c-1325-4b34-9cf3-3473cd58aa26\") " pod="calico-system/calico-typha-668c7897c-vx2qr" Mar 14 00:14:24.648454 systemd[1]: Created slice kubepods-besteffort-pode6790d12_b16c_4f68_a683_05b839549199.slice - libcontainer container kubepods-besteffort-pode6790d12_b16c_4f68_a683_05b839549199.slice. 
Mar 14 00:14:24.746828 kubelet[3153]: I0314 00:14:24.745803 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e6790d12-b16c-4f68-a683-05b839549199-var-lib-calico\") pod \"calico-node-snzgj\" (UID: \"e6790d12-b16c-4f68-a683-05b839549199\") " pod="calico-system/calico-node-snzgj" Mar 14 00:14:24.747338 kubelet[3153]: E0314 00:14:24.746243 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45kq2" podUID="3bae5412-2e2d-4fc4-8221-ace1b28b2f13" Mar 14 00:14:24.747338 kubelet[3153]: I0314 00:14:24.747126 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t85qt\" (UniqueName: \"kubernetes.io/projected/e6790d12-b16c-4f68-a683-05b839549199-kube-api-access-t85qt\") pod \"calico-node-snzgj\" (UID: \"e6790d12-b16c-4f68-a683-05b839549199\") " pod="calico-system/calico-node-snzgj" Mar 14 00:14:24.747338 kubelet[3153]: I0314 00:14:24.747252 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/e6790d12-b16c-4f68-a683-05b839549199-bpffs\") pod \"calico-node-snzgj\" (UID: \"e6790d12-b16c-4f68-a683-05b839549199\") " pod="calico-system/calico-node-snzgj" Mar 14 00:14:24.749220 kubelet[3153]: I0314 00:14:24.747630 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/e6790d12-b16c-4f68-a683-05b839549199-nodeproc\") pod \"calico-node-snzgj\" (UID: \"e6790d12-b16c-4f68-a683-05b839549199\") " pod="calico-system/calico-node-snzgj" Mar 14 00:14:24.749220 kubelet[3153]: I0314 00:14:24.748518 3153 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e6790d12-b16c-4f68-a683-05b839549199-policysync\") pod \"calico-node-snzgj\" (UID: \"e6790d12-b16c-4f68-a683-05b839549199\") " pod="calico-system/calico-node-snzgj" Mar 14 00:14:24.749220 kubelet[3153]: I0314 00:14:24.748588 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e6790d12-b16c-4f68-a683-05b839549199-node-certs\") pod \"calico-node-snzgj\" (UID: \"e6790d12-b16c-4f68-a683-05b839549199\") " pod="calico-system/calico-node-snzgj" Mar 14 00:14:24.749220 kubelet[3153]: I0314 00:14:24.748623 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e6790d12-b16c-4f68-a683-05b839549199-flexvol-driver-host\") pod \"calico-node-snzgj\" (UID: \"e6790d12-b16c-4f68-a683-05b839549199\") " pod="calico-system/calico-node-snzgj" Mar 14 00:14:24.749220 kubelet[3153]: I0314 00:14:24.748668 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e6790d12-b16c-4f68-a683-05b839549199-cni-net-dir\") pod \"calico-node-snzgj\" (UID: \"e6790d12-b16c-4f68-a683-05b839549199\") " pod="calico-system/calico-node-snzgj" Mar 14 00:14:24.749639 kubelet[3153]: I0314 00:14:24.748719 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6790d12-b16c-4f68-a683-05b839549199-tigera-ca-bundle\") pod \"calico-node-snzgj\" (UID: \"e6790d12-b16c-4f68-a683-05b839549199\") " pod="calico-system/calico-node-snzgj" Mar 14 00:14:24.749639 kubelet[3153]: I0314 00:14:24.748758 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/e6790d12-b16c-4f68-a683-05b839549199-sys-fs\") pod \"calico-node-snzgj\" (UID: \"e6790d12-b16c-4f68-a683-05b839549199\") " pod="calico-system/calico-node-snzgj" Mar 14 00:14:24.749639 kubelet[3153]: I0314 00:14:24.748792 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6790d12-b16c-4f68-a683-05b839549199-xtables-lock\") pod \"calico-node-snzgj\" (UID: \"e6790d12-b16c-4f68-a683-05b839549199\") " pod="calico-system/calico-node-snzgj" Mar 14 00:14:24.749639 kubelet[3153]: I0314 00:14:24.748827 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e6790d12-b16c-4f68-a683-05b839549199-cni-bin-dir\") pod \"calico-node-snzgj\" (UID: \"e6790d12-b16c-4f68-a683-05b839549199\") " pod="calico-system/calico-node-snzgj" Mar 14 00:14:24.749639 kubelet[3153]: I0314 00:14:24.748864 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e6790d12-b16c-4f68-a683-05b839549199-cni-log-dir\") pod \"calico-node-snzgj\" (UID: \"e6790d12-b16c-4f68-a683-05b839549199\") " pod="calico-system/calico-node-snzgj" Mar 14 00:14:24.749913 kubelet[3153]: I0314 00:14:24.748901 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e6790d12-b16c-4f68-a683-05b839549199-var-run-calico\") pod \"calico-node-snzgj\" (UID: \"e6790d12-b16c-4f68-a683-05b839549199\") " pod="calico-system/calico-node-snzgj" Mar 14 00:14:24.749913 kubelet[3153]: I0314 00:14:24.748941 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/e6790d12-b16c-4f68-a683-05b839549199-lib-modules\") pod \"calico-node-snzgj\" (UID: \"e6790d12-b16c-4f68-a683-05b839549199\") " pod="calico-system/calico-node-snzgj" Mar 14 00:14:24.754007 containerd[1934]: time="2026-03-14T00:14:24.753378223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-668c7897c-vx2qr,Uid:28fb7e0c-1325-4b34-9cf3-3473cd58aa26,Namespace:calico-system,Attempt:0,}" Mar 14 00:14:24.824593 containerd[1934]: time="2026-03-14T00:14:24.821945599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:24.824593 containerd[1934]: time="2026-03-14T00:14:24.823274227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:24.824593 containerd[1934]: time="2026-03-14T00:14:24.823391599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:24.825119 containerd[1934]: time="2026-03-14T00:14:24.823620811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:24.849867 kubelet[3153]: I0314 00:14:24.849481 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwpzq\" (UniqueName: \"kubernetes.io/projected/3bae5412-2e2d-4fc4-8221-ace1b28b2f13-kube-api-access-xwpzq\") pod \"csi-node-driver-45kq2\" (UID: \"3bae5412-2e2d-4fc4-8221-ace1b28b2f13\") " pod="calico-system/csi-node-driver-45kq2" Mar 14 00:14:24.852559 kubelet[3153]: I0314 00:14:24.852477 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3bae5412-2e2d-4fc4-8221-ace1b28b2f13-varrun\") pod \"csi-node-driver-45kq2\" (UID: \"3bae5412-2e2d-4fc4-8221-ace1b28b2f13\") " pod="calico-system/csi-node-driver-45kq2" Mar 14 00:14:24.852939 kubelet[3153]: I0314 00:14:24.852845 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3bae5412-2e2d-4fc4-8221-ace1b28b2f13-registration-dir\") pod \"csi-node-driver-45kq2\" (UID: \"3bae5412-2e2d-4fc4-8221-ace1b28b2f13\") " pod="calico-system/csi-node-driver-45kq2" Mar 14 00:14:24.853359 kubelet[3153]: I0314 00:14:24.853040 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3bae5412-2e2d-4fc4-8221-ace1b28b2f13-socket-dir\") pod \"csi-node-driver-45kq2\" (UID: \"3bae5412-2e2d-4fc4-8221-ace1b28b2f13\") " pod="calico-system/csi-node-driver-45kq2" Mar 14 00:14:24.854945 kubelet[3153]: I0314 00:14:24.854850 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bae5412-2e2d-4fc4-8221-ace1b28b2f13-kubelet-dir\") pod \"csi-node-driver-45kq2\" (UID: \"3bae5412-2e2d-4fc4-8221-ace1b28b2f13\") " 
pod="calico-system/csi-node-driver-45kq2" Mar 14 00:14:24.876738 kubelet[3153]: E0314 00:14:24.875759 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.876738 kubelet[3153]: W0314 00:14:24.875806 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.876738 kubelet[3153]: E0314 00:14:24.875861 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:24.919817 systemd[1]: Started cri-containerd-30a85e63007a626154ce98b0bd4c9b379f9d08bfd20b387bddc959e06dc74c9d.scope - libcontainer container 30a85e63007a626154ce98b0bd4c9b379f9d08bfd20b387bddc959e06dc74c9d. Mar 14 00:14:24.943459 kubelet[3153]: E0314 00:14:24.943267 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.943459 kubelet[3153]: W0314 00:14:24.943301 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.944006 kubelet[3153]: E0314 00:14:24.943933 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:24.957520 kubelet[3153]: E0314 00:14:24.957208 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.957520 kubelet[3153]: W0314 00:14:24.957344 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.957520 kubelet[3153]: E0314 00:14:24.957386 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:24.959099 kubelet[3153]: E0314 00:14:24.959044 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.960008 kubelet[3153]: W0314 00:14:24.959106 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.960008 kubelet[3153]: E0314 00:14:24.959144 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:24.961419 kubelet[3153]: E0314 00:14:24.961369 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.961675 kubelet[3153]: W0314 00:14:24.961444 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.961675 kubelet[3153]: E0314 00:14:24.961484 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:24.963405 containerd[1934]: time="2026-03-14T00:14:24.963316520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-snzgj,Uid:e6790d12-b16c-4f68-a683-05b839549199,Namespace:calico-system,Attempt:0,}" Mar 14 00:14:24.963602 kubelet[3153]: E0314 00:14:24.963161 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.964129 kubelet[3153]: W0314 00:14:24.963590 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.964129 kubelet[3153]: E0314 00:14:24.963950 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:24.965545 kubelet[3153]: E0314 00:14:24.964891 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.965545 kubelet[3153]: W0314 00:14:24.964950 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.965545 kubelet[3153]: E0314 00:14:24.964996 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:24.966628 kubelet[3153]: E0314 00:14:24.966545 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.966628 kubelet[3153]: W0314 00:14:24.966579 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.967039 kubelet[3153]: E0314 00:14:24.966733 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:24.968742 kubelet[3153]: E0314 00:14:24.968199 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.968742 kubelet[3153]: W0314 00:14:24.968261 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.968742 kubelet[3153]: E0314 00:14:24.968327 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:24.970366 kubelet[3153]: E0314 00:14:24.969879 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.970366 kubelet[3153]: W0314 00:14:24.970097 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.970366 kubelet[3153]: E0314 00:14:24.970133 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:24.971798 kubelet[3153]: E0314 00:14:24.971370 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.971798 kubelet[3153]: W0314 00:14:24.971419 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.971798 kubelet[3153]: E0314 00:14:24.971503 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:24.973208 kubelet[3153]: E0314 00:14:24.972951 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.973208 kubelet[3153]: W0314 00:14:24.972984 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.973208 kubelet[3153]: E0314 00:14:24.973038 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:24.974378 kubelet[3153]: E0314 00:14:24.973537 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.974378 kubelet[3153]: W0314 00:14:24.973561 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.974378 kubelet[3153]: E0314 00:14:24.973591 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:24.974639 kubelet[3153]: E0314 00:14:24.974599 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.974639 kubelet[3153]: W0314 00:14:24.974623 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.975294 kubelet[3153]: E0314 00:14:24.974653 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:24.975294 kubelet[3153]: E0314 00:14:24.975085 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.975294 kubelet[3153]: W0314 00:14:24.975105 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.975294 kubelet[3153]: E0314 00:14:24.975128 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:24.976656 kubelet[3153]: E0314 00:14:24.976550 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.976656 kubelet[3153]: W0314 00:14:24.976597 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.976656 kubelet[3153]: E0314 00:14:24.976634 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:24.977713 kubelet[3153]: E0314 00:14:24.977079 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.977713 kubelet[3153]: W0314 00:14:24.977101 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.977713 kubelet[3153]: E0314 00:14:24.977144 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:24.978053 kubelet[3153]: E0314 00:14:24.977716 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.978053 kubelet[3153]: W0314 00:14:24.977737 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.978053 kubelet[3153]: E0314 00:14:24.977763 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:24.978799 kubelet[3153]: E0314 00:14:24.978762 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.978799 kubelet[3153]: W0314 00:14:24.978796 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.979353 kubelet[3153]: E0314 00:14:24.978831 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:24.979353 kubelet[3153]: E0314 00:14:24.979191 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.979353 kubelet[3153]: W0314 00:14:24.979210 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.979353 kubelet[3153]: E0314 00:14:24.979232 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:24.980862 kubelet[3153]: E0314 00:14:24.980179 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.980862 kubelet[3153]: W0314 00:14:24.980215 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.980862 kubelet[3153]: E0314 00:14:24.980249 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:24.980862 kubelet[3153]: E0314 00:14:24.980790 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.980862 kubelet[3153]: W0314 00:14:24.980813 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.980862 kubelet[3153]: E0314 00:14:24.980840 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:24.982690 kubelet[3153]: E0314 00:14:24.981284 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.982690 kubelet[3153]: W0314 00:14:24.981304 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.982690 kubelet[3153]: E0314 00:14:24.981328 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:24.982690 kubelet[3153]: E0314 00:14:24.981733 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.982690 kubelet[3153]: W0314 00:14:24.981793 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.982690 kubelet[3153]: E0314 00:14:24.981819 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:24.982690 kubelet[3153]: E0314 00:14:24.982421 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.982690 kubelet[3153]: W0314 00:14:24.982527 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.982690 kubelet[3153]: E0314 00:14:24.982590 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:24.988478 kubelet[3153]: E0314 00:14:24.984890 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.988478 kubelet[3153]: W0314 00:14:24.984930 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.988478 kubelet[3153]: E0314 00:14:24.984966 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:24.988478 kubelet[3153]: E0314 00:14:24.986634 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:24.988478 kubelet[3153]: W0314 00:14:24.986659 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:24.988478 kubelet[3153]: E0314 00:14:24.986691 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:25.026518 kubelet[3153]: E0314 00:14:25.024200 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:25.026518 kubelet[3153]: W0314 00:14:25.024240 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:25.026518 kubelet[3153]: E0314 00:14:25.024297 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:25.051710 containerd[1934]: time="2026-03-14T00:14:25.051088205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:25.051710 containerd[1934]: time="2026-03-14T00:14:25.051487325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:25.051710 containerd[1934]: time="2026-03-14T00:14:25.051535793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:25.054104 containerd[1934]: time="2026-03-14T00:14:25.053114069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:25.074183 containerd[1934]: time="2026-03-14T00:14:25.073742681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-668c7897c-vx2qr,Uid:28fb7e0c-1325-4b34-9cf3-3473cd58aa26,Namespace:calico-system,Attempt:0,} returns sandbox id \"30a85e63007a626154ce98b0bd4c9b379f9d08bfd20b387bddc959e06dc74c9d\"" Mar 14 00:14:25.078693 containerd[1934]: time="2026-03-14T00:14:25.078516233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 14 00:14:25.107748 systemd[1]: Started cri-containerd-8ac064384c3142cdc85a305f35e929afee07505bfb521881e4c7116515926b95.scope - libcontainer container 8ac064384c3142cdc85a305f35e929afee07505bfb521881e4c7116515926b95. Mar 14 00:14:25.165403 containerd[1934]: time="2026-03-14T00:14:25.165339785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-snzgj,Uid:e6790d12-b16c-4f68-a683-05b839549199,Namespace:calico-system,Attempt:0,} returns sandbox id \"8ac064384c3142cdc85a305f35e929afee07505bfb521881e4c7116515926b95\"" Mar 14 00:14:26.292117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2673082426.mount: Deactivated successfully. 
Mar 14 00:14:26.718526 kubelet[3153]: E0314 00:14:26.718470 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45kq2" podUID="3bae5412-2e2d-4fc4-8221-ace1b28b2f13" Mar 14 00:14:27.197947 containerd[1934]: time="2026-03-14T00:14:27.197881075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:27.199482 containerd[1934]: time="2026-03-14T00:14:27.199407943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=33865174" Mar 14 00:14:27.200402 containerd[1934]: time="2026-03-14T00:14:27.200306443Z" level=info msg="ImageCreate event name:\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:27.206778 containerd[1934]: time="2026-03-14T00:14:27.206121007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:27.207908 containerd[1934]: time="2026-03-14T00:14:27.207830611Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"33865028\" in 2.129249074s" Mar 14 00:14:27.208040 containerd[1934]: time="2026-03-14T00:14:27.207906019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:e836e1dea560d4c477b347f1c93c245aec618361306b23eda1d6bb7665476182\"" Mar 14 00:14:27.213464 containerd[1934]: time="2026-03-14T00:14:27.213364447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 14 00:14:27.243386 containerd[1934]: time="2026-03-14T00:14:27.243310495Z" level=info msg="CreateContainer within sandbox \"30a85e63007a626154ce98b0bd4c9b379f9d08bfd20b387bddc959e06dc74c9d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 14 00:14:27.270018 containerd[1934]: time="2026-03-14T00:14:27.269610788Z" level=info msg="CreateContainer within sandbox \"30a85e63007a626154ce98b0bd4c9b379f9d08bfd20b387bddc959e06dc74c9d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9f65e2c630fa36d90a94a8ea3cd5408fc3c05410783a36daf0da491f6c966e5d\"" Mar 14 00:14:27.271557 containerd[1934]: time="2026-03-14T00:14:27.271480256Z" level=info msg="StartContainer for \"9f65e2c630fa36d90a94a8ea3cd5408fc3c05410783a36daf0da491f6c966e5d\"" Mar 14 00:14:27.332512 systemd[1]: Started cri-containerd-9f65e2c630fa36d90a94a8ea3cd5408fc3c05410783a36daf0da491f6c966e5d.scope - libcontainer container 9f65e2c630fa36d90a94a8ea3cd5408fc3c05410783a36daf0da491f6c966e5d. 
Mar 14 00:14:27.438016 containerd[1934]: time="2026-03-14T00:14:27.437945696Z" level=info msg="StartContainer for \"9f65e2c630fa36d90a94a8ea3cd5408fc3c05410783a36daf0da491f6c966e5d\" returns successfully" Mar 14 00:14:27.952715 kubelet[3153]: E0314 00:14:27.952580 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.953536 kubelet[3153]: W0314 00:14:27.952703 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.953536 kubelet[3153]: E0314 00:14:27.952835 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:27.953849 kubelet[3153]: E0314 00:14:27.953807 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.953937 kubelet[3153]: W0314 00:14:27.953843 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.953937 kubelet[3153]: E0314 00:14:27.953878 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:27.954418 kubelet[3153]: E0314 00:14:27.954308 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.954418 kubelet[3153]: W0314 00:14:27.954360 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.954418 kubelet[3153]: E0314 00:14:27.954386 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:27.954988 kubelet[3153]: E0314 00:14:27.954948 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.954988 kubelet[3153]: W0314 00:14:27.954981 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.955159 kubelet[3153]: E0314 00:14:27.955013 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:27.956114 kubelet[3153]: E0314 00:14:27.956065 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.956282 kubelet[3153]: W0314 00:14:27.956124 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.956282 kubelet[3153]: E0314 00:14:27.956165 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:27.956936 kubelet[3153]: E0314 00:14:27.956879 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.956936 kubelet[3153]: W0314 00:14:27.956927 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.957399 kubelet[3153]: E0314 00:14:27.956960 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:27.957814 kubelet[3153]: E0314 00:14:27.957761 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.957906 kubelet[3153]: W0314 00:14:27.957836 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.957906 kubelet[3153]: E0314 00:14:27.957894 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:27.961485 kubelet[3153]: E0314 00:14:27.960417 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.961636 kubelet[3153]: W0314 00:14:27.961482 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.961636 kubelet[3153]: E0314 00:14:27.961524 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:27.962043 kubelet[3153]: E0314 00:14:27.962001 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.962043 kubelet[3153]: W0314 00:14:27.962031 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.962548 kubelet[3153]: E0314 00:14:27.962057 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:27.962801 kubelet[3153]: E0314 00:14:27.962762 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.962869 kubelet[3153]: W0314 00:14:27.962795 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.962869 kubelet[3153]: E0314 00:14:27.962828 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:27.963248 kubelet[3153]: E0314 00:14:27.963200 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.963248 kubelet[3153]: W0314 00:14:27.963233 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.963585 kubelet[3153]: E0314 00:14:27.963257 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:27.964238 kubelet[3153]: E0314 00:14:27.964191 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.964238 kubelet[3153]: W0314 00:14:27.964227 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.964762 kubelet[3153]: E0314 00:14:27.964282 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:27.966427 kubelet[3153]: E0314 00:14:27.966373 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.966427 kubelet[3153]: W0314 00:14:27.966412 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.967106 kubelet[3153]: E0314 00:14:27.966471 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:27.967394 kubelet[3153]: E0314 00:14:27.967345 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.967394 kubelet[3153]: W0314 00:14:27.967381 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.969498 kubelet[3153]: E0314 00:14:27.967415 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:27.969498 kubelet[3153]: E0314 00:14:27.967847 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.969498 kubelet[3153]: W0314 00:14:27.967869 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.969498 kubelet[3153]: E0314 00:14:27.967896 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:27.996719 kubelet[3153]: E0314 00:14:27.996677 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.997100 kubelet[3153]: W0314 00:14:27.996892 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.997100 kubelet[3153]: E0314 00:14:27.996936 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:27.997759 kubelet[3153]: E0314 00:14:27.997697 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:27.998145 kubelet[3153]: W0314 00:14:27.997995 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:27.998145 kubelet[3153]: E0314 00:14:27.998036 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:28.000654 kubelet[3153]: E0314 00:14:28.000599 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:28.000654 kubelet[3153]: W0314 00:14:28.000642 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:28.000973 kubelet[3153]: E0314 00:14:28.000680 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:28.003138 kubelet[3153]: E0314 00:14:28.003085 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:28.003138 kubelet[3153]: W0314 00:14:28.003126 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:28.003639 kubelet[3153]: E0314 00:14:28.003164 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:28.003875 kubelet[3153]: E0314 00:14:28.003837 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:28.003959 kubelet[3153]: W0314 00:14:28.003871 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:28.003959 kubelet[3153]: E0314 00:14:28.003900 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:28.005638 kubelet[3153]: E0314 00:14:28.005584 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:28.005638 kubelet[3153]: W0314 00:14:28.005626 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:28.005946 kubelet[3153]: E0314 00:14:28.005665 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:28.006597 kubelet[3153]: E0314 00:14:28.006537 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:28.006597 kubelet[3153]: W0314 00:14:28.006586 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:28.008828 kubelet[3153]: E0314 00:14:28.006621 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:28.008828 kubelet[3153]: E0314 00:14:28.007910 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:28.008828 kubelet[3153]: W0314 00:14:28.007985 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:28.008828 kubelet[3153]: E0314 00:14:28.008025 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:28.010004 kubelet[3153]: E0314 00:14:28.009212 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:28.010004 kubelet[3153]: W0314 00:14:28.009237 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:28.010004 kubelet[3153]: E0314 00:14:28.009269 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:28.011244 kubelet[3153]: E0314 00:14:28.010158 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:28.011244 kubelet[3153]: W0314 00:14:28.010183 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:28.011244 kubelet[3153]: E0314 00:14:28.010214 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:28.011704 kubelet[3153]: E0314 00:14:28.011673 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:28.011895 kubelet[3153]: W0314 00:14:28.011848 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:28.012024 kubelet[3153]: E0314 00:14:28.011999 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:28.012708 kubelet[3153]: E0314 00:14:28.012583 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:28.014553 kubelet[3153]: W0314 00:14:28.014495 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:28.014767 kubelet[3153]: E0314 00:14:28.014739 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:28.015637 kubelet[3153]: E0314 00:14:28.015605 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:28.015852 kubelet[3153]: W0314 00:14:28.015822 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:28.015973 kubelet[3153]: E0314 00:14:28.015949 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:28.016615 kubelet[3153]: E0314 00:14:28.016584 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:28.017114 kubelet[3153]: W0314 00:14:28.016841 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:28.017114 kubelet[3153]: E0314 00:14:28.016884 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:28.020622 kubelet[3153]: E0314 00:14:28.020582 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:28.021288 kubelet[3153]: W0314 00:14:28.020793 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:28.021288 kubelet[3153]: E0314 00:14:28.020837 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:28.022795 kubelet[3153]: E0314 00:14:28.022756 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:28.022999 kubelet[3153]: W0314 00:14:28.022967 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:28.023162 kubelet[3153]: E0314 00:14:28.023092 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:28.024407 kubelet[3153]: E0314 00:14:28.023810 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:28.024407 kubelet[3153]: W0314 00:14:28.023841 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:28.024407 kubelet[3153]: E0314 00:14:28.023874 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 14 00:14:28.028652 kubelet[3153]: E0314 00:14:28.028613 3153 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 14 00:14:28.028828 kubelet[3153]: W0314 00:14:28.028799 3153 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 14 00:14:28.028973 kubelet[3153]: E0314 00:14:28.028948 3153 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 14 00:14:28.518376 containerd[1934]: time="2026-03-14T00:14:28.518298238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:28.520116 containerd[1934]: time="2026-03-14T00:14:28.520041178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4457682" Mar 14 00:14:28.521143 containerd[1934]: time="2026-03-14T00:14:28.520557310Z" level=info msg="ImageCreate event name:\"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:28.525784 containerd[1934]: time="2026-03-14T00:14:28.524927242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:28.526812 containerd[1934]: time="2026-03-14T00:14:28.526739026Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"5855167\" in 1.313298883s" Mar 14 00:14:28.526812 containerd[1934]: time="2026-03-14T00:14:28.526811482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:449a6463eaa02e13b190ef7c4057191febcc65ab9418bae3bc0995f5bce65798\"" Mar 14 00:14:28.535796 containerd[1934]: time="2026-03-14T00:14:28.535733938Z" level=info msg="CreateContainer within sandbox \"8ac064384c3142cdc85a305f35e929afee07505bfb521881e4c7116515926b95\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 14 00:14:28.560539 containerd[1934]: time="2026-03-14T00:14:28.560459518Z" level=info msg="CreateContainer within sandbox \"8ac064384c3142cdc85a305f35e929afee07505bfb521881e4c7116515926b95\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e1bf5a28a0e7ada307fa18c20ea6233dc6fb78f3f9edef5a95cefa373bec764d\"" Mar 14 00:14:28.565466 containerd[1934]: time="2026-03-14T00:14:28.565100122Z" level=info msg="StartContainer for \"e1bf5a28a0e7ada307fa18c20ea6233dc6fb78f3f9edef5a95cefa373bec764d\"" Mar 14 00:14:28.646744 systemd[1]: Started cri-containerd-e1bf5a28a0e7ada307fa18c20ea6233dc6fb78f3f9edef5a95cefa373bec764d.scope - libcontainer container e1bf5a28a0e7ada307fa18c20ea6233dc6fb78f3f9edef5a95cefa373bec764d. 
Mar 14 00:14:28.702841 containerd[1934]: time="2026-03-14T00:14:28.702710339Z" level=info msg="StartContainer for \"e1bf5a28a0e7ada307fa18c20ea6233dc6fb78f3f9edef5a95cefa373bec764d\" returns successfully" Mar 14 00:14:28.718645 kubelet[3153]: E0314 00:14:28.718473 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45kq2" podUID="3bae5412-2e2d-4fc4-8221-ace1b28b2f13" Mar 14 00:14:28.742764 systemd[1]: cri-containerd-e1bf5a28a0e7ada307fa18c20ea6233dc6fb78f3f9edef5a95cefa373bec764d.scope: Deactivated successfully. Mar 14 00:14:28.911459 kubelet[3153]: I0314 00:14:28.909407 3153 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:14:28.958478 kubelet[3153]: I0314 00:14:28.957546 3153 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-668c7897c-vx2qr" podStartSLOduration=2.824806102 podStartE2EDuration="4.957521964s" podCreationTimestamp="2026-03-14 00:14:24 +0000 UTC" firstStartedPulling="2026-03-14 00:14:25.077661881 +0000 UTC m=+27.671842207" lastFinishedPulling="2026-03-14 00:14:27.210377707 +0000 UTC m=+29.804558069" observedRunningTime="2026-03-14 00:14:28.063478268 +0000 UTC m=+30.657658642" watchObservedRunningTime="2026-03-14 00:14:28.957521964 +0000 UTC m=+31.551702398" Mar 14 00:14:29.230091 containerd[1934]: time="2026-03-14T00:14:29.229911789Z" level=info msg="shim disconnected" id=e1bf5a28a0e7ada307fa18c20ea6233dc6fb78f3f9edef5a95cefa373bec764d namespace=k8s.io Mar 14 00:14:29.230091 containerd[1934]: time="2026-03-14T00:14:29.229990593Z" level=warning msg="cleaning up after shim disconnected" id=e1bf5a28a0e7ada307fa18c20ea6233dc6fb78f3f9edef5a95cefa373bec764d namespace=k8s.io Mar 14 00:14:29.230091 containerd[1934]: time="2026-03-14T00:14:29.230013189Z" 
level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:14:29.254237 containerd[1934]: time="2026-03-14T00:14:29.254167785Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:14:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:14:29.552489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1bf5a28a0e7ada307fa18c20ea6233dc6fb78f3f9edef5a95cefa373bec764d-rootfs.mount: Deactivated successfully. Mar 14 00:14:29.918764 containerd[1934]: time="2026-03-14T00:14:29.918598477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 14 00:14:30.718476 kubelet[3153]: E0314 00:14:30.718389 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45kq2" podUID="3bae5412-2e2d-4fc4-8221-ace1b28b2f13" Mar 14 00:14:32.719584 kubelet[3153]: E0314 00:14:32.718873 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45kq2" podUID="3bae5412-2e2d-4fc4-8221-ace1b28b2f13" Mar 14 00:14:34.718500 kubelet[3153]: E0314 00:14:34.717789 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45kq2" podUID="3bae5412-2e2d-4fc4-8221-ace1b28b2f13" Mar 14 00:14:36.304635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1932280969.mount: Deactivated successfully. 
Mar 14 00:14:36.364028 containerd[1934]: time="2026-03-14T00:14:36.362903405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:36.364983 containerd[1934]: time="2026-03-14T00:14:36.364139549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=153921674" Mar 14 00:14:36.365359 containerd[1934]: time="2026-03-14T00:14:36.365305385Z" level=info msg="ImageCreate event name:\"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:36.370578 containerd[1934]: time="2026-03-14T00:14:36.370507349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:36.372756 containerd[1934]: time="2026-03-14T00:14:36.372681833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"153921536\" in 6.45396752s" Mar 14 00:14:36.372756 containerd[1934]: time="2026-03-14T00:14:36.372746609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:27be54f2b9e47d96c7e9e5ad16e26ec298c1829f31885c81a622d50472c8ac97\"" Mar 14 00:14:36.380735 containerd[1934]: time="2026-03-14T00:14:36.380659253Z" level=info msg="CreateContainer within sandbox \"8ac064384c3142cdc85a305f35e929afee07505bfb521881e4c7116515926b95\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 14 00:14:36.404831 containerd[1934]: time="2026-03-14T00:14:36.404696501Z" level=info 
msg="CreateContainer within sandbox \"8ac064384c3142cdc85a305f35e929afee07505bfb521881e4c7116515926b95\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"5fe7880779131f87a9ee3faca496163657760ed158640a883c906309cbe24992\"" Mar 14 00:14:36.405894 containerd[1934]: time="2026-03-14T00:14:36.405721037Z" level=info msg="StartContainer for \"5fe7880779131f87a9ee3faca496163657760ed158640a883c906309cbe24992\"" Mar 14 00:14:36.471765 systemd[1]: Started cri-containerd-5fe7880779131f87a9ee3faca496163657760ed158640a883c906309cbe24992.scope - libcontainer container 5fe7880779131f87a9ee3faca496163657760ed158640a883c906309cbe24992. Mar 14 00:14:36.528838 containerd[1934]: time="2026-03-14T00:14:36.528674670Z" level=info msg="StartContainer for \"5fe7880779131f87a9ee3faca496163657760ed158640a883c906309cbe24992\" returns successfully" Mar 14 00:14:36.717875 systemd[1]: cri-containerd-5fe7880779131f87a9ee3faca496163657760ed158640a883c906309cbe24992.scope: Deactivated successfully. 
Mar 14 00:14:36.722359 kubelet[3153]: E0314 00:14:36.721808 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45kq2" podUID="3bae5412-2e2d-4fc4-8221-ace1b28b2f13" Mar 14 00:14:37.290467 containerd[1934]: time="2026-03-14T00:14:37.290014445Z" level=info msg="shim disconnected" id=5fe7880779131f87a9ee3faca496163657760ed158640a883c906309cbe24992 namespace=k8s.io Mar 14 00:14:37.290467 containerd[1934]: time="2026-03-14T00:14:37.290169761Z" level=warning msg="cleaning up after shim disconnected" id=5fe7880779131f87a9ee3faca496163657760ed158640a883c906309cbe24992 namespace=k8s.io Mar 14 00:14:37.290467 containerd[1934]: time="2026-03-14T00:14:37.290190689Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:14:37.310359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fe7880779131f87a9ee3faca496163657760ed158640a883c906309cbe24992-rootfs.mount: Deactivated successfully. 
Mar 14 00:14:37.955683 containerd[1934]: time="2026-03-14T00:14:37.955327917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 14 00:14:38.718294 kubelet[3153]: E0314 00:14:38.718212 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45kq2" podUID="3bae5412-2e2d-4fc4-8221-ace1b28b2f13" Mar 14 00:14:40.718248 kubelet[3153]: E0314 00:14:40.718176 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45kq2" podUID="3bae5412-2e2d-4fc4-8221-ace1b28b2f13" Mar 14 00:14:40.773819 containerd[1934]: time="2026-03-14T00:14:40.772134995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:40.773819 containerd[1934]: time="2026-03-14T00:14:40.773749331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=66009216" Mar 14 00:14:40.774664 containerd[1934]: time="2026-03-14T00:14:40.774595439Z" level=info msg="ImageCreate event name:\"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:40.778841 containerd[1934]: time="2026-03-14T00:14:40.778765883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:40.780847 containerd[1934]: time="2026-03-14T00:14:40.780794255Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" 
with image id \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"67406741\" in 2.825407154s" Mar 14 00:14:40.781028 containerd[1934]: time="2026-03-14T00:14:40.780997439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c10bed152367fad8c19e9400f12b748d6fbc20498086983df13e70e36f24511b\"" Mar 14 00:14:40.789926 containerd[1934]: time="2026-03-14T00:14:40.789681107Z" level=info msg="CreateContainer within sandbox \"8ac064384c3142cdc85a305f35e929afee07505bfb521881e4c7116515926b95\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 14 00:14:40.811387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2961092678.mount: Deactivated successfully. Mar 14 00:14:40.815206 containerd[1934]: time="2026-03-14T00:14:40.815140571Z" level=info msg="CreateContainer within sandbox \"8ac064384c3142cdc85a305f35e929afee07505bfb521881e4c7116515926b95\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1350f477d2d0908477060d973fecfc5ed9f2624d7a3636ed4669894c4fec6233\"" Mar 14 00:14:40.818400 containerd[1934]: time="2026-03-14T00:14:40.818344223Z" level=info msg="StartContainer for \"1350f477d2d0908477060d973fecfc5ed9f2624d7a3636ed4669894c4fec6233\"" Mar 14 00:14:40.887819 systemd[1]: Started cri-containerd-1350f477d2d0908477060d973fecfc5ed9f2624d7a3636ed4669894c4fec6233.scope - libcontainer container 1350f477d2d0908477060d973fecfc5ed9f2624d7a3636ed4669894c4fec6233. 
Mar 14 00:14:40.943698 containerd[1934]: time="2026-03-14T00:14:40.943638660Z" level=info msg="StartContainer for \"1350f477d2d0908477060d973fecfc5ed9f2624d7a3636ed4669894c4fec6233\" returns successfully" Mar 14 00:14:42.719300 kubelet[3153]: E0314 00:14:42.718663 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-45kq2" podUID="3bae5412-2e2d-4fc4-8221-ace1b28b2f13" Mar 14 00:14:42.803106 containerd[1934]: time="2026-03-14T00:14:42.803031097Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 14 00:14:42.808520 systemd[1]: cri-containerd-1350f477d2d0908477060d973fecfc5ed9f2624d7a3636ed4669894c4fec6233.scope: Deactivated successfully. Mar 14 00:14:42.808941 systemd[1]: cri-containerd-1350f477d2d0908477060d973fecfc5ed9f2624d7a3636ed4669894c4fec6233.scope: Consumed 1.021s CPU time. Mar 14 00:14:42.828465 kubelet[3153]: I0314 00:14:42.827762 3153 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 14 00:14:42.913328 systemd[1]: Created slice kubepods-burstable-podbc321aa2_bc1d_4b5f_9fdf_44eb597d2609.slice - libcontainer container kubepods-burstable-podbc321aa2_bc1d_4b5f_9fdf_44eb597d2609.slice. Mar 14 00:14:42.946053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1350f477d2d0908477060d973fecfc5ed9f2624d7a3636ed4669894c4fec6233-rootfs.mount: Deactivated successfully. 
Mar 14 00:14:42.965054 containerd[1934]: time="2026-03-14T00:14:42.964957586Z" level=info msg="shim disconnected" id=1350f477d2d0908477060d973fecfc5ed9f2624d7a3636ed4669894c4fec6233 namespace=k8s.io Mar 14 00:14:42.967636 containerd[1934]: time="2026-03-14T00:14:42.965253746Z" level=warning msg="cleaning up after shim disconnected" id=1350f477d2d0908477060d973fecfc5ed9f2624d7a3636ed4669894c4fec6233 namespace=k8s.io Mar 14 00:14:42.967636 containerd[1934]: time="2026-03-14T00:14:42.965282546Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:14:42.985581 systemd[1]: Created slice kubepods-besteffort-pod460ebb16_a0e6_4f2b_b440_dc924ce7846b.slice - libcontainer container kubepods-besteffort-pod460ebb16_a0e6_4f2b_b440_dc924ce7846b.slice. Mar 14 00:14:43.017690 systemd[1]: Created slice kubepods-besteffort-pod89705421_5074_4ee1_8f8a_b24f0bbde701.slice - libcontainer container kubepods-besteffort-pod89705421_5074_4ee1_8f8a_b24f0bbde701.slice. Mar 14 00:14:43.028943 kubelet[3153]: I0314 00:14:43.028873 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjb7f\" (UniqueName: \"kubernetes.io/projected/d4e6f04d-05da-499d-9d04-d1cc54b42952-kube-api-access-pjb7f\") pod \"coredns-7d764666f9-v8blg\" (UID: \"d4e6f04d-05da-499d-9d04-d1cc54b42952\") " pod="kube-system/coredns-7d764666f9-v8blg" Mar 14 00:14:43.029840 kubelet[3153]: I0314 00:14:43.029773 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9dac252b-d2f0-4f51-8a32-3e7e6f3e4118-calico-apiserver-certs\") pod \"calico-apiserver-696c5cfc7f-ng7xc\" (UID: \"9dac252b-d2f0-4f51-8a32-3e7e6f3e4118\") " pod="calico-system/calico-apiserver-696c5cfc7f-ng7xc" Mar 14 00:14:43.031183 kubelet[3153]: I0314 00:14:43.031115 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/89705421-5074-4ee1-8f8a-b24f0bbde701-calico-apiserver-certs\") pod \"calico-apiserver-696c5cfc7f-hzr5w\" (UID: \"89705421-5074-4ee1-8f8a-b24f0bbde701\") " pod="calico-system/calico-apiserver-696c5cfc7f-hzr5w" Mar 14 00:14:43.031485 kubelet[3153]: I0314 00:14:43.031401 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4e6f04d-05da-499d-9d04-d1cc54b42952-config-volume\") pod \"coredns-7d764666f9-v8blg\" (UID: \"d4e6f04d-05da-499d-9d04-d1cc54b42952\") " pod="kube-system/coredns-7d764666f9-v8blg" Mar 14 00:14:43.032198 kubelet[3153]: I0314 00:14:43.031583 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9prdd\" (UniqueName: \"kubernetes.io/projected/b2f14ff7-37b0-4a9b-9308-67b07dd8dd39-kube-api-access-9prdd\") pod \"calico-kube-controllers-646cdb9884-pj2xg\" (UID: \"b2f14ff7-37b0-4a9b-9308-67b07dd8dd39\") " pod="calico-system/calico-kube-controllers-646cdb9884-pj2xg" Mar 14 00:14:43.032198 kubelet[3153]: I0314 00:14:43.032079 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/460ebb16-a0e6-4f2b-b440-dc924ce7846b-whisker-backend-key-pair\") pod \"whisker-5fd54db7bb-9gfrr\" (UID: \"460ebb16-a0e6-4f2b-b440-dc924ce7846b\") " pod="calico-system/whisker-5fd54db7bb-9gfrr" Mar 14 00:14:43.033404 kubelet[3153]: I0314 00:14:43.032132 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxf8l\" (UniqueName: \"kubernetes.io/projected/460ebb16-a0e6-4f2b-b440-dc924ce7846b-kube-api-access-vxf8l\") pod \"whisker-5fd54db7bb-9gfrr\" (UID: \"460ebb16-a0e6-4f2b-b440-dc924ce7846b\") " pod="calico-system/whisker-5fd54db7bb-9gfrr" Mar 14 00:14:43.033404 kubelet[3153]: I0314 00:14:43.032675 3153 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c45km\" (UniqueName: \"kubernetes.io/projected/9dac252b-d2f0-4f51-8a32-3e7e6f3e4118-kube-api-access-c45km\") pod \"calico-apiserver-696c5cfc7f-ng7xc\" (UID: \"9dac252b-d2f0-4f51-8a32-3e7e6f3e4118\") " pod="calico-system/calico-apiserver-696c5cfc7f-ng7xc" Mar 14 00:14:43.033404 kubelet[3153]: I0314 00:14:43.032721 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grk2q\" (UniqueName: \"kubernetes.io/projected/89705421-5074-4ee1-8f8a-b24f0bbde701-kube-api-access-grk2q\") pod \"calico-apiserver-696c5cfc7f-hzr5w\" (UID: \"89705421-5074-4ee1-8f8a-b24f0bbde701\") " pod="calico-system/calico-apiserver-696c5cfc7f-hzr5w" Mar 14 00:14:43.033404 kubelet[3153]: I0314 00:14:43.032763 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8686320-fc9c-4a3f-a2c7-dfa460638fd7-config\") pod \"goldmane-9f7667bb8-nbdj4\" (UID: \"c8686320-fc9c-4a3f-a2c7-dfa460638fd7\") " pod="calico-system/goldmane-9f7667bb8-nbdj4" Mar 14 00:14:43.033404 kubelet[3153]: I0314 00:14:43.032815 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vh7w\" (UniqueName: \"kubernetes.io/projected/c8686320-fc9c-4a3f-a2c7-dfa460638fd7-kube-api-access-7vh7w\") pod \"goldmane-9f7667bb8-nbdj4\" (UID: \"c8686320-fc9c-4a3f-a2c7-dfa460638fd7\") " pod="calico-system/goldmane-9f7667bb8-nbdj4" Mar 14 00:14:43.034347 kubelet[3153]: I0314 00:14:43.032861 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8686320-fc9c-4a3f-a2c7-dfa460638fd7-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-nbdj4\" (UID: \"c8686320-fc9c-4a3f-a2c7-dfa460638fd7\") " 
pod="calico-system/goldmane-9f7667bb8-nbdj4" Mar 14 00:14:43.034347 kubelet[3153]: I0314 00:14:43.032898 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c8686320-fc9c-4a3f-a2c7-dfa460638fd7-goldmane-key-pair\") pod \"goldmane-9f7667bb8-nbdj4\" (UID: \"c8686320-fc9c-4a3f-a2c7-dfa460638fd7\") " pod="calico-system/goldmane-9f7667bb8-nbdj4" Mar 14 00:14:43.034347 kubelet[3153]: I0314 00:14:43.032937 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2f14ff7-37b0-4a9b-9308-67b07dd8dd39-tigera-ca-bundle\") pod \"calico-kube-controllers-646cdb9884-pj2xg\" (UID: \"b2f14ff7-37b0-4a9b-9308-67b07dd8dd39\") " pod="calico-system/calico-kube-controllers-646cdb9884-pj2xg" Mar 14 00:14:43.037088 kubelet[3153]: I0314 00:14:43.034785 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/460ebb16-a0e6-4f2b-b440-dc924ce7846b-nginx-config\") pod \"whisker-5fd54db7bb-9gfrr\" (UID: \"460ebb16-a0e6-4f2b-b440-dc924ce7846b\") " pod="calico-system/whisker-5fd54db7bb-9gfrr" Mar 14 00:14:43.037088 kubelet[3153]: I0314 00:14:43.034848 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc321aa2-bc1d-4b5f-9fdf-44eb597d2609-config-volume\") pod \"coredns-7d764666f9-fzzfp\" (UID: \"bc321aa2-bc1d-4b5f-9fdf-44eb597d2609\") " pod="kube-system/coredns-7d764666f9-fzzfp" Mar 14 00:14:43.040233 kubelet[3153]: I0314 00:14:43.040165 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8wjz\" (UniqueName: \"kubernetes.io/projected/bc321aa2-bc1d-4b5f-9fdf-44eb597d2609-kube-api-access-p8wjz\") pod \"coredns-7d764666f9-fzzfp\" 
(UID: \"bc321aa2-bc1d-4b5f-9fdf-44eb597d2609\") " pod="kube-system/coredns-7d764666f9-fzzfp" Mar 14 00:14:43.040577 kubelet[3153]: I0314 00:14:43.040524 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/460ebb16-a0e6-4f2b-b440-dc924ce7846b-whisker-ca-bundle\") pod \"whisker-5fd54db7bb-9gfrr\" (UID: \"460ebb16-a0e6-4f2b-b440-dc924ce7846b\") " pod="calico-system/whisker-5fd54db7bb-9gfrr" Mar 14 00:14:43.052829 systemd[1]: Created slice kubepods-besteffort-pod9dac252b_d2f0_4f51_8a32_3e7e6f3e4118.slice - libcontainer container kubepods-besteffort-pod9dac252b_d2f0_4f51_8a32_3e7e6f3e4118.slice. Mar 14 00:14:43.065808 containerd[1934]: time="2026-03-14T00:14:43.064399774Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:14:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 14 00:14:43.094727 systemd[1]: Created slice kubepods-burstable-podd4e6f04d_05da_499d_9d04_d1cc54b42952.slice - libcontainer container kubepods-burstable-podd4e6f04d_05da_499d_9d04_d1cc54b42952.slice. Mar 14 00:14:43.116129 systemd[1]: Created slice kubepods-besteffort-podc8686320_fc9c_4a3f_a2c7_dfa460638fd7.slice - libcontainer container kubepods-besteffort-podc8686320_fc9c_4a3f_a2c7_dfa460638fd7.slice. Mar 14 00:14:43.135762 systemd[1]: Created slice kubepods-besteffort-podb2f14ff7_37b0_4a9b_9308_67b07dd8dd39.slice - libcontainer container kubepods-besteffort-podb2f14ff7_37b0_4a9b_9308_67b07dd8dd39.slice. 
Mar 14 00:14:43.315274 containerd[1934]: time="2026-03-14T00:14:43.313413779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fd54db7bb-9gfrr,Uid:460ebb16-a0e6-4f2b-b440-dc924ce7846b,Namespace:calico-system,Attempt:0,}" Mar 14 00:14:43.350643 containerd[1934]: time="2026-03-14T00:14:43.350567891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-696c5cfc7f-hzr5w,Uid:89705421-5074-4ee1-8f8a-b24f0bbde701,Namespace:calico-system,Attempt:0,}" Mar 14 00:14:43.392465 containerd[1934]: time="2026-03-14T00:14:43.392354508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-696c5cfc7f-ng7xc,Uid:9dac252b-d2f0-4f51-8a32-3e7e6f3e4118,Namespace:calico-system,Attempt:0,}" Mar 14 00:14:43.444077 containerd[1934]: time="2026-03-14T00:14:43.443913264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-nbdj4,Uid:c8686320-fc9c-4a3f-a2c7-dfa460638fd7,Namespace:calico-system,Attempt:0,}" Mar 14 00:14:43.445293 containerd[1934]: time="2026-03-14T00:14:43.445239480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-v8blg,Uid:d4e6f04d-05da-499d-9d04-d1cc54b42952,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:43.451819 containerd[1934]: time="2026-03-14T00:14:43.451620948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-646cdb9884-pj2xg,Uid:b2f14ff7-37b0-4a9b-9308-67b07dd8dd39,Namespace:calico-system,Attempt:0,}" Mar 14 00:14:43.528741 containerd[1934]: time="2026-03-14T00:14:43.528664932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fzzfp,Uid:bc321aa2-bc1d-4b5f-9fdf-44eb597d2609,Namespace:kube-system,Attempt:0,}" Mar 14 00:14:44.083398 containerd[1934]: time="2026-03-14T00:14:44.083182571Z" level=info msg="CreateContainer within sandbox \"8ac064384c3142cdc85a305f35e929afee07505bfb521881e4c7116515926b95\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 14 00:14:44.169868 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount730343057.mount: Deactivated successfully. Mar 14 00:14:44.214361 containerd[1934]: time="2026-03-14T00:14:44.214258764Z" level=info msg="CreateContainer within sandbox \"8ac064384c3142cdc85a305f35e929afee07505bfb521881e4c7116515926b95\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ee32b4f0c07dd89dac1639e567bee4f363931d2b14716b42edbf12737f826ad1\"" Mar 14 00:14:44.215677 containerd[1934]: time="2026-03-14T00:14:44.215627988Z" level=info msg="StartContainer for \"ee32b4f0c07dd89dac1639e567bee4f363931d2b14716b42edbf12737f826ad1\"" Mar 14 00:14:44.396795 systemd[1]: Started cri-containerd-ee32b4f0c07dd89dac1639e567bee4f363931d2b14716b42edbf12737f826ad1.scope - libcontainer container ee32b4f0c07dd89dac1639e567bee4f363931d2b14716b42edbf12737f826ad1. Mar 14 00:14:44.456491 containerd[1934]: time="2026-03-14T00:14:44.455534173Z" level=error msg="Failed to destroy network for sandbox \"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.460141 containerd[1934]: time="2026-03-14T00:14:44.459565909Z" level=error msg="encountered an error cleaning up failed sandbox \"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.461468 containerd[1934]: time="2026-03-14T00:14:44.460366765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5fd54db7bb-9gfrr,Uid:460ebb16-a0e6-4f2b-b440-dc924ce7846b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.466348 kubelet[3153]: E0314 00:14:44.465719 3153 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.466348 kubelet[3153]: E0314 00:14:44.465842 3153 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fd54db7bb-9gfrr" Mar 14 00:14:44.466348 kubelet[3153]: E0314 00:14:44.465879 3153 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5fd54db7bb-9gfrr" Mar 14 00:14:44.468111 kubelet[3153]: E0314 00:14:44.465978 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5fd54db7bb-9gfrr_calico-system(460ebb16-a0e6-4f2b-b440-dc924ce7846b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-5fd54db7bb-9gfrr_calico-system(460ebb16-a0e6-4f2b-b440-dc924ce7846b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5fd54db7bb-9gfrr" podUID="460ebb16-a0e6-4f2b-b440-dc924ce7846b" Mar 14 00:14:44.496674 containerd[1934]: time="2026-03-14T00:14:44.496029085Z" level=error msg="Failed to destroy network for sandbox \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.499547 containerd[1934]: time="2026-03-14T00:14:44.499325209Z" level=error msg="encountered an error cleaning up failed sandbox \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.499547 containerd[1934]: time="2026-03-14T00:14:44.499460449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-646cdb9884-pj2xg,Uid:b2f14ff7-37b0-4a9b-9308-67b07dd8dd39,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.500747 kubelet[3153]: E0314 00:14:44.500047 3153 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.500747 kubelet[3153]: E0314 00:14:44.500142 3153 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-646cdb9884-pj2xg" Mar 14 00:14:44.500747 kubelet[3153]: E0314 00:14:44.500191 3153 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-646cdb9884-pj2xg" Mar 14 00:14:44.501076 kubelet[3153]: E0314 00:14:44.500287 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-646cdb9884-pj2xg_calico-system(b2f14ff7-37b0-4a9b-9308-67b07dd8dd39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-646cdb9884-pj2xg_calico-system(b2f14ff7-37b0-4a9b-9308-67b07dd8dd39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-646cdb9884-pj2xg" podUID="b2f14ff7-37b0-4a9b-9308-67b07dd8dd39" Mar 14 00:14:44.517854 containerd[1934]: time="2026-03-14T00:14:44.517537825Z" level=error msg="Failed to destroy network for sandbox \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.523032 containerd[1934]: time="2026-03-14T00:14:44.522213541Z" level=error msg="encountered an error cleaning up failed sandbox \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.524045 containerd[1934]: time="2026-03-14T00:14:44.523828129Z" level=error msg="Failed to destroy network for sandbox \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.525122 containerd[1934]: time="2026-03-14T00:14:44.524244457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-696c5cfc7f-ng7xc,Uid:9dac252b-d2f0-4f51-8a32-3e7e6f3e4118,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 
14 00:14:44.525356 kubelet[3153]: E0314 00:14:44.524667 3153 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.525356 kubelet[3153]: E0314 00:14:44.524761 3153 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-696c5cfc7f-ng7xc" Mar 14 00:14:44.525356 kubelet[3153]: E0314 00:14:44.524798 3153 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-696c5cfc7f-ng7xc" Mar 14 00:14:44.526361 kubelet[3153]: E0314 00:14:44.524889 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-696c5cfc7f-ng7xc_calico-system(9dac252b-d2f0-4f51-8a32-3e7e6f3e4118)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-696c5cfc7f-ng7xc_calico-system(9dac252b-d2f0-4f51-8a32-3e7e6f3e4118)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-696c5cfc7f-ng7xc" podUID="9dac252b-d2f0-4f51-8a32-3e7e6f3e4118" Mar 14 00:14:44.529121 containerd[1934]: time="2026-03-14T00:14:44.529031569Z" level=error msg="encountered an error cleaning up failed sandbox \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.529246 containerd[1934]: time="2026-03-14T00:14:44.529136041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-696c5cfc7f-hzr5w,Uid:89705421-5074-4ee1-8f8a-b24f0bbde701,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.529623 kubelet[3153]: E0314 00:14:44.529490 3153 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.529623 kubelet[3153]: E0314 00:14:44.529580 3153 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-696c5cfc7f-hzr5w" Mar 14 00:14:44.529752 kubelet[3153]: E0314 00:14:44.529612 3153 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-696c5cfc7f-hzr5w" Mar 14 00:14:44.531495 kubelet[3153]: E0314 00:14:44.530781 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-696c5cfc7f-hzr5w_calico-system(89705421-5074-4ee1-8f8a-b24f0bbde701)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-696c5cfc7f-hzr5w_calico-system(89705421-5074-4ee1-8f8a-b24f0bbde701)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-696c5cfc7f-hzr5w" podUID="89705421-5074-4ee1-8f8a-b24f0bbde701" Mar 14 00:14:44.532930 containerd[1934]: time="2026-03-14T00:14:44.532058449Z" level=error msg="Failed to destroy network for sandbox \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.534724 containerd[1934]: 
time="2026-03-14T00:14:44.534634777Z" level=error msg="Failed to destroy network for sandbox \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.535020 containerd[1934]: time="2026-03-14T00:14:44.534947665Z" level=error msg="Failed to destroy network for sandbox \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.536693 containerd[1934]: time="2026-03-14T00:14:44.536597053Z" level=error msg="encountered an error cleaning up failed sandbox \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.537578 containerd[1934]: time="2026-03-14T00:14:44.537264697Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-v8blg,Uid:d4e6f04d-05da-499d-9d04-d1cc54b42952,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.537750 containerd[1934]: time="2026-03-14T00:14:44.537189721Z" level=error msg="encountered an error cleaning up failed sandbox \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.538635 kubelet[3153]: E0314 00:14:44.538205 3153 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.538635 kubelet[3153]: E0314 00:14:44.538293 3153 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-v8blg" Mar 14 00:14:44.538635 kubelet[3153]: E0314 00:14:44.538326 3153 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-v8blg" Mar 14 00:14:44.538961 kubelet[3153]: E0314 00:14:44.538493 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-v8blg_kube-system(d4e6f04d-05da-499d-9d04-d1cc54b42952)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-v8blg_kube-system(d4e6f04d-05da-499d-9d04-d1cc54b42952)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-v8blg" podUID="d4e6f04d-05da-499d-9d04-d1cc54b42952" Mar 14 00:14:44.539697 containerd[1934]: time="2026-03-14T00:14:44.539413693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fzzfp,Uid:bc321aa2-bc1d-4b5f-9fdf-44eb597d2609,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.539891 kubelet[3153]: E0314 00:14:44.539817 3153 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.539978 kubelet[3153]: E0314 00:14:44.539891 3153 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-fzzfp" Mar 14 00:14:44.540056 kubelet[3153]: E0314 00:14:44.539954 3153 kuberuntime_manager.go:1558] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-fzzfp" Mar 14 00:14:44.540117 kubelet[3153]: E0314 00:14:44.540051 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-fzzfp_kube-system(bc321aa2-bc1d-4b5f-9fdf-44eb597d2609)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-fzzfp_kube-system(bc321aa2-bc1d-4b5f-9fdf-44eb597d2609)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-fzzfp" podUID="bc321aa2-bc1d-4b5f-9fdf-44eb597d2609" Mar 14 00:14:44.541739 containerd[1934]: time="2026-03-14T00:14:44.540809077Z" level=error msg="encountered an error cleaning up failed sandbox \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.541739 containerd[1934]: time="2026-03-14T00:14:44.540918061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-nbdj4,Uid:c8686320-fc9c-4a3f-a2c7-dfa460638fd7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.543865 kubelet[3153]: E0314 00:14:44.542644 3153 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.543865 kubelet[3153]: E0314 00:14:44.542752 3153 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-nbdj4" Mar 14 00:14:44.543865 kubelet[3153]: E0314 00:14:44.542786 3153 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-nbdj4" Mar 14 00:14:44.544158 kubelet[3153]: E0314 00:14:44.542869 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-nbdj4_calico-system(c8686320-fc9c-4a3f-a2c7-dfa460638fd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-nbdj4_calico-system(c8686320-fc9c-4a3f-a2c7-dfa460638fd7)\\\": rpc error: code 
= Unknown desc = failed to setup network for sandbox \\\"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-nbdj4" podUID="c8686320-fc9c-4a3f-a2c7-dfa460638fd7" Mar 14 00:14:44.574474 containerd[1934]: time="2026-03-14T00:14:44.572191166Z" level=info msg="StartContainer for \"ee32b4f0c07dd89dac1639e567bee4f363931d2b14716b42edbf12737f826ad1\" returns successfully" Mar 14 00:14:44.735827 systemd[1]: Created slice kubepods-besteffort-pod3bae5412_2e2d_4fc4_8221_ace1b28b2f13.slice - libcontainer container kubepods-besteffort-pod3bae5412_2e2d_4fc4_8221_ace1b28b2f13.slice. Mar 14 00:14:44.749683 containerd[1934]: time="2026-03-14T00:14:44.749570570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-45kq2,Uid:3bae5412-2e2d-4fc4-8221-ace1b28b2f13,Namespace:calico-system,Attempt:0,}" Mar 14 00:14:44.923989 containerd[1934]: time="2026-03-14T00:14:44.923927019Z" level=error msg="Failed to destroy network for sandbox \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.925284 containerd[1934]: time="2026-03-14T00:14:44.925209279Z" level=error msg="encountered an error cleaning up failed sandbox \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.925542 containerd[1934]: time="2026-03-14T00:14:44.925496967Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-45kq2,Uid:3bae5412-2e2d-4fc4-8221-ace1b28b2f13,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.925966 kubelet[3153]: E0314 00:14:44.925894 3153 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 14 00:14:44.926223 kubelet[3153]: E0314 00:14:44.926166 3153 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-45kq2" Mar 14 00:14:44.926409 kubelet[3153]: E0314 00:14:44.926340 3153 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-45kq2" Mar 14 00:14:44.927687 kubelet[3153]: E0314 00:14:44.927607 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-45kq2_calico-system(3bae5412-2e2d-4fc4-8221-ace1b28b2f13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-45kq2_calico-system(3bae5412-2e2d-4fc4-8221-ace1b28b2f13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-45kq2" podUID="3bae5412-2e2d-4fc4-8221-ace1b28b2f13" Mar 14 00:14:44.947902 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78-shm.mount: Deactivated successfully. Mar 14 00:14:44.948101 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a-shm.mount: Deactivated successfully. Mar 14 00:14:44.948268 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc-shm.mount: Deactivated successfully. Mar 14 00:14:44.957516 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0-shm.mount: Deactivated successfully. Mar 14 00:14:44.957732 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b-shm.mount: Deactivated successfully. Mar 14 00:14:44.957889 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f-shm.mount: Deactivated successfully. Mar 14 00:14:44.958041 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc-shm.mount: Deactivated successfully. 
Mar 14 00:14:45.045892 kubelet[3153]: I0314 00:14:45.043077 3153 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Mar 14 00:14:45.046903 containerd[1934]: time="2026-03-14T00:14:45.045798924Z" level=info msg="StopPodSandbox for \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\"" Mar 14 00:14:45.049472 containerd[1934]: time="2026-03-14T00:14:45.047737284Z" level=info msg="Ensure that sandbox b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666 in task-service has been cleanup successfully" Mar 14 00:14:45.051796 kubelet[3153]: I0314 00:14:45.051654 3153 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Mar 14 00:14:45.053365 containerd[1934]: time="2026-03-14T00:14:45.052779240Z" level=info msg="StopPodSandbox for \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\"" Mar 14 00:14:45.055808 containerd[1934]: time="2026-03-14T00:14:45.055210632Z" level=info msg="Ensure that sandbox 68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc in task-service has been cleanup successfully" Mar 14 00:14:45.060116 kubelet[3153]: I0314 00:14:45.060046 3153 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Mar 14 00:14:45.064588 containerd[1934]: time="2026-03-14T00:14:45.063972516Z" level=info msg="StopPodSandbox for \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\"" Mar 14 00:14:45.064588 containerd[1934]: time="2026-03-14T00:14:45.064328016Z" level=info msg="Ensure that sandbox a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78 in task-service has been cleanup successfully" Mar 14 00:14:45.077365 kubelet[3153]: I0314 00:14:45.077038 3153 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Mar 14 00:14:45.083947 containerd[1934]: time="2026-03-14T00:14:45.083134824Z" level=info msg="StopPodSandbox for \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\"" Mar 14 00:14:45.083947 containerd[1934]: time="2026-03-14T00:14:45.083468016Z" level=info msg="Ensure that sandbox 31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b in task-service has been cleanup successfully" Mar 14 00:14:45.086908 kubelet[3153]: I0314 00:14:45.085452 3153 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Mar 14 00:14:45.089470 containerd[1934]: time="2026-03-14T00:14:45.088063440Z" level=info msg="StopPodSandbox for \"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\"" Mar 14 00:14:45.090566 containerd[1934]: time="2026-03-14T00:14:45.090495012Z" level=info msg="Ensure that sandbox 7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc in task-service has been cleanup successfully" Mar 14 00:14:45.107118 kubelet[3153]: I0314 00:14:45.106958 3153 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Mar 14 00:14:45.120066 containerd[1934]: time="2026-03-14T00:14:45.119592636Z" level=info msg="StopPodSandbox for \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\"" Mar 14 00:14:45.122472 containerd[1934]: time="2026-03-14T00:14:45.120925692Z" level=info msg="Ensure that sandbox 44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a in task-service has been cleanup successfully" Mar 14 00:14:45.147596 kubelet[3153]: I0314 00:14:45.146960 3153 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Mar 14 00:14:45.155528 
containerd[1934]: time="2026-03-14T00:14:45.149842884Z" level=info msg="StopPodSandbox for \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\"" Mar 14 00:14:45.155528 containerd[1934]: time="2026-03-14T00:14:45.155328744Z" level=info msg="Ensure that sandbox 231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0 in task-service has been cleanup successfully" Mar 14 00:14:45.166580 kubelet[3153]: I0314 00:14:45.165307 3153 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-snzgj" podStartSLOduration=2.302191842 podStartE2EDuration="21.165285708s" podCreationTimestamp="2026-03-14 00:14:24 +0000 UTC" firstStartedPulling="2026-03-14 00:14:25.168512849 +0000 UTC m=+27.762693187" lastFinishedPulling="2026-03-14 00:14:44.031606691 +0000 UTC m=+46.625787053" observedRunningTime="2026-03-14 00:14:45.10637874 +0000 UTC m=+47.700559090" watchObservedRunningTime="2026-03-14 00:14:45.165285708 +0000 UTC m=+47.759466046" Mar 14 00:14:45.187467 kubelet[3153]: I0314 00:14:45.186726 3153 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Mar 14 00:14:45.188540 containerd[1934]: time="2026-03-14T00:14:45.188359429Z" level=info msg="StopPodSandbox for \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\"" Mar 14 00:14:45.189505 containerd[1934]: time="2026-03-14T00:14:45.189049573Z" level=info msg="Ensure that sandbox f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f in task-service has been cleanup successfully" Mar 14 00:14:46.332630 systemd[1]: run-containerd-runc-k8s.io-ee32b4f0c07dd89dac1639e567bee4f363931d2b14716b42edbf12737f826ad1-runc.f36hO5.mount: Deactivated successfully. 
Mar 14 00:14:46.473491 containerd[1934]: 2026-03-14 00:14:46.027 [INFO][4481] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Mar 14 00:14:46.473491 containerd[1934]: 2026-03-14 00:14:46.027 [INFO][4481] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" iface="eth0" netns="/var/run/netns/cni-9c9481d9-5ca8-832f-935c-da870fb8080c" Mar 14 00:14:46.473491 containerd[1934]: 2026-03-14 00:14:46.029 [INFO][4481] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" iface="eth0" netns="/var/run/netns/cni-9c9481d9-5ca8-832f-935c-da870fb8080c" Mar 14 00:14:46.473491 containerd[1934]: 2026-03-14 00:14:46.052 [INFO][4481] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" iface="eth0" netns="/var/run/netns/cni-9c9481d9-5ca8-832f-935c-da870fb8080c" Mar 14 00:14:46.473491 containerd[1934]: 2026-03-14 00:14:46.053 [INFO][4481] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Mar 14 00:14:46.473491 containerd[1934]: 2026-03-14 00:14:46.053 [INFO][4481] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Mar 14 00:14:46.473491 containerd[1934]: 2026-03-14 00:14:46.394 [INFO][4586] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" HandleID="k8s-pod-network.31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" Mar 14 00:14:46.473491 containerd[1934]: 2026-03-14 00:14:46.397 
[INFO][4586] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:46.473491 containerd[1934]: 2026-03-14 00:14:46.399 [INFO][4586] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:46.473491 containerd[1934]: 2026-03-14 00:14:46.445 [WARNING][4586] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" HandleID="k8s-pod-network.31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" Mar 14 00:14:46.473491 containerd[1934]: 2026-03-14 00:14:46.445 [INFO][4586] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" HandleID="k8s-pod-network.31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" Mar 14 00:14:46.473491 containerd[1934]: 2026-03-14 00:14:46.450 [INFO][4586] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:46.473491 containerd[1934]: 2026-03-14 00:14:46.464 [INFO][4481] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Mar 14 00:14:46.478590 containerd[1934]: time="2026-03-14T00:14:46.478516359Z" level=info msg="TearDown network for sandbox \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\" successfully" Mar 14 00:14:46.480652 containerd[1934]: time="2026-03-14T00:14:46.480514635Z" level=info msg="StopPodSandbox for \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\" returns successfully" Mar 14 00:14:46.482989 systemd[1]: run-netns-cni\x2d9c9481d9\x2d5ca8\x2d832f\x2d935c\x2dda870fb8080c.mount: Deactivated successfully. 
Mar 14 00:14:46.491196 containerd[1934]: time="2026-03-14T00:14:46.491054511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-696c5cfc7f-hzr5w,Uid:89705421-5074-4ee1-8f8a-b24f0bbde701,Namespace:calico-system,Attempt:1,}" Mar 14 00:14:46.506607 containerd[1934]: 2026-03-14 00:14:46.008 [INFO][4519] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Mar 14 00:14:46.506607 containerd[1934]: 2026-03-14 00:14:46.009 [INFO][4519] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" iface="eth0" netns="/var/run/netns/cni-0b62d5ee-0613-3be7-cf5d-34b3d2ff49d3" Mar 14 00:14:46.506607 containerd[1934]: 2026-03-14 00:14:46.009 [INFO][4519] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" iface="eth0" netns="/var/run/netns/cni-0b62d5ee-0613-3be7-cf5d-34b3d2ff49d3" Mar 14 00:14:46.506607 containerd[1934]: 2026-03-14 00:14:46.054 [INFO][4519] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" iface="eth0" netns="/var/run/netns/cni-0b62d5ee-0613-3be7-cf5d-34b3d2ff49d3" Mar 14 00:14:46.506607 containerd[1934]: 2026-03-14 00:14:46.054 [INFO][4519] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Mar 14 00:14:46.506607 containerd[1934]: 2026-03-14 00:14:46.054 [INFO][4519] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Mar 14 00:14:46.506607 containerd[1934]: 2026-03-14 00:14:46.408 [INFO][4583] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" HandleID="k8s-pod-network.231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Workload="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" Mar 14 00:14:46.506607 containerd[1934]: 2026-03-14 00:14:46.412 [INFO][4583] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:46.506607 containerd[1934]: 2026-03-14 00:14:46.450 [INFO][4583] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:46.506607 containerd[1934]: 2026-03-14 00:14:46.476 [WARNING][4583] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" HandleID="k8s-pod-network.231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Workload="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" Mar 14 00:14:46.506607 containerd[1934]: 2026-03-14 00:14:46.476 [INFO][4583] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" HandleID="k8s-pod-network.231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Workload="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" Mar 14 00:14:46.506607 containerd[1934]: 2026-03-14 00:14:46.484 [INFO][4583] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:46.506607 containerd[1934]: 2026-03-14 00:14:46.497 [INFO][4519] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Mar 14 00:14:46.512001 containerd[1934]: time="2026-03-14T00:14:46.506829867Z" level=info msg="TearDown network for sandbox \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\" successfully" Mar 14 00:14:46.512001 containerd[1934]: time="2026-03-14T00:14:46.506868699Z" level=info msg="StopPodSandbox for \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\" returns successfully" Mar 14 00:14:46.515111 systemd[1]: run-netns-cni\x2d0b62d5ee\x2d0613\x2d3be7\x2dcf5d\x2d34b3d2ff49d3.mount: Deactivated successfully. 
Mar 14 00:14:46.516654 containerd[1934]: time="2026-03-14T00:14:46.515087643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-nbdj4,Uid:c8686320-fc9c-4a3f-a2c7-dfa460638fd7,Namespace:calico-system,Attempt:1,}" Mar 14 00:14:46.576359 containerd[1934]: 2026-03-14 00:14:46.025 [INFO][4444] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Mar 14 00:14:46.576359 containerd[1934]: 2026-03-14 00:14:46.026 [INFO][4444] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" iface="eth0" netns="/var/run/netns/cni-a5f6df41-ea0a-4e49-8c93-ac662b6778c7" Mar 14 00:14:46.576359 containerd[1934]: 2026-03-14 00:14:46.031 [INFO][4444] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" iface="eth0" netns="/var/run/netns/cni-a5f6df41-ea0a-4e49-8c93-ac662b6778c7" Mar 14 00:14:46.576359 containerd[1934]: 2026-03-14 00:14:46.051 [INFO][4444] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" iface="eth0" netns="/var/run/netns/cni-a5f6df41-ea0a-4e49-8c93-ac662b6778c7" Mar 14 00:14:46.576359 containerd[1934]: 2026-03-14 00:14:46.051 [INFO][4444] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Mar 14 00:14:46.576359 containerd[1934]: 2026-03-14 00:14:46.051 [INFO][4444] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Mar 14 00:14:46.576359 containerd[1934]: 2026-03-14 00:14:46.413 [INFO][4582] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" HandleID="k8s-pod-network.68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" Mar 14 00:14:46.576359 containerd[1934]: 2026-03-14 00:14:46.413 [INFO][4582] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:46.576359 containerd[1934]: 2026-03-14 00:14:46.484 [INFO][4582] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:46.576359 containerd[1934]: 2026-03-14 00:14:46.528 [WARNING][4582] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" HandleID="k8s-pod-network.68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" Mar 14 00:14:46.576359 containerd[1934]: 2026-03-14 00:14:46.528 [INFO][4582] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" HandleID="k8s-pod-network.68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" Mar 14 00:14:46.576359 containerd[1934]: 2026-03-14 00:14:46.532 [INFO][4582] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:46.576359 containerd[1934]: 2026-03-14 00:14:46.558 [INFO][4444] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Mar 14 00:14:46.581190 containerd[1934]: time="2026-03-14T00:14:46.579813676Z" level=info msg="TearDown network for sandbox \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\" successfully" Mar 14 00:14:46.581361 containerd[1934]: time="2026-03-14T00:14:46.581186752Z" level=info msg="StopPodSandbox for \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\" returns successfully" Mar 14 00:14:46.586361 containerd[1934]: time="2026-03-14T00:14:46.586187392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-v8blg,Uid:d4e6f04d-05da-499d-9d04-d1cc54b42952,Namespace:kube-system,Attempt:1,}" Mar 14 00:14:46.617406 containerd[1934]: 2026-03-14 00:14:46.028 [INFO][4517] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Mar 14 00:14:46.617406 containerd[1934]: 2026-03-14 00:14:46.030 [INFO][4517] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" iface="eth0" netns="/var/run/netns/cni-86933f2a-78ef-7df2-52f6-325dc49c091b" Mar 14 00:14:46.617406 containerd[1934]: 2026-03-14 00:14:46.031 [INFO][4517] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" iface="eth0" netns="/var/run/netns/cni-86933f2a-78ef-7df2-52f6-325dc49c091b" Mar 14 00:14:46.617406 containerd[1934]: 2026-03-14 00:14:46.050 [INFO][4517] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" iface="eth0" netns="/var/run/netns/cni-86933f2a-78ef-7df2-52f6-325dc49c091b" Mar 14 00:14:46.617406 containerd[1934]: 2026-03-14 00:14:46.050 [INFO][4517] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Mar 14 00:14:46.617406 containerd[1934]: 2026-03-14 00:14:46.050 [INFO][4517] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Mar 14 00:14:46.617406 containerd[1934]: 2026-03-14 00:14:46.410 [INFO][4585] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" HandleID="k8s-pod-network.f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" Mar 14 00:14:46.617406 containerd[1934]: 2026-03-14 00:14:46.417 [INFO][4585] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:46.617406 containerd[1934]: 2026-03-14 00:14:46.533 [INFO][4585] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:14:46.617406 containerd[1934]: 2026-03-14 00:14:46.589 [WARNING][4585] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" HandleID="k8s-pod-network.f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" Mar 14 00:14:46.617406 containerd[1934]: 2026-03-14 00:14:46.590 [INFO][4585] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" HandleID="k8s-pod-network.f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" Mar 14 00:14:46.617406 containerd[1934]: 2026-03-14 00:14:46.594 [INFO][4585] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:46.617406 containerd[1934]: 2026-03-14 00:14:46.609 [INFO][4517] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Mar 14 00:14:46.618685 containerd[1934]: time="2026-03-14T00:14:46.618529300Z" level=info msg="TearDown network for sandbox \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\" successfully" Mar 14 00:14:46.618685 containerd[1934]: time="2026-03-14T00:14:46.618572992Z" level=info msg="StopPodSandbox for \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\" returns successfully" Mar 14 00:14:46.623416 containerd[1934]: time="2026-03-14T00:14:46.623296252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-696c5cfc7f-ng7xc,Uid:9dac252b-d2f0-4f51-8a32-3e7e6f3e4118,Namespace:calico-system,Attempt:1,}" Mar 14 00:14:46.686076 containerd[1934]: 2026-03-14 00:14:46.010 [INFO][4478] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Mar 14 00:14:46.686076 containerd[1934]: 2026-03-14 00:14:46.010 [INFO][4478] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" iface="eth0" netns="/var/run/netns/cni-d3bdedab-b954-cb25-a90f-ae77720d074d" Mar 14 00:14:46.686076 containerd[1934]: 2026-03-14 00:14:46.011 [INFO][4478] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" iface="eth0" netns="/var/run/netns/cni-d3bdedab-b954-cb25-a90f-ae77720d074d" Mar 14 00:14:46.686076 containerd[1934]: 2026-03-14 00:14:46.052 [INFO][4478] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" iface="eth0" netns="/var/run/netns/cni-d3bdedab-b954-cb25-a90f-ae77720d074d" Mar 14 00:14:46.686076 containerd[1934]: 2026-03-14 00:14:46.052 [INFO][4478] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Mar 14 00:14:46.686076 containerd[1934]: 2026-03-14 00:14:46.052 [INFO][4478] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Mar 14 00:14:46.686076 containerd[1934]: 2026-03-14 00:14:46.436 [INFO][4584] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" HandleID="k8s-pod-network.a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" Mar 14 00:14:46.686076 containerd[1934]: 2026-03-14 00:14:46.442 [INFO][4584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:46.686076 containerd[1934]: 2026-03-14 00:14:46.594 [INFO][4584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:46.686076 containerd[1934]: 2026-03-14 00:14:46.629 [WARNING][4584] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" HandleID="k8s-pod-network.a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" Mar 14 00:14:46.686076 containerd[1934]: 2026-03-14 00:14:46.630 [INFO][4584] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" HandleID="k8s-pod-network.a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" Mar 14 00:14:46.686076 containerd[1934]: 2026-03-14 00:14:46.637 [INFO][4584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:46.686076 containerd[1934]: 2026-03-14 00:14:46.656 [INFO][4478] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Mar 14 00:14:46.688394 containerd[1934]: time="2026-03-14T00:14:46.686448736Z" level=info msg="TearDown network for sandbox \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\" successfully" Mar 14 00:14:46.688394 containerd[1934]: time="2026-03-14T00:14:46.686551180Z" level=info msg="StopPodSandbox for \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\" returns successfully" Mar 14 00:14:46.711085 containerd[1934]: time="2026-03-14T00:14:46.710389924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fzzfp,Uid:bc321aa2-bc1d-4b5f-9fdf-44eb597d2609,Namespace:kube-system,Attempt:1,}" Mar 14 00:14:46.728465 containerd[1934]: 2026-03-14 00:14:46.105 [INFO][4443] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Mar 14 00:14:46.728465 containerd[1934]: 2026-03-14 00:14:46.109 [INFO][4443] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" iface="eth0" netns="/var/run/netns/cni-c470a82e-f627-de86-cb1b-ba9deba6749f" Mar 14 00:14:46.728465 containerd[1934]: 2026-03-14 00:14:46.110 [INFO][4443] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" iface="eth0" netns="/var/run/netns/cni-c470a82e-f627-de86-cb1b-ba9deba6749f" Mar 14 00:14:46.728465 containerd[1934]: 2026-03-14 00:14:46.110 [INFO][4443] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" iface="eth0" netns="/var/run/netns/cni-c470a82e-f627-de86-cb1b-ba9deba6749f" Mar 14 00:14:46.728465 containerd[1934]: 2026-03-14 00:14:46.110 [INFO][4443] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Mar 14 00:14:46.728465 containerd[1934]: 2026-03-14 00:14:46.110 [INFO][4443] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Mar 14 00:14:46.728465 containerd[1934]: 2026-03-14 00:14:46.446 [INFO][4609] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" HandleID="k8s-pod-network.b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Workload="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" Mar 14 00:14:46.728465 containerd[1934]: 2026-03-14 00:14:46.446 [INFO][4609] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:46.728465 containerd[1934]: 2026-03-14 00:14:46.643 [INFO][4609] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:46.728465 containerd[1934]: 2026-03-14 00:14:46.680 [WARNING][4609] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" HandleID="k8s-pod-network.b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Workload="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" Mar 14 00:14:46.728465 containerd[1934]: 2026-03-14 00:14:46.680 [INFO][4609] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" HandleID="k8s-pod-network.b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Workload="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" Mar 14 00:14:46.728465 containerd[1934]: 2026-03-14 00:14:46.685 [INFO][4609] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:46.728465 containerd[1934]: 2026-03-14 00:14:46.709 [INFO][4443] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Mar 14 00:14:46.738502 containerd[1934]: time="2026-03-14T00:14:46.737823724Z" level=info msg="TearDown network for sandbox \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\" successfully" Mar 14 00:14:46.738502 containerd[1934]: time="2026-03-14T00:14:46.737876344Z" level=info msg="StopPodSandbox for \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\" returns successfully" Mar 14 00:14:46.750407 containerd[1934]: time="2026-03-14T00:14:46.750220528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-45kq2,Uid:3bae5412-2e2d-4fc4-8221-ace1b28b2f13,Namespace:calico-system,Attempt:1,}" Mar 14 00:14:46.785795 containerd[1934]: 2026-03-14 00:14:46.029 [INFO][4513] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Mar 14 00:14:46.785795 containerd[1934]: 2026-03-14 00:14:46.029 [INFO][4513] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" iface="eth0" netns="/var/run/netns/cni-09587a09-956e-fc72-5533-437f7a51f6d2" Mar 14 00:14:46.785795 containerd[1934]: 2026-03-14 00:14:46.032 [INFO][4513] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" iface="eth0" netns="/var/run/netns/cni-09587a09-956e-fc72-5533-437f7a51f6d2" Mar 14 00:14:46.785795 containerd[1934]: 2026-03-14 00:14:46.059 [INFO][4513] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" iface="eth0" netns="/var/run/netns/cni-09587a09-956e-fc72-5533-437f7a51f6d2" Mar 14 00:14:46.785795 containerd[1934]: 2026-03-14 00:14:46.059 [INFO][4513] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Mar 14 00:14:46.785795 containerd[1934]: 2026-03-14 00:14:46.060 [INFO][4513] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Mar 14 00:14:46.785795 containerd[1934]: 2026-03-14 00:14:46.449 [INFO][4588] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" HandleID="k8s-pod-network.44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Workload="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" Mar 14 00:14:46.785795 containerd[1934]: 2026-03-14 00:14:46.449 [INFO][4588] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:46.785795 containerd[1934]: 2026-03-14 00:14:46.689 [INFO][4588] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:14:46.785795 containerd[1934]: 2026-03-14 00:14:46.725 [WARNING][4588] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" HandleID="k8s-pod-network.44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Workload="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" Mar 14 00:14:46.785795 containerd[1934]: 2026-03-14 00:14:46.725 [INFO][4588] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" HandleID="k8s-pod-network.44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Workload="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" Mar 14 00:14:46.785795 containerd[1934]: 2026-03-14 00:14:46.731 [INFO][4588] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:46.785795 containerd[1934]: 2026-03-14 00:14:46.765 [INFO][4513] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Mar 14 00:14:46.788245 containerd[1934]: time="2026-03-14T00:14:46.787296965Z" level=info msg="TearDown network for sandbox \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\" successfully" Mar 14 00:14:46.788245 containerd[1934]: time="2026-03-14T00:14:46.787342301Z" level=info msg="StopPodSandbox for \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\" returns successfully" Mar 14 00:14:46.802826 containerd[1934]: time="2026-03-14T00:14:46.799924325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-646cdb9884-pj2xg,Uid:b2f14ff7-37b0-4a9b-9308-67b07dd8dd39,Namespace:calico-system,Attempt:1,}" Mar 14 00:14:46.841571 containerd[1934]: 2026-03-14 00:14:46.053 [INFO][4475] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Mar 14 00:14:46.841571 containerd[1934]: 2026-03-14 00:14:46.054 [INFO][4475] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" iface="eth0" netns="/var/run/netns/cni-fd15563a-f795-8813-a3a4-2539d6215bc2" Mar 14 00:14:46.841571 containerd[1934]: 2026-03-14 00:14:46.055 [INFO][4475] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" iface="eth0" netns="/var/run/netns/cni-fd15563a-f795-8813-a3a4-2539d6215bc2" Mar 14 00:14:46.841571 containerd[1934]: 2026-03-14 00:14:46.066 [INFO][4475] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" iface="eth0" netns="/var/run/netns/cni-fd15563a-f795-8813-a3a4-2539d6215bc2" Mar 14 00:14:46.841571 containerd[1934]: 2026-03-14 00:14:46.066 [INFO][4475] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Mar 14 00:14:46.841571 containerd[1934]: 2026-03-14 00:14:46.066 [INFO][4475] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Mar 14 00:14:46.841571 containerd[1934]: 2026-03-14 00:14:46.457 [INFO][4589] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" HandleID="k8s-pod-network.7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Workload="ip--172--31--18--130-k8s-whisker--5fd54db7bb--9gfrr-eth0" Mar 14 00:14:46.841571 containerd[1934]: 2026-03-14 00:14:46.457 [INFO][4589] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:46.841571 containerd[1934]: 2026-03-14 00:14:46.734 [INFO][4589] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:46.841571 containerd[1934]: 2026-03-14 00:14:46.788 [WARNING][4589] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" HandleID="k8s-pod-network.7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Workload="ip--172--31--18--130-k8s-whisker--5fd54db7bb--9gfrr-eth0" Mar 14 00:14:46.841571 containerd[1934]: 2026-03-14 00:14:46.789 [INFO][4589] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" HandleID="k8s-pod-network.7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Workload="ip--172--31--18--130-k8s-whisker--5fd54db7bb--9gfrr-eth0" Mar 14 00:14:46.841571 containerd[1934]: 2026-03-14 00:14:46.796 [INFO][4589] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:46.841571 containerd[1934]: 2026-03-14 00:14:46.827 [INFO][4475] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Mar 14 00:14:46.853727 containerd[1934]: time="2026-03-14T00:14:46.852838121Z" level=info msg="TearDown network for sandbox \"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\" successfully" Mar 14 00:14:46.853727 containerd[1934]: time="2026-03-14T00:14:46.852897017Z" level=info msg="StopPodSandbox for \"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\" returns successfully" Mar 14 00:14:47.005836 kubelet[3153]: I0314 00:14:47.003895 3153 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/460ebb16-a0e6-4f2b-b440-dc924ce7846b-nginx-config\" (UniqueName: \"kubernetes.io/configmap/460ebb16-a0e6-4f2b-b440-dc924ce7846b-nginx-config\") pod \"460ebb16-a0e6-4f2b-b440-dc924ce7846b\" (UID: \"460ebb16-a0e6-4f2b-b440-dc924ce7846b\") " Mar 14 00:14:47.005836 kubelet[3153]: I0314 00:14:47.003995 3153 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"kubernetes.io/secret/460ebb16-a0e6-4f2b-b440-dc924ce7846b-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/460ebb16-a0e6-4f2b-b440-dc924ce7846b-whisker-backend-key-pair\") pod \"460ebb16-a0e6-4f2b-b440-dc924ce7846b\" (UID: \"460ebb16-a0e6-4f2b-b440-dc924ce7846b\") " Mar 14 00:14:47.005836 kubelet[3153]: I0314 00:14:47.004043 3153 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/460ebb16-a0e6-4f2b-b440-dc924ce7846b-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/460ebb16-a0e6-4f2b-b440-dc924ce7846b-whisker-ca-bundle\") pod \"460ebb16-a0e6-4f2b-b440-dc924ce7846b\" (UID: \"460ebb16-a0e6-4f2b-b440-dc924ce7846b\") " Mar 14 00:14:47.005836 kubelet[3153]: I0314 00:14:47.004101 3153 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/460ebb16-a0e6-4f2b-b440-dc924ce7846b-kube-api-access-vxf8l\" (UniqueName: \"kubernetes.io/projected/460ebb16-a0e6-4f2b-b440-dc924ce7846b-kube-api-access-vxf8l\") pod \"460ebb16-a0e6-4f2b-b440-dc924ce7846b\" (UID: \"460ebb16-a0e6-4f2b-b440-dc924ce7846b\") " Mar 14 00:14:47.015669 kubelet[3153]: I0314 00:14:47.013158 3153 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/460ebb16-a0e6-4f2b-b440-dc924ce7846b-whisker-ca-bundle" pod "460ebb16-a0e6-4f2b-b440-dc924ce7846b" (UID: "460ebb16-a0e6-4f2b-b440-dc924ce7846b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:14:47.022595 kubelet[3153]: I0314 00:14:47.022465 3153 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/460ebb16-a0e6-4f2b-b440-dc924ce7846b-nginx-config" pod "460ebb16-a0e6-4f2b-b440-dc924ce7846b" (UID: "460ebb16-a0e6-4f2b-b440-dc924ce7846b"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:14:47.079933 kubelet[3153]: I0314 00:14:47.079841 3153 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/460ebb16-a0e6-4f2b-b440-dc924ce7846b-kube-api-access-vxf8l" pod "460ebb16-a0e6-4f2b-b440-dc924ce7846b" (UID: "460ebb16-a0e6-4f2b-b440-dc924ce7846b"). InnerVolumeSpecName "kube-api-access-vxf8l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:14:47.080469 kubelet[3153]: I0314 00:14:47.080002 3153 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/460ebb16-a0e6-4f2b-b440-dc924ce7846b-whisker-backend-key-pair" pod "460ebb16-a0e6-4f2b-b440-dc924ce7846b" (UID: "460ebb16-a0e6-4f2b-b440-dc924ce7846b"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:14:47.106033 kubelet[3153]: I0314 00:14:47.105270 3153 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/460ebb16-a0e6-4f2b-b440-dc924ce7846b-whisker-backend-key-pair\") on node \"ip-172-31-18-130\" DevicePath \"\"" Mar 14 00:14:47.106033 kubelet[3153]: I0314 00:14:47.105320 3153 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/460ebb16-a0e6-4f2b-b440-dc924ce7846b-whisker-ca-bundle\") on node \"ip-172-31-18-130\" DevicePath \"\"" Mar 14 00:14:47.106033 kubelet[3153]: I0314 00:14:47.105343 3153 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vxf8l\" (UniqueName: \"kubernetes.io/projected/460ebb16-a0e6-4f2b-b440-dc924ce7846b-kube-api-access-vxf8l\") on node \"ip-172-31-18-130\" DevicePath \"\"" Mar 14 00:14:47.106033 kubelet[3153]: I0314 00:14:47.105366 3153 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/460ebb16-a0e6-4f2b-b440-dc924ce7846b-nginx-config\") on node 
\"ip-172-31-18-130\" DevicePath \"\"" Mar 14 00:14:47.226374 systemd[1]: Removed slice kubepods-besteffort-pod460ebb16_a0e6_4f2b_b440_dc924ce7846b.slice - libcontainer container kubepods-besteffort-pod460ebb16_a0e6_4f2b_b440_dc924ce7846b.slice. Mar 14 00:14:47.355878 systemd[1]: run-netns-cni\x2dc470a82e\x2df627\x2dde86\x2dcb1b\x2dba9deba6749f.mount: Deactivated successfully. Mar 14 00:14:47.356073 systemd[1]: run-netns-cni\x2dd3bdedab\x2db954\x2dcb25\x2da90f\x2dae77720d074d.mount: Deactivated successfully. Mar 14 00:14:47.356242 systemd[1]: run-netns-cni\x2d09587a09\x2d956e\x2dfc72\x2d5533\x2d437f7a51f6d2.mount: Deactivated successfully. Mar 14 00:14:47.356380 systemd[1]: run-netns-cni\x2da5f6df41\x2dea0a\x2d4e49\x2d8c93\x2dac662b6778c7.mount: Deactivated successfully. Mar 14 00:14:47.358534 systemd[1]: run-netns-cni\x2d86933f2a\x2d78ef\x2d7df2\x2d52f6\x2d325dc49c091b.mount: Deactivated successfully. Mar 14 00:14:47.358703 systemd[1]: run-netns-cni\x2dfd15563a\x2df795\x2d8813\x2da3a4\x2d2539d6215bc2.mount: Deactivated successfully. Mar 14 00:14:47.358864 systemd[1]: var-lib-kubelet-pods-460ebb16\x2da0e6\x2d4f2b\x2db440\x2ddc924ce7846b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvxf8l.mount: Deactivated successfully. Mar 14 00:14:47.359048 systemd[1]: var-lib-kubelet-pods-460ebb16\x2da0e6\x2d4f2b\x2db440\x2ddc924ce7846b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 14 00:14:47.542955 systemd[1]: Created slice kubepods-besteffort-pod3b987674_767d_428c_b72b_e820b77114cf.slice - libcontainer container kubepods-besteffort-pod3b987674_767d_428c_b72b_e820b77114cf.slice. 
Mar 14 00:14:47.612467 kubelet[3153]: I0314 00:14:47.610093 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b987674-767d-428c-b72b-e820b77114cf-whisker-ca-bundle\") pod \"whisker-667f9f4987-qrmz9\" (UID: \"3b987674-767d-428c-b72b-e820b77114cf\") " pod="calico-system/whisker-667f9f4987-qrmz9" Mar 14 00:14:47.612467 kubelet[3153]: I0314 00:14:47.610170 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3b987674-767d-428c-b72b-e820b77114cf-nginx-config\") pod \"whisker-667f9f4987-qrmz9\" (UID: \"3b987674-767d-428c-b72b-e820b77114cf\") " pod="calico-system/whisker-667f9f4987-qrmz9" Mar 14 00:14:47.612467 kubelet[3153]: I0314 00:14:47.610214 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3b987674-767d-428c-b72b-e820b77114cf-whisker-backend-key-pair\") pod \"whisker-667f9f4987-qrmz9\" (UID: \"3b987674-767d-428c-b72b-e820b77114cf\") " pod="calico-system/whisker-667f9f4987-qrmz9" Mar 14 00:14:47.612467 kubelet[3153]: I0314 00:14:47.610260 3153 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm5cb\" (UniqueName: \"kubernetes.io/projected/3b987674-767d-428c-b72b-e820b77114cf-kube-api-access-qm5cb\") pod \"whisker-667f9f4987-qrmz9\" (UID: \"3b987674-767d-428c-b72b-e820b77114cf\") " pod="calico-system/whisker-667f9f4987-qrmz9" Mar 14 00:14:47.775465 kubelet[3153]: I0314 00:14:47.775207 3153 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="460ebb16-a0e6-4f2b-b440-dc924ce7846b" path="/var/lib/kubelet/pods/460ebb16-a0e6-4f2b-b440-dc924ce7846b/volumes" Mar 14 00:14:47.801375 (udev-worker)[4863]: Network interface NamePolicy= disabled on kernel command line. 
Mar 14 00:14:47.803774 systemd-networkd[1852]: calif07464b8a09: Link UP Mar 14 00:14:47.809756 systemd-networkd[1852]: calif07464b8a09: Gained carrier Mar 14 00:14:47.867256 containerd[1934]: time="2026-03-14T00:14:47.866030430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-667f9f4987-qrmz9,Uid:3b987674-767d-428c-b72b-e820b77114cf,Namespace:calico-system,Attempt:0,}" Mar 14 00:14:48.018225 (udev-worker)[4862]: Network interface NamePolicy= disabled on kernel command line. Mar 14 00:14:48.027069 systemd-networkd[1852]: cali2ea8ce6114e: Link UP Mar 14 00:14:48.033355 systemd-networkd[1852]: cali2ea8ce6114e: Gained carrier Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:46.718 [ERROR][4654] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:46.824 [INFO][4654] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0 calico-apiserver-696c5cfc7f- calico-system 89705421-5074-4ee1-8f8a-b24f0bbde701 945 0 2026-03-14 00:14:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:696c5cfc7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-130 calico-apiserver-696c5cfc7f-hzr5w eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif07464b8a09 [] [] }} ContainerID="dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" Namespace="calico-system" Pod="calico-apiserver-696c5cfc7f-hzr5w" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-" Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:46.833 
[INFO][4654] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" Namespace="calico-system" Pod="calico-apiserver-696c5cfc7f-hzr5w" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.210 [INFO][4740] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" HandleID="k8s-pod-network.dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.277 [INFO][4740] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" HandleID="k8s-pod-network.dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a8490), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-130", "pod":"calico-apiserver-696c5cfc7f-hzr5w", "timestamp":"2026-03-14 00:14:47.210357831 +0000 UTC"}, Hostname:"ip-172-31-18-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x400044a000)} Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.277 [INFO][4740] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.412 [INFO][4740] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.412 [INFO][4740] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-130' Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.512 [INFO][4740] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" host="ip-172-31-18-130" Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.613 [INFO][4740] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-130" Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.643 [INFO][4740] ipam/ipam.go 526: Trying affinity for 192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.650 [INFO][4740] ipam/ipam.go 160: Attempting to load block cidr=192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.663 [INFO][4740] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.663 [INFO][4740] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" host="ip-172-31-18-130" Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.670 [INFO][4740] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.688 [INFO][4740] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" host="ip-172-31-18-130" Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.706 [INFO][4740] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.44.65/26] block=192.168.44.64/26 
handle="k8s-pod-network.dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" host="ip-172-31-18-130" Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.706 [INFO][4740] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.44.65/26] handle="k8s-pod-network.dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" host="ip-172-31-18-130" Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.706 [INFO][4740] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:48.078215 containerd[1934]: 2026-03-14 00:14:47.706 [INFO][4740] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.44.65/26] IPv6=[] ContainerID="dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" HandleID="k8s-pod-network.dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" Mar 14 00:14:48.079631 containerd[1934]: 2026-03-14 00:14:47.741 [INFO][4654] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" Namespace="calico-system" Pod="calico-apiserver-696c5cfc7f-hzr5w" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0", GenerateName:"calico-apiserver-696c5cfc7f-", Namespace:"calico-system", SelfLink:"", UID:"89705421-5074-4ee1-8f8a-b24f0bbde701", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"696c5cfc7f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"", Pod:"calico-apiserver-696c5cfc7f-hzr5w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif07464b8a09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:48.079631 containerd[1934]: 2026-03-14 00:14:47.742 [INFO][4654] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.65/32] ContainerID="dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" Namespace="calico-system" Pod="calico-apiserver-696c5cfc7f-hzr5w" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" Mar 14 00:14:48.079631 containerd[1934]: 2026-03-14 00:14:47.742 [INFO][4654] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif07464b8a09 ContainerID="dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" Namespace="calico-system" Pod="calico-apiserver-696c5cfc7f-hzr5w" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" Mar 14 00:14:48.079631 containerd[1934]: 2026-03-14 00:14:47.816 [INFO][4654] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" Namespace="calico-system" Pod="calico-apiserver-696c5cfc7f-hzr5w" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" Mar 14 00:14:48.079631 containerd[1934]: 2026-03-14 00:14:47.820 [INFO][4654] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" Namespace="calico-system" Pod="calico-apiserver-696c5cfc7f-hzr5w" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0", GenerateName:"calico-apiserver-696c5cfc7f-", Namespace:"calico-system", SelfLink:"", UID:"89705421-5074-4ee1-8f8a-b24f0bbde701", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"696c5cfc7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda", Pod:"calico-apiserver-696c5cfc7f-hzr5w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif07464b8a09", MAC:"fe:c5:28:61:0c:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:48.079631 containerd[1934]: 2026-03-14 00:14:48.037 [INFO][4654] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda" Namespace="calico-system" Pod="calico-apiserver-696c5cfc7f-hzr5w" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:46.702 [ERROR][4661] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:46.821 [INFO][4661] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0 goldmane-9f7667bb8- calico-system c8686320-fc9c-4a3f-a2c7-dfa460638fd7 941 0 2026-03-14 00:14:23 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-18-130 goldmane-9f7667bb8-nbdj4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2ea8ce6114e [] [] }} ContainerID="4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" Namespace="calico-system" Pod="goldmane-9f7667bb8-nbdj4" WorkloadEndpoint="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-" Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:46.831 [INFO][4661] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" Namespace="calico-system" Pod="goldmane-9f7667bb8-nbdj4" WorkloadEndpoint="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:47.398 [INFO][4744] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" HandleID="k8s-pod-network.4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" Workload="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:47.544 [INFO][4744] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" HandleID="k8s-pod-network.4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" Workload="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039c6e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-130", "pod":"goldmane-9f7667bb8-nbdj4", "timestamp":"2026-03-14 00:14:47.398924872 +0000 UTC"}, Hostname:"ip-172-31-18-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002fd340)} Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:47.544 [INFO][4744] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:47.707 [INFO][4744] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:47.711 [INFO][4744] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-130' Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:47.719 [INFO][4744] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" host="ip-172-31-18-130" Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:47.761 [INFO][4744] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-130" Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:47.817 [INFO][4744] ipam/ipam.go 526: Trying affinity for 192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:47.831 [INFO][4744] ipam/ipam.go 160: Attempting to load block cidr=192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:47.839 [INFO][4744] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:47.839 [INFO][4744] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" host="ip-172-31-18-130" Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:47.850 [INFO][4744] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:47.875 [INFO][4744] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" host="ip-172-31-18-130" Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:48.000 [INFO][4744] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.44.66/26] block=192.168.44.64/26 
handle="k8s-pod-network.4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" host="ip-172-31-18-130" Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:48.000 [INFO][4744] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.44.66/26] handle="k8s-pod-network.4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" host="ip-172-31-18-130" Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:48.000 [INFO][4744] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:48.148274 containerd[1934]: 2026-03-14 00:14:48.000 [INFO][4744] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.44.66/26] IPv6=[] ContainerID="4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" HandleID="k8s-pod-network.4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" Workload="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" Mar 14 00:14:48.149523 containerd[1934]: 2026-03-14 00:14:48.008 [INFO][4661] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" Namespace="calico-system" Pod="goldmane-9f7667bb8-nbdj4" WorkloadEndpoint="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"c8686320-fc9c-4a3f-a2c7-dfa460638fd7", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"", Pod:"goldmane-9f7667bb8-nbdj4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2ea8ce6114e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:48.149523 containerd[1934]: 2026-03-14 00:14:48.009 [INFO][4661] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.66/32] ContainerID="4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" Namespace="calico-system" Pod="goldmane-9f7667bb8-nbdj4" WorkloadEndpoint="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" Mar 14 00:14:48.149523 containerd[1934]: 2026-03-14 00:14:48.009 [INFO][4661] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ea8ce6114e ContainerID="4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" Namespace="calico-system" Pod="goldmane-9f7667bb8-nbdj4" WorkloadEndpoint="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" Mar 14 00:14:48.149523 containerd[1934]: 2026-03-14 00:14:48.066 [INFO][4661] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" Namespace="calico-system" Pod="goldmane-9f7667bb8-nbdj4" WorkloadEndpoint="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" Mar 14 00:14:48.149523 containerd[1934]: 2026-03-14 00:14:48.072 [INFO][4661] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" 
Namespace="calico-system" Pod="goldmane-9f7667bb8-nbdj4" WorkloadEndpoint="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"c8686320-fc9c-4a3f-a2c7-dfa460638fd7", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab", Pod:"goldmane-9f7667bb8-nbdj4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2ea8ce6114e", MAC:"ce:73:20:25:c6:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:48.149523 containerd[1934]: 2026-03-14 00:14:48.128 [INFO][4661] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab" Namespace="calico-system" Pod="goldmane-9f7667bb8-nbdj4" WorkloadEndpoint="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" Mar 14 00:14:48.247252 
containerd[1934]: time="2026-03-14T00:14:48.243738976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:48.247252 containerd[1934]: time="2026-03-14T00:14:48.243861004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:48.247252 containerd[1934]: time="2026-03-14T00:14:48.245152132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:48.247252 containerd[1934]: time="2026-03-14T00:14:48.246743176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:48.353620 systemd[1]: run-containerd-runc-k8s.io-dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda-runc.2TKgR1.mount: Deactivated successfully. Mar 14 00:14:48.392001 systemd-networkd[1852]: cali3a1e0477228: Link UP Mar 14 00:14:48.400841 systemd[1]: Started cri-containerd-dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda.scope - libcontainer container dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda. Mar 14 00:14:48.404776 systemd-networkd[1852]: cali3a1e0477228: Gained carrier Mar 14 00:14:48.423576 containerd[1934]: time="2026-03-14T00:14:48.421864301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:48.423576 containerd[1934]: time="2026-03-14T00:14:48.422027657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:48.423576 containerd[1934]: time="2026-03-14T00:14:48.422066825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:48.423576 containerd[1934]: time="2026-03-14T00:14:48.422341517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:48.512773 systemd[1]: Started cri-containerd-4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab.scope - libcontainer container 4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab. Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:46.991 [ERROR][4678] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:47.086 [INFO][4678] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0 coredns-7d764666f9- kube-system d4e6f04d-05da-499d-9d04-d1cc54b42952 944 0 2026-03-14 00:14:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-130 coredns-7d764666f9-v8blg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3a1e0477228 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" Namespace="kube-system" Pod="coredns-7d764666f9-v8blg" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-" Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:47.090 [INFO][4678] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" Namespace="kube-system" Pod="coredns-7d764666f9-v8blg" 
WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:47.450 [INFO][4794] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" HandleID="k8s-pod-network.3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:47.570 [INFO][4794] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" HandleID="k8s-pod-network.3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001201d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-130", "pod":"coredns-7d764666f9-v8blg", "timestamp":"2026-03-14 00:14:47.441422584 +0000 UTC"}, Hostname:"ip-172-31-18-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40006c4000)} Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:47.571 [INFO][4794] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:48.001 [INFO][4794] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:48.001 [INFO][4794] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-130' Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:48.068 [INFO][4794] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" host="ip-172-31-18-130" Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:48.121 [INFO][4794] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-130" Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:48.166 [INFO][4794] ipam/ipam.go 526: Trying affinity for 192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:48.187 [INFO][4794] ipam/ipam.go 160: Attempting to load block cidr=192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:48.210 [INFO][4794] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:48.210 [INFO][4794] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" host="ip-172-31-18-130" Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:48.218 [INFO][4794] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:48.261 [INFO][4794] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" host="ip-172-31-18-130" Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:48.300 [INFO][4794] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.44.67/26] block=192.168.44.64/26 
handle="k8s-pod-network.3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" host="ip-172-31-18-130" Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:48.300 [INFO][4794] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.44.67/26] handle="k8s-pod-network.3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" host="ip-172-31-18-130" Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:48.302 [INFO][4794] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:48.577313 containerd[1934]: 2026-03-14 00:14:48.302 [INFO][4794] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.44.67/26] IPv6=[] ContainerID="3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" HandleID="k8s-pod-network.3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" Mar 14 00:14:48.578901 containerd[1934]: 2026-03-14 00:14:48.343 [INFO][4678] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" Namespace="kube-system" Pod="coredns-7d764666f9-v8blg" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"d4e6f04d-05da-499d-9d04-d1cc54b42952", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"", Pod:"coredns-7d764666f9-v8blg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a1e0477228", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:48.578901 containerd[1934]: 2026-03-14 00:14:48.343 [INFO][4678] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.67/32] ContainerID="3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" Namespace="kube-system" Pod="coredns-7d764666f9-v8blg" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" Mar 14 00:14:48.578901 containerd[1934]: 2026-03-14 00:14:48.343 [INFO][4678] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a1e0477228 ContainerID="3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" Namespace="kube-system" Pod="coredns-7d764666f9-v8blg" 
WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" Mar 14 00:14:48.578901 containerd[1934]: 2026-03-14 00:14:48.409 [INFO][4678] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" Namespace="kube-system" Pod="coredns-7d764666f9-v8blg" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" Mar 14 00:14:48.578901 containerd[1934]: 2026-03-14 00:14:48.423 [INFO][4678] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" Namespace="kube-system" Pod="coredns-7d764666f9-v8blg" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"d4e6f04d-05da-499d-9d04-d1cc54b42952", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d", Pod:"coredns-7d764666f9-v8blg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a1e0477228", MAC:"aa:cc:55:62:0f:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:48.578901 containerd[1934]: 2026-03-14 00:14:48.574 [INFO][4678] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d" Namespace="kube-system" Pod="coredns-7d764666f9-v8blg" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" Mar 14 00:14:48.631465 containerd[1934]: time="2026-03-14T00:14:48.629232018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:48.631465 containerd[1934]: time="2026-03-14T00:14:48.629398986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:48.631465 containerd[1934]: time="2026-03-14T00:14:48.629465502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:48.631465 containerd[1934]: time="2026-03-14T00:14:48.629675838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:48.684804 systemd[1]: Started cri-containerd-3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d.scope - libcontainer container 3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d. Mar 14 00:14:48.732898 systemd-networkd[1852]: cali75893884bdc: Link UP Mar 14 00:14:48.737189 systemd-networkd[1852]: cali75893884bdc: Gained carrier Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:47.054 [ERROR][4705] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:47.153 [INFO][4705] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0 calico-apiserver-696c5cfc7f- calico-system 9dac252b-d2f0-4f51-8a32-3e7e6f3e4118 943 0 2026-03-14 00:14:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:696c5cfc7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-130 calico-apiserver-696c5cfc7f-ng7xc eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali75893884bdc [] [] }} ContainerID="6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" Namespace="calico-system" Pod="calico-apiserver-696c5cfc7f-ng7xc" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-" Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:47.157 [INFO][4705] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" Namespace="calico-system" Pod="calico-apiserver-696c5cfc7f-ng7xc" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:47.498 [INFO][4801] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" HandleID="k8s-pod-network.6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:47.614 [INFO][4801] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" HandleID="k8s-pod-network.6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034fb40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-130", "pod":"calico-apiserver-696c5cfc7f-ng7xc", "timestamp":"2026-03-14 00:14:47.498503956 +0000 UTC"}, Hostname:"ip-172-31-18-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40004ff1e0)} Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:47.614 [INFO][4801] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:48.303 [INFO][4801] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:48.303 [INFO][4801] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-130' Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:48.335 [INFO][4801] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" host="ip-172-31-18-130" Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:48.408 [INFO][4801] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-130" Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:48.455 [INFO][4801] ipam/ipam.go 526: Trying affinity for 192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:48.492 [INFO][4801] ipam/ipam.go 160: Attempting to load block cidr=192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:48.507 [INFO][4801] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:48.508 [INFO][4801] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" host="ip-172-31-18-130" Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:48.563 [INFO][4801] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375 Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:48.598 [INFO][4801] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" host="ip-172-31-18-130" Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:48.633 [INFO][4801] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.44.68/26] block=192.168.44.64/26 
handle="k8s-pod-network.6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" host="ip-172-31-18-130" Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:48.635 [INFO][4801] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.44.68/26] handle="k8s-pod-network.6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" host="ip-172-31-18-130" Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:48.638 [INFO][4801] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:48.788759 containerd[1934]: 2026-03-14 00:14:48.638 [INFO][4801] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.44.68/26] IPv6=[] ContainerID="6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" HandleID="k8s-pod-network.6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" Mar 14 00:14:48.792213 containerd[1934]: 2026-03-14 00:14:48.693 [INFO][4705] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" Namespace="calico-system" Pod="calico-apiserver-696c5cfc7f-ng7xc" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0", GenerateName:"calico-apiserver-696c5cfc7f-", Namespace:"calico-system", SelfLink:"", UID:"9dac252b-d2f0-4f51-8a32-3e7e6f3e4118", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"696c5cfc7f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"", Pod:"calico-apiserver-696c5cfc7f-ng7xc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali75893884bdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:48.792213 containerd[1934]: 2026-03-14 00:14:48.695 [INFO][4705] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.68/32] ContainerID="6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" Namespace="calico-system" Pod="calico-apiserver-696c5cfc7f-ng7xc" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" Mar 14 00:14:48.792213 containerd[1934]: 2026-03-14 00:14:48.695 [INFO][4705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75893884bdc ContainerID="6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" Namespace="calico-system" Pod="calico-apiserver-696c5cfc7f-ng7xc" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" Mar 14 00:14:48.792213 containerd[1934]: 2026-03-14 00:14:48.741 [INFO][4705] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" Namespace="calico-system" Pod="calico-apiserver-696c5cfc7f-ng7xc" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" Mar 14 00:14:48.792213 containerd[1934]: 2026-03-14 00:14:48.742 [INFO][4705] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" Namespace="calico-system" Pod="calico-apiserver-696c5cfc7f-ng7xc" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0", GenerateName:"calico-apiserver-696c5cfc7f-", Namespace:"calico-system", SelfLink:"", UID:"9dac252b-d2f0-4f51-8a32-3e7e6f3e4118", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"696c5cfc7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375", Pod:"calico-apiserver-696c5cfc7f-ng7xc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali75893884bdc", MAC:"d2:e7:84:5d:1a:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:48.792213 containerd[1934]: 2026-03-14 00:14:48.777 [INFO][4705] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375" Namespace="calico-system" Pod="calico-apiserver-696c5cfc7f-ng7xc" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" Mar 14 00:14:48.873164 systemd-networkd[1852]: cali64e12ef821e: Link UP Mar 14 00:14:48.873651 systemd-networkd[1852]: cali64e12ef821e: Gained carrier Mar 14 00:14:48.932090 containerd[1934]: time="2026-03-14T00:14:48.931994647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-v8blg,Uid:d4e6f04d-05da-499d-9d04-d1cc54b42952,Namespace:kube-system,Attempt:1,} returns sandbox id \"3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d\"" Mar 14 00:14:48.960983 containerd[1934]: time="2026-03-14T00:14:48.960806995Z" level=info msg="CreateContainer within sandbox \"3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:14:48.995954 containerd[1934]: time="2026-03-14T00:14:48.995853488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-nbdj4,Uid:c8686320-fc9c-4a3f-a2c7-dfa460638fd7,Namespace:calico-system,Attempt:1,} returns sandbox id \"4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab\"" Mar 14 00:14:49.017833 containerd[1934]: time="2026-03-14T00:14:49.015332656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 14 00:14:49.034456 containerd[1934]: time="2026-03-14T00:14:49.033598624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:49.034456 containerd[1934]: time="2026-03-14T00:14:49.033682660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:49.034456 containerd[1934]: time="2026-03-14T00:14:49.033708400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:47.308 [ERROR][4724] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:47.581 [INFO][4724] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0 coredns-7d764666f9- kube-system bc321aa2-bc1d-4b5f-9fdf-44eb597d2609 942 0 2026-03-14 00:14:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-130 coredns-7d764666f9-fzzfp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali64e12ef821e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" Namespace="kube-system" Pod="coredns-7d764666f9-fzzfp" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-" Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:47.581 [INFO][4724] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" Namespace="kube-system" Pod="coredns-7d764666f9-fzzfp" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:47.938 [INFO][4837] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" HandleID="k8s-pod-network.89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:48.063 [INFO][4837] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" HandleID="k8s-pod-network.89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400037bae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-130", "pod":"coredns-7d764666f9-fzzfp", "timestamp":"2026-03-14 00:14:47.938498478 +0000 UTC"}, Hostname:"ip-172-31-18-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x4000314420)} Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:48.063 [INFO][4837] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:48.643 [INFO][4837] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:48.644 [INFO][4837] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-130' Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:48.661 [INFO][4837] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" host="ip-172-31-18-130" Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:48.688 [INFO][4837] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-130" Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:48.725 [INFO][4837] ipam/ipam.go 526: Trying affinity for 192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:48.743 [INFO][4837] ipam/ipam.go 160: Attempting to load block cidr=192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:48.753 [INFO][4837] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:48.754 [INFO][4837] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" host="ip-172-31-18-130" Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:48.761 [INFO][4837] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:48.786 [INFO][4837] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" host="ip-172-31-18-130" Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:48.819 [INFO][4837] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.44.69/26] block=192.168.44.64/26 
handle="k8s-pod-network.89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" host="ip-172-31-18-130" Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:48.819 [INFO][4837] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.44.69/26] handle="k8s-pod-network.89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" host="ip-172-31-18-130" Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:48.819 [INFO][4837] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:49.038617 containerd[1934]: 2026-03-14 00:14:48.819 [INFO][4837] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.44.69/26] IPv6=[] ContainerID="89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" HandleID="k8s-pod-network.89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" Mar 14 00:14:49.042470 containerd[1934]: 2026-03-14 00:14:48.829 [INFO][4724] cni-plugin/k8s.go 418: Populated endpoint ContainerID="89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" Namespace="kube-system" Pod="coredns-7d764666f9-fzzfp" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"bc321aa2-bc1d-4b5f-9fdf-44eb597d2609", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"", Pod:"coredns-7d764666f9-fzzfp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64e12ef821e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:49.042470 containerd[1934]: 2026-03-14 00:14:48.829 [INFO][4724] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.69/32] ContainerID="89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" Namespace="kube-system" Pod="coredns-7d764666f9-fzzfp" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" Mar 14 00:14:49.042470 containerd[1934]: 2026-03-14 00:14:48.833 [INFO][4724] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64e12ef821e ContainerID="89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" Namespace="kube-system" Pod="coredns-7d764666f9-fzzfp" 
WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" Mar 14 00:14:49.042470 containerd[1934]: 2026-03-14 00:14:48.900 [INFO][4724] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" Namespace="kube-system" Pod="coredns-7d764666f9-fzzfp" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" Mar 14 00:14:49.042470 containerd[1934]: 2026-03-14 00:14:48.914 [INFO][4724] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" Namespace="kube-system" Pod="coredns-7d764666f9-fzzfp" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"bc321aa2-bc1d-4b5f-9fdf-44eb597d2609", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee", Pod:"coredns-7d764666f9-fzzfp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64e12ef821e", MAC:"f6:ea:01:4d:be:2a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:49.042470 containerd[1934]: 2026-03-14 00:14:49.008 [INFO][4724] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee" Namespace="kube-system" Pod="coredns-7d764666f9-fzzfp" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" Mar 14 00:14:49.043072 containerd[1934]: time="2026-03-14T00:14:49.033862552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:49.126825 containerd[1934]: time="2026-03-14T00:14:49.126314452Z" level=info msg="CreateContainer within sandbox \"3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb82b749901c0b05a508a704475f3fd9aa812a1b7a8923f543451d54f410b192\"" Mar 14 00:14:49.129461 containerd[1934]: time="2026-03-14T00:14:49.129046072Z" level=info msg="StartContainer for \"bb82b749901c0b05a508a704475f3fd9aa812a1b7a8923f543451d54f410b192\"" Mar 14 00:14:49.165256 systemd[1]: Started cri-containerd-6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375.scope - libcontainer container 6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375. Mar 14 00:14:49.237635 systemd-networkd[1852]: cali2f12046c56c: Link UP Mar 14 00:14:49.242973 systemd-networkd[1852]: cali2f12046c56c: Gained carrier Mar 14 00:14:49.334806 systemd[1]: run-containerd-runc-k8s.io-4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab-runc.vNM8pY.mount: Deactivated successfully. 
Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:47.407 [ERROR][4764] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:47.601 [INFO][4764] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0 calico-kube-controllers-646cdb9884- calico-system b2f14ff7-37b0-4a9b-9308-67b07dd8dd39 946 0 2026-03-14 00:14:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:646cdb9884 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-130 calico-kube-controllers-646cdb9884-pj2xg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2f12046c56c [] [] }} ContainerID="c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" Namespace="calico-system" Pod="calico-kube-controllers-646cdb9884-pj2xg" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-" Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:47.601 [INFO][4764] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" Namespace="calico-system" Pod="calico-kube-controllers-646cdb9884-pj2xg" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:47.903 [INFO][4840] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" 
HandleID="k8s-pod-network.c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" Workload="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:48.064 [INFO][4840] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" HandleID="k8s-pod-network.c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" Workload="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d850), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-130", "pod":"calico-kube-controllers-646cdb9884-pj2xg", "timestamp":"2026-03-14 00:14:47.903119046 +0000 UTC"}, Hostname:"ip-172-31-18-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40002e8580)} Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:48.064 [INFO][4840] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:48.823 [INFO][4840] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:48.823 [INFO][4840] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-130' Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:48.836 [INFO][4840] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" host="ip-172-31-18-130" Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:48.898 [INFO][4840] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-130" Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:48.940 [INFO][4840] ipam/ipam.go 526: Trying affinity for 192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:48.967 [INFO][4840] ipam/ipam.go 160: Attempting to load block cidr=192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:48.980 [INFO][4840] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:48.982 [INFO][4840] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" host="ip-172-31-18-130" Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:49.010 [INFO][4840] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2 Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:49.045 [INFO][4840] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" host="ip-172-31-18-130" Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:49.082 [INFO][4840] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.44.70/26] block=192.168.44.64/26 
handle="k8s-pod-network.c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" host="ip-172-31-18-130" Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:49.082 [INFO][4840] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.44.70/26] handle="k8s-pod-network.c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" host="ip-172-31-18-130" Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:49.088 [INFO][4840] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:49.354047 containerd[1934]: 2026-03-14 00:14:49.090 [INFO][4840] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.44.70/26] IPv6=[] ContainerID="c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" HandleID="k8s-pod-network.c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" Workload="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" Mar 14 00:14:49.355348 containerd[1934]: 2026-03-14 00:14:49.151 [INFO][4764] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" Namespace="calico-system" Pod="calico-kube-controllers-646cdb9884-pj2xg" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0", GenerateName:"calico-kube-controllers-646cdb9884-", Namespace:"calico-system", SelfLink:"", UID:"b2f14ff7-37b0-4a9b-9308-67b07dd8dd39", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"646cdb9884", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"", Pod:"calico-kube-controllers-646cdb9884-pj2xg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f12046c56c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:49.355348 containerd[1934]: 2026-03-14 00:14:49.151 [INFO][4764] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.70/32] ContainerID="c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" Namespace="calico-system" Pod="calico-kube-controllers-646cdb9884-pj2xg" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" Mar 14 00:14:49.355348 containerd[1934]: 2026-03-14 00:14:49.151 [INFO][4764] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f12046c56c ContainerID="c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" Namespace="calico-system" Pod="calico-kube-controllers-646cdb9884-pj2xg" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" Mar 14 00:14:49.355348 containerd[1934]: 2026-03-14 00:14:49.251 [INFO][4764] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" Namespace="calico-system" Pod="calico-kube-controllers-646cdb9884-pj2xg" 
WorkloadEndpoint="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" Mar 14 00:14:49.355348 containerd[1934]: 2026-03-14 00:14:49.265 [INFO][4764] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" Namespace="calico-system" Pod="calico-kube-controllers-646cdb9884-pj2xg" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0", GenerateName:"calico-kube-controllers-646cdb9884-", Namespace:"calico-system", SelfLink:"", UID:"b2f14ff7-37b0-4a9b-9308-67b07dd8dd39", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"646cdb9884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2", Pod:"calico-kube-controllers-646cdb9884-pj2xg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f12046c56c", 
MAC:"be:8e:f4:87:f8:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:49.355348 containerd[1934]: 2026-03-14 00:14:49.310 [INFO][4764] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2" Namespace="calico-system" Pod="calico-kube-controllers-646cdb9884-pj2xg" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" Mar 14 00:14:49.381803 systemd-networkd[1852]: calif07464b8a09: Gained IPv6LL Mar 14 00:14:49.400499 containerd[1934]: time="2026-03-14T00:14:49.394781333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:49.400499 containerd[1934]: time="2026-03-14T00:14:49.394882841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:49.400499 containerd[1934]: time="2026-03-14T00:14:49.394940633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:49.400499 containerd[1934]: time="2026-03-14T00:14:49.397571670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:49.406610 containerd[1934]: time="2026-03-14T00:14:49.406502010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-696c5cfc7f-hzr5w,Uid:89705421-5074-4ee1-8f8a-b24f0bbde701,Namespace:calico-system,Attempt:1,} returns sandbox id \"dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda\"" Mar 14 00:14:49.433829 kubelet[3153]: I0314 00:14:49.433753 3153 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:14:49.493066 systemd-networkd[1852]: cali28a1aa8fb0b: Link UP Mar 14 00:14:49.500538 systemd-networkd[1852]: cali28a1aa8fb0b: Gained carrier Mar 14 00:14:49.559715 containerd[1934]: time="2026-03-14T00:14:49.556859118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:49.559715 containerd[1934]: time="2026-03-14T00:14:49.556998894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:49.559715 containerd[1934]: time="2026-03-14T00:14:49.557037006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:49.559715 containerd[1934]: time="2026-03-14T00:14:49.557207898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:47.342 [ERROR][4746] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:47.601 [INFO][4746] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0 csi-node-driver- calico-system 3bae5412-2e2d-4fc4-8221-ace1b28b2f13 949 0 2026-03-14 00:14:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-130 csi-node-driver-45kq2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali28a1aa8fb0b [] [] }} ContainerID="53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" Namespace="calico-system" Pod="csi-node-driver-45kq2" WorkloadEndpoint="ip--172--31--18--130-k8s-csi--node--driver--45kq2-" Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:47.601 [INFO][4746] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" Namespace="calico-system" Pod="csi-node-driver-45kq2" WorkloadEndpoint="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:47.937 [INFO][4841] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" HandleID="k8s-pod-network.53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" 
Workload="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:48.105 [INFO][4841] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" HandleID="k8s-pod-network.53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" Workload="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000418ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-130", "pod":"csi-node-driver-45kq2", "timestamp":"2026-03-14 00:14:47.937109334 +0000 UTC"}, Hostname:"ip-172-31-18-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40003fc000)} Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:48.105 [INFO][4841] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:49.087 [INFO][4841] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:49.087 [INFO][4841] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-130' Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:49.104 [INFO][4841] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" host="ip-172-31-18-130" Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:49.175 [INFO][4841] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-130" Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:49.255 [INFO][4841] ipam/ipam.go 526: Trying affinity for 192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:49.275 [INFO][4841] ipam/ipam.go 160: Attempting to load block cidr=192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:49.302 [INFO][4841] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:49.303 [INFO][4841] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" host="ip-172-31-18-130" Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:49.319 [INFO][4841] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710 Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:49.348 [INFO][4841] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" host="ip-172-31-18-130" Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:49.395 [INFO][4841] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.44.71/26] block=192.168.44.64/26 
handle="k8s-pod-network.53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" host="ip-172-31-18-130" Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:49.395 [INFO][4841] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.44.71/26] handle="k8s-pod-network.53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" host="ip-172-31-18-130" Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:49.395 [INFO][4841] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:49.633825 containerd[1934]: 2026-03-14 00:14:49.400 [INFO][4841] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.44.71/26] IPv6=[] ContainerID="53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" HandleID="k8s-pod-network.53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" Workload="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" Mar 14 00:14:49.635086 containerd[1934]: 2026-03-14 00:14:49.459 [INFO][4746] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" Namespace="calico-system" Pod="csi-node-driver-45kq2" WorkloadEndpoint="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3bae5412-2e2d-4fc4-8221-ace1b28b2f13", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"", Pod:"csi-node-driver-45kq2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28a1aa8fb0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:49.635086 containerd[1934]: 2026-03-14 00:14:49.460 [INFO][4746] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.71/32] ContainerID="53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" Namespace="calico-system" Pod="csi-node-driver-45kq2" WorkloadEndpoint="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" Mar 14 00:14:49.635086 containerd[1934]: 2026-03-14 00:14:49.460 [INFO][4746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28a1aa8fb0b ContainerID="53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" Namespace="calico-system" Pod="csi-node-driver-45kq2" WorkloadEndpoint="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" Mar 14 00:14:49.635086 containerd[1934]: 2026-03-14 00:14:49.515 [INFO][4746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" Namespace="calico-system" Pod="csi-node-driver-45kq2" WorkloadEndpoint="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" Mar 14 00:14:49.635086 containerd[1934]: 2026-03-14 00:14:49.520 [INFO][4746] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" Namespace="calico-system" Pod="csi-node-driver-45kq2" WorkloadEndpoint="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3bae5412-2e2d-4fc4-8221-ace1b28b2f13", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710", Pod:"csi-node-driver-45kq2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28a1aa8fb0b", MAC:"ce:6c:89:87:b0:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:49.635086 containerd[1934]: 2026-03-14 00:14:49.609 [INFO][4746] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710" Namespace="calico-system" Pod="csi-node-driver-45kq2" WorkloadEndpoint="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" Mar 14 00:14:49.680811 systemd-networkd[1852]: calie87a09655f2: Link UP Mar 14 00:14:49.682767 systemd-networkd[1852]: calie87a09655f2: Gained carrier Mar 14 00:14:49.713701 systemd[1]: Started cri-containerd-bb82b749901c0b05a508a704475f3fd9aa812a1b7a8923f543451d54f410b192.scope - libcontainer container bb82b749901c0b05a508a704475f3fd9aa812a1b7a8923f543451d54f410b192. Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:48.415 [ERROR][4890] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:48.604 [INFO][4890] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--130-k8s-whisker--667f9f4987--qrmz9-eth0 whisker-667f9f4987- calico-system 3b987674-767d-428c-b72b-e820b77114cf 968 0 2026-03-14 00:14:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:667f9f4987 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-130 whisker-667f9f4987-qrmz9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie87a09655f2 [] [] }} ContainerID="9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" Namespace="calico-system" Pod="whisker-667f9f4987-qrmz9" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--667f9f4987--qrmz9-" Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:48.606 [INFO][4890] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" Namespace="calico-system" 
Pod="whisker-667f9f4987-qrmz9" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--667f9f4987--qrmz9-eth0" Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:48.912 [INFO][5016] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" HandleID="k8s-pod-network.9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" Workload="ip--172--31--18--130-k8s-whisker--667f9f4987--qrmz9-eth0" Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:48.971 [INFO][5016] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" HandleID="k8s-pod-network.9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" Workload="ip--172--31--18--130-k8s-whisker--667f9f4987--qrmz9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000368630), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-130", "pod":"whisker-667f9f4987-qrmz9", "timestamp":"2026-03-14 00:14:48.912724015 +0000 UTC"}, Hostname:"ip-172-31-18-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0x40000c1080)} Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:48.971 [INFO][5016] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:49.395 [INFO][5016] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:49.395 [INFO][5016] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-130' Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:49.402 [INFO][5016] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" host="ip-172-31-18-130" Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:49.451 [INFO][5016] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-18-130" Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:49.486 [INFO][5016] ipam/ipam.go 526: Trying affinity for 192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:49.515 [INFO][5016] ipam/ipam.go 160: Attempting to load block cidr=192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:49.607 [INFO][5016] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ip-172-31-18-130" Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:49.607 [INFO][5016] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" host="ip-172-31-18-130" Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:49.627 [INFO][5016] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28 Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:49.642 [INFO][5016] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" host="ip-172-31-18-130" Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:49.668 [INFO][5016] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.44.72/26] block=192.168.44.64/26 
handle="k8s-pod-network.9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" host="ip-172-31-18-130" Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:49.668 [INFO][5016] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.44.72/26] handle="k8s-pod-network.9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" host="ip-172-31-18-130" Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:49.669 [INFO][5016] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:49.771933 containerd[1934]: 2026-03-14 00:14:49.669 [INFO][5016] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.44.72/26] IPv6=[] ContainerID="9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" HandleID="k8s-pod-network.9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" Workload="ip--172--31--18--130-k8s-whisker--667f9f4987--qrmz9-eth0" Mar 14 00:14:49.780649 containerd[1934]: 2026-03-14 00:14:49.677 [INFO][4890] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" Namespace="calico-system" Pod="whisker-667f9f4987-qrmz9" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--667f9f4987--qrmz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-whisker--667f9f4987--qrmz9-eth0", GenerateName:"whisker-667f9f4987-", Namespace:"calico-system", SelfLink:"", UID:"3b987674-767d-428c-b72b-e820b77114cf", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"667f9f4987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"", Pod:"whisker-667f9f4987-qrmz9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.44.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie87a09655f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:49.780649 containerd[1934]: 2026-03-14 00:14:49.677 [INFO][4890] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.72/32] ContainerID="9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" Namespace="calico-system" Pod="whisker-667f9f4987-qrmz9" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--667f9f4987--qrmz9-eth0" Mar 14 00:14:49.780649 containerd[1934]: 2026-03-14 00:14:49.677 [INFO][4890] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie87a09655f2 ContainerID="9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" Namespace="calico-system" Pod="whisker-667f9f4987-qrmz9" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--667f9f4987--qrmz9-eth0" Mar 14 00:14:49.780649 containerd[1934]: 2026-03-14 00:14:49.683 [INFO][4890] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" Namespace="calico-system" Pod="whisker-667f9f4987-qrmz9" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--667f9f4987--qrmz9-eth0" Mar 14 00:14:49.780649 containerd[1934]: 2026-03-14 00:14:49.684 [INFO][4890] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" 
Namespace="calico-system" Pod="whisker-667f9f4987-qrmz9" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--667f9f4987--qrmz9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-whisker--667f9f4987--qrmz9-eth0", GenerateName:"whisker-667f9f4987-", Namespace:"calico-system", SelfLink:"", UID:"3b987674-767d-428c-b72b-e820b77114cf", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"667f9f4987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28", Pod:"whisker-667f9f4987-qrmz9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.44.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie87a09655f2", MAC:"52:78:12:0f:4b:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:49.780649 containerd[1934]: 2026-03-14 00:14:49.753 [INFO][4890] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28" Namespace="calico-system" Pod="whisker-667f9f4987-qrmz9" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--667f9f4987--qrmz9-eth0" Mar 14 00:14:49.774696 
systemd[1]: Started cri-containerd-89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee.scope - libcontainer container 89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee. Mar 14 00:14:49.781943 systemd[1]: Started cri-containerd-c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2.scope - libcontainer container c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2. Mar 14 00:14:49.832581 systemd-networkd[1852]: cali2ea8ce6114e: Gained IPv6LL Mar 14 00:14:49.841900 containerd[1934]: time="2026-03-14T00:14:49.841567064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-696c5cfc7f-ng7xc,Uid:9dac252b-d2f0-4f51-8a32-3e7e6f3e4118,Namespace:calico-system,Attempt:1,} returns sandbox id \"6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375\"" Mar 14 00:14:49.871457 containerd[1934]: time="2026-03-14T00:14:49.869581112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:49.871457 containerd[1934]: time="2026-03-14T00:14:49.869671448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:49.871457 containerd[1934]: time="2026-03-14T00:14:49.869696960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:49.871457 containerd[1934]: time="2026-03-14T00:14:49.869842856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:49.902864 containerd[1934]: time="2026-03-14T00:14:49.902637224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:14:49.903715 containerd[1934]: time="2026-03-14T00:14:49.902792468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:14:49.903715 containerd[1934]: time="2026-03-14T00:14:49.903137660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:49.903715 containerd[1934]: time="2026-03-14T00:14:49.903408788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:14:49.985582 systemd[1]: Started cri-containerd-53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710.scope - libcontainer container 53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710. Mar 14 00:14:50.000580 containerd[1934]: time="2026-03-14T00:14:49.998453672Z" level=info msg="StartContainer for \"bb82b749901c0b05a508a704475f3fd9aa812a1b7a8923f543451d54f410b192\" returns successfully" Mar 14 00:14:50.053755 systemd[1]: Started cri-containerd-9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28.scope - libcontainer container 9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28. 
Mar 14 00:14:50.070476 containerd[1934]: time="2026-03-14T00:14:50.069187133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-fzzfp,Uid:bc321aa2-bc1d-4b5f-9fdf-44eb597d2609,Namespace:kube-system,Attempt:1,} returns sandbox id \"89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee\"" Mar 14 00:14:50.082830 containerd[1934]: time="2026-03-14T00:14:50.082755617Z" level=info msg="CreateContainer within sandbox \"89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 14 00:14:50.162654 containerd[1934]: time="2026-03-14T00:14:50.162379541Z" level=info msg="CreateContainer within sandbox \"89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ecf5bf7bd4224fbfb219fc5498f4f42eae5c5064a15fae70b13e84578a6204b\"" Mar 14 00:14:50.172655 containerd[1934]: time="2026-03-14T00:14:50.170092409Z" level=info msg="StartContainer for \"5ecf5bf7bd4224fbfb219fc5498f4f42eae5c5064a15fae70b13e84578a6204b\"" Mar 14 00:14:50.214648 systemd-networkd[1852]: cali3a1e0477228: Gained IPv6LL Mar 14 00:14:50.264226 systemd[1]: Started cri-containerd-5ecf5bf7bd4224fbfb219fc5498f4f42eae5c5064a15fae70b13e84578a6204b.scope - libcontainer container 5ecf5bf7bd4224fbfb219fc5498f4f42eae5c5064a15fae70b13e84578a6204b. 
Mar 14 00:14:50.431473 kubelet[3153]: I0314 00:14:50.428415 3153 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-v8blg" podStartSLOduration=48.428397043 podStartE2EDuration="48.428397043s" podCreationTimestamp="2026-03-14 00:14:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:50.37219605 +0000 UTC m=+52.966376484" watchObservedRunningTime="2026-03-14 00:14:50.428397043 +0000 UTC m=+53.022577381" Mar 14 00:14:50.515077 containerd[1934]: time="2026-03-14T00:14:50.514713391Z" level=info msg="StartContainer for \"5ecf5bf7bd4224fbfb219fc5498f4f42eae5c5064a15fae70b13e84578a6204b\" returns successfully" Mar 14 00:14:50.516744 containerd[1934]: time="2026-03-14T00:14:50.516395335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-45kq2,Uid:3bae5412-2e2d-4fc4-8221-ace1b28b2f13,Namespace:calico-system,Attempt:1,} returns sandbox id \"53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710\"" Mar 14 00:14:50.534731 containerd[1934]: time="2026-03-14T00:14:50.534305695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-646cdb9884-pj2xg,Uid:b2f14ff7-37b0-4a9b-9308-67b07dd8dd39,Namespace:calico-system,Attempt:1,} returns sandbox id \"c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2\"" Mar 14 00:14:50.597603 systemd-networkd[1852]: cali28a1aa8fb0b: Gained IPv6LL Mar 14 00:14:50.599558 systemd-networkd[1852]: cali75893884bdc: Gained IPv6LL Mar 14 00:14:50.661626 systemd-networkd[1852]: cali64e12ef821e: Gained IPv6LL Mar 14 00:14:50.800902 containerd[1934]: time="2026-03-14T00:14:50.800760692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-667f9f4987-qrmz9,Uid:3b987674-767d-428c-b72b-e820b77114cf,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28\"" Mar 14 00:14:50.853602 systemd-networkd[1852]: calie87a09655f2: Gained IPv6LL Mar 14 00:14:51.135467 kernel: calico-node[4706]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 14 00:14:51.301686 systemd-networkd[1852]: cali2f12046c56c: Gained IPv6LL Mar 14 00:14:51.368597 kubelet[3153]: I0314 00:14:51.367565 3153 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-fzzfp" podStartSLOduration=49.367538347 podStartE2EDuration="49.367538347s" podCreationTimestamp="2026-03-14 00:14:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:14:51.366134911 +0000 UTC m=+53.960315285" watchObservedRunningTime="2026-03-14 00:14:51.367538347 +0000 UTC m=+53.961718685" Mar 14 00:14:52.215321 systemd-networkd[1852]: vxlan.calico: Link UP Mar 14 00:14:52.215343 systemd-networkd[1852]: vxlan.calico: Gained carrier Mar 14 00:14:53.127979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1423626303.mount: Deactivated successfully. 
Mar 14 00:14:53.731631 containerd[1934]: time="2026-03-14T00:14:53.731551415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:53.734762 containerd[1934]: time="2026-03-14T00:14:53.734691611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=51613980" Mar 14 00:14:53.737695 containerd[1934]: time="2026-03-14T00:14:53.737635967Z" level=info msg="ImageCreate event name:\"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:53.745563 containerd[1934]: time="2026-03-14T00:14:53.744029159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:53.746073 containerd[1934]: time="2026-03-14T00:14:53.746021447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"51613826\" in 4.728164087s" Mar 14 00:14:53.746218 containerd[1934]: time="2026-03-14T00:14:53.746186891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:5274e98e9b12badfa0d6f106814630212e6de7abb8deaf896423b13e6ebdb41b\"" Mar 14 00:14:53.750457 containerd[1934]: time="2026-03-14T00:14:53.750353279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:14:53.758389 containerd[1934]: time="2026-03-14T00:14:53.758311871Z" level=info msg="CreateContainer within sandbox 
\"4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 14 00:14:53.875950 containerd[1934]: time="2026-03-14T00:14:53.875872584Z" level=info msg="CreateContainer within sandbox \"4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6502df5aedc082732a5201569d61324defeb90c36385a931da25b4937d20c9f6\"" Mar 14 00:14:53.877653 containerd[1934]: time="2026-03-14T00:14:53.877598112Z" level=info msg="StartContainer for \"6502df5aedc082732a5201569d61324defeb90c36385a931da25b4937d20c9f6\"" Mar 14 00:14:54.009686 systemd[1]: Started cri-containerd-6502df5aedc082732a5201569d61324defeb90c36385a931da25b4937d20c9f6.scope - libcontainer container 6502df5aedc082732a5201569d61324defeb90c36385a931da25b4937d20c9f6. Mar 14 00:14:54.055928 systemd-networkd[1852]: vxlan.calico: Gained IPv6LL Mar 14 00:14:54.129794 systemd[1]: run-containerd-runc-k8s.io-6502df5aedc082732a5201569d61324defeb90c36385a931da25b4937d20c9f6-runc.aEJRd2.mount: Deactivated successfully. 
Mar 14 00:14:54.142481 containerd[1934]: time="2026-03-14T00:14:54.141870165Z" level=info msg="StartContainer for \"6502df5aedc082732a5201569d61324defeb90c36385a931da25b4937d20c9f6\" returns successfully" Mar 14 00:14:54.400563 kubelet[3153]: I0314 00:14:54.400373 3153 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-nbdj4" podStartSLOduration=26.659059939 podStartE2EDuration="31.40035775s" podCreationTimestamp="2026-03-14 00:14:23 +0000 UTC" firstStartedPulling="2026-03-14 00:14:49.0072313 +0000 UTC m=+51.601411626" lastFinishedPulling="2026-03-14 00:14:53.748529099 +0000 UTC m=+56.342709437" observedRunningTime="2026-03-14 00:14:54.398424526 +0000 UTC m=+56.992604888" watchObservedRunningTime="2026-03-14 00:14:54.40035775 +0000 UTC m=+56.994538088" Mar 14 00:14:54.428006 systemd[1]: run-containerd-runc-k8s.io-6502df5aedc082732a5201569d61324defeb90c36385a931da25b4937d20c9f6-runc.4pXfOu.mount: Deactivated successfully. Mar 14 00:14:56.462244 containerd[1934]: time="2026-03-14T00:14:56.462175813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:56.464029 containerd[1934]: time="2026-03-14T00:14:56.463953265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=45552315" Mar 14 00:14:56.465133 containerd[1934]: time="2026-03-14T00:14:56.464995621Z" level=info msg="ImageCreate event name:\"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:56.469937 containerd[1934]: time="2026-03-14T00:14:56.469849513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:56.472585 containerd[1934]: 
time="2026-03-14T00:14:56.472360969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 2.721931694s" Mar 14 00:14:56.472585 containerd[1934]: time="2026-03-14T00:14:56.472421221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 14 00:14:56.475192 containerd[1934]: time="2026-03-14T00:14:56.474795505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 14 00:14:56.480298 containerd[1934]: time="2026-03-14T00:14:56.479660581Z" level=info msg="CreateContainer within sandbox \"dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:14:56.508141 containerd[1934]: time="2026-03-14T00:14:56.507390757Z" level=info msg="CreateContainer within sandbox \"dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"42a83f312fd1dde3ecb92f3e5fc6cf2a274268c1dc01d784772343d0994a1ca9\"" Mar 14 00:14:56.514019 containerd[1934]: time="2026-03-14T00:14:56.510839641Z" level=info msg="StartContainer for \"42a83f312fd1dde3ecb92f3e5fc6cf2a274268c1dc01d784772343d0994a1ca9\"" Mar 14 00:14:56.586752 systemd[1]: Started cri-containerd-42a83f312fd1dde3ecb92f3e5fc6cf2a274268c1dc01d784772343d0994a1ca9.scope - libcontainer container 42a83f312fd1dde3ecb92f3e5fc6cf2a274268c1dc01d784772343d0994a1ca9. 
Mar 14 00:14:56.631863 ntpd[1904]: Listen normally on 7 vxlan.calico 192.168.44.64:123 Mar 14 00:14:56.632007 ntpd[1904]: Listen normally on 8 calif07464b8a09 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 14 00:14:56.632510 ntpd[1904]: 14 Mar 00:14:56 ntpd[1904]: Listen normally on 7 vxlan.calico 192.168.44.64:123 Mar 14 00:14:56.632510 ntpd[1904]: 14 Mar 00:14:56 ntpd[1904]: Listen normally on 8 calif07464b8a09 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 14 00:14:56.632510 ntpd[1904]: 14 Mar 00:14:56 ntpd[1904]: Listen normally on 9 cali2ea8ce6114e [fe80::ecee:eeff:feee:eeee%5]:123 Mar 14 00:14:56.632510 ntpd[1904]: 14 Mar 00:14:56 ntpd[1904]: Listen normally on 10 cali3a1e0477228 [fe80::ecee:eeff:feee:eeee%6]:123 Mar 14 00:14:56.632510 ntpd[1904]: 14 Mar 00:14:56 ntpd[1904]: Listen normally on 11 cali75893884bdc [fe80::ecee:eeff:feee:eeee%7]:123 Mar 14 00:14:56.632510 ntpd[1904]: 14 Mar 00:14:56 ntpd[1904]: Listen normally on 12 cali64e12ef821e [fe80::ecee:eeff:feee:eeee%8]:123 Mar 14 00:14:56.632510 ntpd[1904]: 14 Mar 00:14:56 ntpd[1904]: Listen normally on 13 cali2f12046c56c [fe80::ecee:eeff:feee:eeee%9]:123 Mar 14 00:14:56.632099 ntpd[1904]: Listen normally on 9 cali2ea8ce6114e [fe80::ecee:eeff:feee:eeee%5]:123 Mar 14 00:14:56.633276 ntpd[1904]: 14 Mar 00:14:56 ntpd[1904]: Listen normally on 14 cali28a1aa8fb0b [fe80::ecee:eeff:feee:eeee%10]:123 Mar 14 00:14:56.633276 ntpd[1904]: 14 Mar 00:14:56 ntpd[1904]: Listen normally on 15 calie87a09655f2 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 14 00:14:56.633276 ntpd[1904]: 14 Mar 00:14:56 ntpd[1904]: Listen normally on 16 vxlan.calico [fe80::64b5:5ff:fe84:80e8%12]:123 Mar 14 00:14:56.632193 ntpd[1904]: Listen normally on 10 cali3a1e0477228 [fe80::ecee:eeff:feee:eeee%6]:123 Mar 14 00:14:56.632263 ntpd[1904]: Listen normally on 11 cali75893884bdc [fe80::ecee:eeff:feee:eeee%7]:123 Mar 14 00:14:56.632340 ntpd[1904]: Listen normally on 12 cali64e12ef821e [fe80::ecee:eeff:feee:eeee%8]:123 Mar 14 00:14:56.632408 ntpd[1904]: Listen normally 
on 13 cali2f12046c56c [fe80::ecee:eeff:feee:eeee%9]:123 Mar 14 00:14:56.632525 ntpd[1904]: Listen normally on 14 cali28a1aa8fb0b [fe80::ecee:eeff:feee:eeee%10]:123 Mar 14 00:14:56.632595 ntpd[1904]: Listen normally on 15 calie87a09655f2 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 14 00:14:56.632664 ntpd[1904]: Listen normally on 16 vxlan.calico [fe80::64b5:5ff:fe84:80e8%12]:123 Mar 14 00:14:56.675129 containerd[1934]: time="2026-03-14T00:14:56.675051722Z" level=info msg="StartContainer for \"42a83f312fd1dde3ecb92f3e5fc6cf2a274268c1dc01d784772343d0994a1ca9\" returns successfully" Mar 14 00:14:56.781681 containerd[1934]: time="2026-03-14T00:14:56.781154258Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:56.783355 containerd[1934]: time="2026-03-14T00:14:56.783123950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 14 00:14:56.791240 containerd[1934]: time="2026-03-14T00:14:56.791167166Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"46949856\" in 316.300693ms" Mar 14 00:14:56.791240 containerd[1934]: time="2026-03-14T00:14:56.791239850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:dca640051f09574f3e8821035bbfae8c638fb7dadca4c9a082e7223a234befc8\"" Mar 14 00:14:56.795974 containerd[1934]: time="2026-03-14T00:14:56.794948774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 14 00:14:56.802361 containerd[1934]: time="2026-03-14T00:14:56.802303790Z" level=info msg="CreateContainer within sandbox 
\"6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 14 00:14:56.826246 containerd[1934]: time="2026-03-14T00:14:56.825834962Z" level=info msg="CreateContainer within sandbox \"6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"85aaf96055ed489783df600d05407e7a7754ded46b5a00b38437f2aceb83ed95\"" Mar 14 00:14:56.846206 containerd[1934]: time="2026-03-14T00:14:56.845004122Z" level=info msg="StartContainer for \"85aaf96055ed489783df600d05407e7a7754ded46b5a00b38437f2aceb83ed95\"" Mar 14 00:14:56.914745 systemd[1]: Started cri-containerd-85aaf96055ed489783df600d05407e7a7754ded46b5a00b38437f2aceb83ed95.scope - libcontainer container 85aaf96055ed489783df600d05407e7a7754ded46b5a00b38437f2aceb83ed95. Mar 14 00:14:57.012754 containerd[1934]: time="2026-03-14T00:14:57.012684035Z" level=info msg="StartContainer for \"85aaf96055ed489783df600d05407e7a7754ded46b5a00b38437f2aceb83ed95\" returns successfully" Mar 14 00:14:57.460465 kubelet[3153]: I0314 00:14:57.459476 3153 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-696c5cfc7f-ng7xc" podStartSLOduration=27.521548688 podStartE2EDuration="34.459454394s" podCreationTimestamp="2026-03-14 00:14:23 +0000 UTC" firstStartedPulling="2026-03-14 00:14:49.854930156 +0000 UTC m=+52.449110494" lastFinishedPulling="2026-03-14 00:14:56.792835862 +0000 UTC m=+59.387016200" observedRunningTime="2026-03-14 00:14:57.425984533 +0000 UTC m=+60.020164871" watchObservedRunningTime="2026-03-14 00:14:57.459454394 +0000 UTC m=+60.053634876" Mar 14 00:14:57.652810 containerd[1934]: time="2026-03-14T00:14:57.652737399Z" level=info msg="StopPodSandbox for \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\"" Mar 14 00:14:57.974751 containerd[1934]: 2026-03-14 00:14:57.835 [WARNING][5702] cni-plugin/k8s.go 
616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0", GenerateName:"calico-apiserver-696c5cfc7f-", Namespace:"calico-system", SelfLink:"", UID:"89705421-5074-4ee1-8f8a-b24f0bbde701", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"696c5cfc7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda", Pod:"calico-apiserver-696c5cfc7f-hzr5w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif07464b8a09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:57.974751 containerd[1934]: 2026-03-14 00:14:57.835 [INFO][5702] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Mar 14 00:14:57.974751 containerd[1934]: 2026-03-14 00:14:57.836 
[INFO][5702] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" iface="eth0" netns="" Mar 14 00:14:57.974751 containerd[1934]: 2026-03-14 00:14:57.836 [INFO][5702] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Mar 14 00:14:57.974751 containerd[1934]: 2026-03-14 00:14:57.836 [INFO][5702] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Mar 14 00:14:57.974751 containerd[1934]: 2026-03-14 00:14:57.925 [INFO][5711] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" HandleID="k8s-pod-network.31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" Mar 14 00:14:57.974751 containerd[1934]: 2026-03-14 00:14:57.925 [INFO][5711] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:57.974751 containerd[1934]: 2026-03-14 00:14:57.925 [INFO][5711] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:57.974751 containerd[1934]: 2026-03-14 00:14:57.949 [WARNING][5711] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" HandleID="k8s-pod-network.31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" Mar 14 00:14:57.974751 containerd[1934]: 2026-03-14 00:14:57.950 [INFO][5711] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" HandleID="k8s-pod-network.31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" Mar 14 00:14:57.974751 containerd[1934]: 2026-03-14 00:14:57.955 [INFO][5711] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:57.974751 containerd[1934]: 2026-03-14 00:14:57.966 [INFO][5702] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Mar 14 00:14:57.978635 containerd[1934]: time="2026-03-14T00:14:57.974802148Z" level=info msg="TearDown network for sandbox \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\" successfully" Mar 14 00:14:57.978635 containerd[1934]: time="2026-03-14T00:14:57.974843596Z" level=info msg="StopPodSandbox for \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\" returns successfully" Mar 14 00:14:57.978635 containerd[1934]: time="2026-03-14T00:14:57.977090332Z" level=info msg="RemovePodSandbox for \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\"" Mar 14 00:14:57.978635 containerd[1934]: time="2026-03-14T00:14:57.977149168Z" level=info msg="Forcibly stopping sandbox \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\"" Mar 14 00:14:58.308575 containerd[1934]: 2026-03-14 00:14:58.143 [WARNING][5725] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0", GenerateName:"calico-apiserver-696c5cfc7f-", Namespace:"calico-system", SelfLink:"", UID:"89705421-5074-4ee1-8f8a-b24f0bbde701", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"696c5cfc7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"dd2325820fc71221b458ff5e8c09c5bfc835609b5335c9130cdae9310d6d4dda", Pod:"calico-apiserver-696c5cfc7f-hzr5w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif07464b8a09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:58.308575 containerd[1934]: 2026-03-14 00:14:58.145 [INFO][5725] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Mar 14 00:14:58.308575 containerd[1934]: 2026-03-14 00:14:58.146 [INFO][5725] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" iface="eth0" netns="" Mar 14 00:14:58.308575 containerd[1934]: 2026-03-14 00:14:58.146 [INFO][5725] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Mar 14 00:14:58.308575 containerd[1934]: 2026-03-14 00:14:58.146 [INFO][5725] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Mar 14 00:14:58.308575 containerd[1934]: 2026-03-14 00:14:58.262 [INFO][5738] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" HandleID="k8s-pod-network.31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" Mar 14 00:14:58.308575 containerd[1934]: 2026-03-14 00:14:58.262 [INFO][5738] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:58.308575 containerd[1934]: 2026-03-14 00:14:58.263 [INFO][5738] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:58.308575 containerd[1934]: 2026-03-14 00:14:58.278 [WARNING][5738] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" HandleID="k8s-pod-network.31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" Mar 14 00:14:58.308575 containerd[1934]: 2026-03-14 00:14:58.278 [INFO][5738] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" HandleID="k8s-pod-network.31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--hzr5w-eth0" Mar 14 00:14:58.308575 containerd[1934]: 2026-03-14 00:14:58.282 [INFO][5738] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:58.308575 containerd[1934]: 2026-03-14 00:14:58.289 [INFO][5725] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b" Mar 14 00:14:58.308575 containerd[1934]: time="2026-03-14T00:14:58.305187998Z" level=info msg="TearDown network for sandbox \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\" successfully" Mar 14 00:14:58.329228 containerd[1934]: time="2026-03-14T00:14:58.328265870Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:14:58.329840 containerd[1934]: time="2026-03-14T00:14:58.329671958Z" level=info msg="RemovePodSandbox \"31a60a94f8cba2717221014e9be5b21f6757d563559182c84d01c615f7f3af4b\" returns successfully" Mar 14 00:14:58.334734 containerd[1934]: time="2026-03-14T00:14:58.334662842Z" level=info msg="StopPodSandbox for \"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\"" Mar 14 00:14:58.667286 containerd[1934]: time="2026-03-14T00:14:58.667205128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:58.672328 containerd[1934]: time="2026-03-14T00:14:58.671868124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8261497" Mar 14 00:14:58.680004 containerd[1934]: time="2026-03-14T00:14:58.679782112Z" level=info msg="ImageCreate event name:\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:58.689570 containerd[1934]: time="2026-03-14T00:14:58.688760896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:14:58.693797 containerd[1934]: time="2026-03-14T00:14:58.693729712Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"9659022\" in 1.898720782s" Mar 14 00:14:58.694582 containerd[1934]: time="2026-03-14T00:14:58.694538188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference 
\"sha256:9cb4086a1b408b52c6b14e0b81520060e1766ee0243508d29d8a53c7b518051f\"" Mar 14 00:14:58.698492 containerd[1934]: time="2026-03-14T00:14:58.697960348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 14 00:14:58.724868 containerd[1934]: time="2026-03-14T00:14:58.724310656Z" level=info msg="CreateContainer within sandbox \"53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 14 00:14:58.800337 containerd[1934]: time="2026-03-14T00:14:58.800244088Z" level=info msg="CreateContainer within sandbox \"53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c335af2db5f1de7965ed1e710ec4b9b9932e177328450a2b44969c7014c5023f\"" Mar 14 00:14:58.801769 containerd[1934]: time="2026-03-14T00:14:58.801697396Z" level=info msg="StartContainer for \"c335af2db5f1de7965ed1e710ec4b9b9932e177328450a2b44969c7014c5023f\"" Mar 14 00:14:58.819410 containerd[1934]: 2026-03-14 00:14:58.602 [WARNING][5758] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--5fd54db7bb--9gfrr-eth0" Mar 14 00:14:58.819410 containerd[1934]: 2026-03-14 00:14:58.602 [INFO][5758] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Mar 14 00:14:58.819410 containerd[1934]: 2026-03-14 00:14:58.602 [INFO][5758] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" iface="eth0" netns="" Mar 14 00:14:58.819410 containerd[1934]: 2026-03-14 00:14:58.602 [INFO][5758] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Mar 14 00:14:58.819410 containerd[1934]: 2026-03-14 00:14:58.602 [INFO][5758] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Mar 14 00:14:58.819410 containerd[1934]: 2026-03-14 00:14:58.754 [INFO][5773] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" HandleID="k8s-pod-network.7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Workload="ip--172--31--18--130-k8s-whisker--5fd54db7bb--9gfrr-eth0" Mar 14 00:14:58.819410 containerd[1934]: 2026-03-14 00:14:58.755 [INFO][5773] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:58.819410 containerd[1934]: 2026-03-14 00:14:58.755 [INFO][5773] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:58.819410 containerd[1934]: 2026-03-14 00:14:58.789 [WARNING][5773] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" HandleID="k8s-pod-network.7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Workload="ip--172--31--18--130-k8s-whisker--5fd54db7bb--9gfrr-eth0" Mar 14 00:14:58.819410 containerd[1934]: 2026-03-14 00:14:58.789 [INFO][5773] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" HandleID="k8s-pod-network.7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Workload="ip--172--31--18--130-k8s-whisker--5fd54db7bb--9gfrr-eth0" Mar 14 00:14:58.819410 containerd[1934]: 2026-03-14 00:14:58.796 [INFO][5773] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:58.819410 containerd[1934]: 2026-03-14 00:14:58.813 [INFO][5758] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Mar 14 00:14:58.820698 containerd[1934]: time="2026-03-14T00:14:58.820297492Z" level=info msg="TearDown network for sandbox \"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\" successfully" Mar 14 00:14:58.820698 containerd[1934]: time="2026-03-14T00:14:58.820349860Z" level=info msg="StopPodSandbox for \"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\" returns successfully" Mar 14 00:14:58.822665 containerd[1934]: time="2026-03-14T00:14:58.822341764Z" level=info msg="RemovePodSandbox for \"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\"" Mar 14 00:14:58.822665 containerd[1934]: time="2026-03-14T00:14:58.822510280Z" level=info msg="Forcibly stopping sandbox \"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\"" Mar 14 00:14:58.933252 systemd[1]: Started cri-containerd-c335af2db5f1de7965ed1e710ec4b9b9932e177328450a2b44969c7014c5023f.scope - libcontainer container c335af2db5f1de7965ed1e710ec4b9b9932e177328450a2b44969c7014c5023f. 
Mar 14 00:14:59.154947 containerd[1934]: 2026-03-14 00:14:59.016 [WARNING][5797] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--5fd54db7bb--9gfrr-eth0" Mar 14 00:14:59.154947 containerd[1934]: 2026-03-14 00:14:59.016 [INFO][5797] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Mar 14 00:14:59.154947 containerd[1934]: 2026-03-14 00:14:59.016 [INFO][5797] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" iface="eth0" netns="" Mar 14 00:14:59.154947 containerd[1934]: 2026-03-14 00:14:59.016 [INFO][5797] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Mar 14 00:14:59.154947 containerd[1934]: 2026-03-14 00:14:59.016 [INFO][5797] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Mar 14 00:14:59.154947 containerd[1934]: 2026-03-14 00:14:59.099 [INFO][5823] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" HandleID="k8s-pod-network.7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Workload="ip--172--31--18--130-k8s-whisker--5fd54db7bb--9gfrr-eth0" Mar 14 00:14:59.154947 containerd[1934]: 2026-03-14 00:14:59.100 [INFO][5823] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:59.154947 containerd[1934]: 2026-03-14 00:14:59.100 [INFO][5823] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 14 00:14:59.154947 containerd[1934]: 2026-03-14 00:14:59.133 [WARNING][5823] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" HandleID="k8s-pod-network.7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Workload="ip--172--31--18--130-k8s-whisker--5fd54db7bb--9gfrr-eth0" Mar 14 00:14:59.154947 containerd[1934]: 2026-03-14 00:14:59.133 [INFO][5823] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" HandleID="k8s-pod-network.7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Workload="ip--172--31--18--130-k8s-whisker--5fd54db7bb--9gfrr-eth0" Mar 14 00:14:59.154947 containerd[1934]: 2026-03-14 00:14:59.137 [INFO][5823] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:59.154947 containerd[1934]: 2026-03-14 00:14:59.148 [INFO][5797] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc" Mar 14 00:14:59.154947 containerd[1934]: time="2026-03-14T00:14:59.154203902Z" level=info msg="TearDown network for sandbox \"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\" successfully" Mar 14 00:14:59.170499 containerd[1934]: time="2026-03-14T00:14:59.169355630Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:14:59.170499 containerd[1934]: time="2026-03-14T00:14:59.169521602Z" level=info msg="RemovePodSandbox \"7df3a890ffb4033a8aabc48f7cb23ab457d45f01e99b73f97952d29930f1d8dc\" returns successfully" Mar 14 00:14:59.170499 containerd[1934]: time="2026-03-14T00:14:59.170193314Z" level=info msg="StopPodSandbox for \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\"" Mar 14 00:14:59.265631 containerd[1934]: time="2026-03-14T00:14:59.265275339Z" level=info msg="StartContainer for \"c335af2db5f1de7965ed1e710ec4b9b9932e177328450a2b44969c7014c5023f\" returns successfully" Mar 14 00:14:59.440534 kubelet[3153]: I0314 00:14:59.438049 3153 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 14 00:14:59.455377 containerd[1934]: 2026-03-14 00:14:59.328 [WARNING][5838] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3bae5412-2e2d-4fc4-8221-ace1b28b2f13", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710", Pod:"csi-node-driver-45kq2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28a1aa8fb0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:59.455377 containerd[1934]: 2026-03-14 00:14:59.331 [INFO][5838] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Mar 14 00:14:59.455377 containerd[1934]: 2026-03-14 00:14:59.331 [INFO][5838] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" iface="eth0" netns="" Mar 14 00:14:59.455377 containerd[1934]: 2026-03-14 00:14:59.331 [INFO][5838] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Mar 14 00:14:59.455377 containerd[1934]: 2026-03-14 00:14:59.331 [INFO][5838] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Mar 14 00:14:59.455377 containerd[1934]: 2026-03-14 00:14:59.409 [INFO][5855] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" HandleID="k8s-pod-network.b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Workload="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" Mar 14 00:14:59.455377 containerd[1934]: 2026-03-14 00:14:59.409 [INFO][5855] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 14 00:14:59.455377 containerd[1934]: 2026-03-14 00:14:59.410 [INFO][5855] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:59.455377 containerd[1934]: 2026-03-14 00:14:59.437 [WARNING][5855] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" HandleID="k8s-pod-network.b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Workload="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" Mar 14 00:14:59.455377 containerd[1934]: 2026-03-14 00:14:59.437 [INFO][5855] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" HandleID="k8s-pod-network.b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Workload="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" Mar 14 00:14:59.455377 containerd[1934]: 2026-03-14 00:14:59.444 [INFO][5855] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:59.455377 containerd[1934]: 2026-03-14 00:14:59.449 [INFO][5838] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Mar 14 00:14:59.456461 containerd[1934]: time="2026-03-14T00:14:59.455480775Z" level=info msg="TearDown network for sandbox \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\" successfully" Mar 14 00:14:59.456461 containerd[1934]: time="2026-03-14T00:14:59.455520579Z" level=info msg="StopPodSandbox for \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\" returns successfully" Mar 14 00:14:59.457585 containerd[1934]: time="2026-03-14T00:14:59.457287927Z" level=info msg="RemovePodSandbox for \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\"" Mar 14 00:14:59.458630 containerd[1934]: time="2026-03-14T00:14:59.458497095Z" level=info msg="Forcibly stopping sandbox \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\"" Mar 14 00:14:59.689481 containerd[1934]: 2026-03-14 00:14:59.582 [WARNING][5869] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3bae5412-2e2d-4fc4-8221-ace1b28b2f13", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710", Pod:"csi-node-driver-45kq2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28a1aa8fb0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:59.689481 containerd[1934]: 2026-03-14 00:14:59.584 [INFO][5869] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Mar 14 00:14:59.689481 containerd[1934]: 2026-03-14 00:14:59.584 [INFO][5869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" iface="eth0" netns="" Mar 14 00:14:59.689481 containerd[1934]: 2026-03-14 00:14:59.584 [INFO][5869] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Mar 14 00:14:59.689481 containerd[1934]: 2026-03-14 00:14:59.584 [INFO][5869] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Mar 14 00:14:59.689481 containerd[1934]: 2026-03-14 00:14:59.648 [INFO][5876] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" HandleID="k8s-pod-network.b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Workload="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" Mar 14 00:14:59.689481 containerd[1934]: 2026-03-14 00:14:59.648 [INFO][5876] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:59.689481 containerd[1934]: 2026-03-14 00:14:59.649 [INFO][5876] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:59.689481 containerd[1934]: 2026-03-14 00:14:59.671 [WARNING][5876] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" HandleID="k8s-pod-network.b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Workload="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" Mar 14 00:14:59.689481 containerd[1934]: 2026-03-14 00:14:59.671 [INFO][5876] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" HandleID="k8s-pod-network.b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Workload="ip--172--31--18--130-k8s-csi--node--driver--45kq2-eth0" Mar 14 00:14:59.689481 containerd[1934]: 2026-03-14 00:14:59.676 [INFO][5876] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:59.689481 containerd[1934]: 2026-03-14 00:14:59.681 [INFO][5869] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666" Mar 14 00:14:59.689481 containerd[1934]: time="2026-03-14T00:14:59.686729321Z" level=info msg="TearDown network for sandbox \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\" successfully" Mar 14 00:14:59.701100 containerd[1934]: time="2026-03-14T00:14:59.700727549Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:14:59.701100 containerd[1934]: time="2026-03-14T00:14:59.700857713Z" level=info msg="RemovePodSandbox \"b4ec02820fcc8e7e529a9543bb2a566d8d0bf67db8ec7a8ae6a6fc34fce82666\" returns successfully" Mar 14 00:14:59.702505 containerd[1934]: time="2026-03-14T00:14:59.702295397Z" level=info msg="StopPodSandbox for \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\"" Mar 14 00:14:59.906154 containerd[1934]: 2026-03-14 00:14:59.821 [WARNING][5892] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"c8686320-fc9c-4a3f-a2c7-dfa460638fd7", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab", Pod:"goldmane-9f7667bb8-nbdj4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali2ea8ce6114e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:14:59.906154 containerd[1934]: 2026-03-14 00:14:59.822 [INFO][5892] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Mar 14 00:14:59.906154 containerd[1934]: 2026-03-14 00:14:59.822 [INFO][5892] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" iface="eth0" netns="" Mar 14 00:14:59.906154 containerd[1934]: 2026-03-14 00:14:59.822 [INFO][5892] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Mar 14 00:14:59.906154 containerd[1934]: 2026-03-14 00:14:59.822 [INFO][5892] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Mar 14 00:14:59.906154 containerd[1934]: 2026-03-14 00:14:59.871 [INFO][5899] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" HandleID="k8s-pod-network.231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Workload="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" Mar 14 00:14:59.906154 containerd[1934]: 2026-03-14 00:14:59.872 [INFO][5899] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:14:59.906154 containerd[1934]: 2026-03-14 00:14:59.872 [INFO][5899] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:14:59.906154 containerd[1934]: 2026-03-14 00:14:59.891 [WARNING][5899] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" HandleID="k8s-pod-network.231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Workload="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" Mar 14 00:14:59.906154 containerd[1934]: 2026-03-14 00:14:59.891 [INFO][5899] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" HandleID="k8s-pod-network.231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Workload="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" Mar 14 00:14:59.906154 containerd[1934]: 2026-03-14 00:14:59.895 [INFO][5899] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:14:59.906154 containerd[1934]: 2026-03-14 00:14:59.900 [INFO][5892] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Mar 14 00:14:59.908985 containerd[1934]: time="2026-03-14T00:14:59.907312914Z" level=info msg="TearDown network for sandbox \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\" successfully" Mar 14 00:14:59.908985 containerd[1934]: time="2026-03-14T00:14:59.907362822Z" level=info msg="StopPodSandbox for \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\" returns successfully" Mar 14 00:14:59.910536 containerd[1934]: time="2026-03-14T00:14:59.909882870Z" level=info msg="RemovePodSandbox for \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\"" Mar 14 00:14:59.910536 containerd[1934]: time="2026-03-14T00:14:59.909947946Z" level=info msg="Forcibly stopping sandbox \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\"" Mar 14 00:15:00.183008 containerd[1934]: 2026-03-14 00:15:00.033 [WARNING][5913] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"c8686320-fc9c-4a3f-a2c7-dfa460638fd7", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"4909fa5b36c5a6f6d60071ca055c9fbabda165147ab988d745b5d2e665e242ab", Pod:"goldmane-9f7667bb8-nbdj4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2ea8ce6114e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:00.183008 containerd[1934]: 2026-03-14 00:15:00.034 [INFO][5913] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Mar 14 00:15:00.183008 containerd[1934]: 2026-03-14 00:15:00.036 [INFO][5913] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" iface="eth0" netns="" Mar 14 00:15:00.183008 containerd[1934]: 2026-03-14 00:15:00.037 [INFO][5913] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Mar 14 00:15:00.183008 containerd[1934]: 2026-03-14 00:15:00.037 [INFO][5913] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Mar 14 00:15:00.183008 containerd[1934]: 2026-03-14 00:15:00.142 [INFO][5920] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" HandleID="k8s-pod-network.231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Workload="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" Mar 14 00:15:00.183008 containerd[1934]: 2026-03-14 00:15:00.142 [INFO][5920] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:00.183008 containerd[1934]: 2026-03-14 00:15:00.142 [INFO][5920] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:00.183008 containerd[1934]: 2026-03-14 00:15:00.166 [WARNING][5920] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" HandleID="k8s-pod-network.231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Workload="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" Mar 14 00:15:00.183008 containerd[1934]: 2026-03-14 00:15:00.166 [INFO][5920] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" HandleID="k8s-pod-network.231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Workload="ip--172--31--18--130-k8s-goldmane--9f7667bb8--nbdj4-eth0" Mar 14 00:15:00.183008 containerd[1934]: 2026-03-14 00:15:00.173 [INFO][5920] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:00.183008 containerd[1934]: 2026-03-14 00:15:00.179 [INFO][5913] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0" Mar 14 00:15:00.184807 containerd[1934]: time="2026-03-14T00:15:00.183104283Z" level=info msg="TearDown network for sandbox \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\" successfully" Mar 14 00:15:00.191400 containerd[1934]: time="2026-03-14T00:15:00.191319723Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:00.191790 containerd[1934]: time="2026-03-14T00:15:00.191463903Z" level=info msg="RemovePodSandbox \"231fe8498450eb100cdc08107cf5a134b7ecc0e848dac606ef881ee64a9423e0\" returns successfully" Mar 14 00:15:00.194205 containerd[1934]: time="2026-03-14T00:15:00.192812595Z" level=info msg="StopPodSandbox for \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\"" Mar 14 00:15:00.422333 containerd[1934]: 2026-03-14 00:15:00.304 [WARNING][5935] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0", GenerateName:"calico-kube-controllers-646cdb9884-", Namespace:"calico-system", SelfLink:"", UID:"b2f14ff7-37b0-4a9b-9308-67b07dd8dd39", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"646cdb9884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2", Pod:"calico-kube-controllers-646cdb9884-pj2xg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f12046c56c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:00.422333 containerd[1934]: 2026-03-14 00:15:00.306 [INFO][5935] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Mar 14 00:15:00.422333 containerd[1934]: 2026-03-14 00:15:00.306 [INFO][5935] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" iface="eth0" netns="" Mar 14 00:15:00.422333 containerd[1934]: 2026-03-14 00:15:00.306 [INFO][5935] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Mar 14 00:15:00.422333 containerd[1934]: 2026-03-14 00:15:00.306 [INFO][5935] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Mar 14 00:15:00.422333 containerd[1934]: 2026-03-14 00:15:00.382 [INFO][5942] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" HandleID="k8s-pod-network.44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Workload="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" Mar 14 00:15:00.422333 containerd[1934]: 2026-03-14 00:15:00.383 [INFO][5942] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:00.422333 containerd[1934]: 2026-03-14 00:15:00.383 [INFO][5942] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:00.422333 containerd[1934]: 2026-03-14 00:15:00.405 [WARNING][5942] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" HandleID="k8s-pod-network.44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Workload="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" Mar 14 00:15:00.422333 containerd[1934]: 2026-03-14 00:15:00.405 [INFO][5942] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" HandleID="k8s-pod-network.44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Workload="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" Mar 14 00:15:00.422333 containerd[1934]: 2026-03-14 00:15:00.409 [INFO][5942] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:00.422333 containerd[1934]: 2026-03-14 00:15:00.416 [INFO][5935] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Mar 14 00:15:00.424712 containerd[1934]: time="2026-03-14T00:15:00.422385820Z" level=info msg="TearDown network for sandbox \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\" successfully" Mar 14 00:15:00.424712 containerd[1934]: time="2026-03-14T00:15:00.422426128Z" level=info msg="StopPodSandbox for \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\" returns successfully" Mar 14 00:15:00.424712 containerd[1934]: time="2026-03-14T00:15:00.423161212Z" level=info msg="RemovePodSandbox for \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\"" Mar 14 00:15:00.424712 containerd[1934]: time="2026-03-14T00:15:00.423241864Z" level=info msg="Forcibly stopping sandbox \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\"" Mar 14 00:15:00.799912 containerd[1934]: 2026-03-14 00:15:00.604 [WARNING][5956] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0", GenerateName:"calico-kube-controllers-646cdb9884-", Namespace:"calico-system", SelfLink:"", UID:"b2f14ff7-37b0-4a9b-9308-67b07dd8dd39", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"646cdb9884", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2", Pod:"calico-kube-controllers-646cdb9884-pj2xg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f12046c56c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:00.799912 containerd[1934]: 2026-03-14 00:15:00.608 [INFO][5956] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Mar 14 00:15:00.799912 containerd[1934]: 2026-03-14 00:15:00.609 [INFO][5956] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" iface="eth0" netns="" Mar 14 00:15:00.799912 containerd[1934]: 2026-03-14 00:15:00.611 [INFO][5956] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Mar 14 00:15:00.799912 containerd[1934]: 2026-03-14 00:15:00.611 [INFO][5956] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Mar 14 00:15:00.799912 containerd[1934]: 2026-03-14 00:15:00.764 [INFO][5964] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" HandleID="k8s-pod-network.44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Workload="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" Mar 14 00:15:00.799912 containerd[1934]: 2026-03-14 00:15:00.764 [INFO][5964] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:00.799912 containerd[1934]: 2026-03-14 00:15:00.764 [INFO][5964] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:00.799912 containerd[1934]: 2026-03-14 00:15:00.783 [WARNING][5964] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" HandleID="k8s-pod-network.44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Workload="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" Mar 14 00:15:00.799912 containerd[1934]: 2026-03-14 00:15:00.784 [INFO][5964] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" HandleID="k8s-pod-network.44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Workload="ip--172--31--18--130-k8s-calico--kube--controllers--646cdb9884--pj2xg-eth0" Mar 14 00:15:00.799912 containerd[1934]: 2026-03-14 00:15:00.787 [INFO][5964] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:00.799912 containerd[1934]: 2026-03-14 00:15:00.794 [INFO][5956] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a" Mar 14 00:15:00.802322 containerd[1934]: time="2026-03-14T00:15:00.800263038Z" level=info msg="TearDown network for sandbox \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\" successfully" Mar 14 00:15:00.811929 containerd[1934]: time="2026-03-14T00:15:00.811791738Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:00.812062 containerd[1934]: time="2026-03-14T00:15:00.811994130Z" level=info msg="RemovePodSandbox \"44af147c07fa0c1699bc262e5d488a36e49dc1ad47e247fcc9003cb0d6cf849a\" returns successfully" Mar 14 00:15:00.813401 containerd[1934]: time="2026-03-14T00:15:00.812837598Z" level=info msg="StopPodSandbox for \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\"" Mar 14 00:15:01.074203 containerd[1934]: 2026-03-14 00:15:00.949 [WARNING][5978] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0", GenerateName:"calico-apiserver-696c5cfc7f-", Namespace:"calico-system", SelfLink:"", UID:"9dac252b-d2f0-4f51-8a32-3e7e6f3e4118", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"696c5cfc7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375", Pod:"calico-apiserver-696c5cfc7f-ng7xc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali75893884bdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:01.074203 containerd[1934]: 2026-03-14 00:15:00.949 [INFO][5978] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Mar 14 00:15:01.074203 containerd[1934]: 2026-03-14 00:15:00.949 [INFO][5978] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" iface="eth0" netns="" Mar 14 00:15:01.074203 containerd[1934]: 2026-03-14 00:15:00.949 [INFO][5978] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Mar 14 00:15:01.074203 containerd[1934]: 2026-03-14 00:15:00.949 [INFO][5978] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Mar 14 00:15:01.074203 containerd[1934]: 2026-03-14 00:15:01.028 [INFO][5985] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" HandleID="k8s-pod-network.f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" Mar 14 00:15:01.074203 containerd[1934]: 2026-03-14 00:15:01.028 [INFO][5985] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:01.074203 containerd[1934]: 2026-03-14 00:15:01.028 [INFO][5985] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:01.074203 containerd[1934]: 2026-03-14 00:15:01.061 [WARNING][5985] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" HandleID="k8s-pod-network.f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" Mar 14 00:15:01.074203 containerd[1934]: 2026-03-14 00:15:01.061 [INFO][5985] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" HandleID="k8s-pod-network.f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" Mar 14 00:15:01.074203 containerd[1934]: 2026-03-14 00:15:01.064 [INFO][5985] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:01.074203 containerd[1934]: 2026-03-14 00:15:01.069 [INFO][5978] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Mar 14 00:15:01.075717 containerd[1934]: time="2026-03-14T00:15:01.074069680Z" level=info msg="TearDown network for sandbox \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\" successfully" Mar 14 00:15:01.075717 containerd[1934]: time="2026-03-14T00:15:01.075545572Z" level=info msg="StopPodSandbox for \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\" returns successfully" Mar 14 00:15:01.076664 containerd[1934]: time="2026-03-14T00:15:01.076333960Z" level=info msg="RemovePodSandbox for \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\"" Mar 14 00:15:01.076664 containerd[1934]: time="2026-03-14T00:15:01.076379884Z" level=info msg="Forcibly stopping sandbox \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\"" Mar 14 00:15:01.269462 containerd[1934]: 2026-03-14 00:15:01.172 [WARNING][5999] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0", GenerateName:"calico-apiserver-696c5cfc7f-", Namespace:"calico-system", SelfLink:"", UID:"9dac252b-d2f0-4f51-8a32-3e7e6f3e4118", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"696c5cfc7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"6fbb90a16bf8c41b9b3aea5878d88dacf092e2566d11bff6df89b168d5db4375", Pod:"calico-apiserver-696c5cfc7f-ng7xc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali75893884bdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:01.269462 containerd[1934]: 2026-03-14 00:15:01.173 [INFO][5999] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Mar 14 00:15:01.269462 containerd[1934]: 2026-03-14 00:15:01.173 [INFO][5999] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" iface="eth0" netns="" Mar 14 00:15:01.269462 containerd[1934]: 2026-03-14 00:15:01.173 [INFO][5999] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Mar 14 00:15:01.269462 containerd[1934]: 2026-03-14 00:15:01.173 [INFO][5999] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Mar 14 00:15:01.269462 containerd[1934]: 2026-03-14 00:15:01.231 [INFO][6006] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" HandleID="k8s-pod-network.f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" Mar 14 00:15:01.269462 containerd[1934]: 2026-03-14 00:15:01.235 [INFO][6006] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:01.269462 containerd[1934]: 2026-03-14 00:15:01.236 [INFO][6006] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:01.269462 containerd[1934]: 2026-03-14 00:15:01.252 [WARNING][6006] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" HandleID="k8s-pod-network.f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" Mar 14 00:15:01.269462 containerd[1934]: 2026-03-14 00:15:01.252 [INFO][6006] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" HandleID="k8s-pod-network.f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Workload="ip--172--31--18--130-k8s-calico--apiserver--696c5cfc7f--ng7xc-eth0" Mar 14 00:15:01.269462 containerd[1934]: 2026-03-14 00:15:01.256 [INFO][6006] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:01.269462 containerd[1934]: 2026-03-14 00:15:01.263 [INFO][5999] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f" Mar 14 00:15:01.269462 containerd[1934]: time="2026-03-14T00:15:01.268920064Z" level=info msg="TearDown network for sandbox \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\" successfully" Mar 14 00:15:01.282459 containerd[1934]: time="2026-03-14T00:15:01.282188441Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:01.282459 containerd[1934]: time="2026-03-14T00:15:01.282298025Z" level=info msg="RemovePodSandbox \"f424bf3f6156b63e69a1864374c1517e998f1b062ab07b7f1bcb0fd45a0f489f\" returns successfully" Mar 14 00:15:01.284842 containerd[1934]: time="2026-03-14T00:15:01.283267349Z" level=info msg="StopPodSandbox for \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\"" Mar 14 00:15:01.558786 containerd[1934]: 2026-03-14 00:15:01.415 [WARNING][6020] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"d4e6f04d-05da-499d-9d04-d1cc54b42952", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d", Pod:"coredns-7d764666f9-v8blg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a1e0477228", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:01.558786 containerd[1934]: 2026-03-14 00:15:01.416 [INFO][6020] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Mar 14 00:15:01.558786 containerd[1934]: 2026-03-14 00:15:01.416 [INFO][6020] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" iface="eth0" netns="" Mar 14 00:15:01.558786 containerd[1934]: 2026-03-14 00:15:01.416 [INFO][6020] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Mar 14 00:15:01.558786 containerd[1934]: 2026-03-14 00:15:01.416 [INFO][6020] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Mar 14 00:15:01.558786 containerd[1934]: 2026-03-14 00:15:01.514 [INFO][6027] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" HandleID="k8s-pod-network.68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" Mar 14 00:15:01.558786 containerd[1934]: 2026-03-14 00:15:01.514 [INFO][6027] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:01.558786 containerd[1934]: 2026-03-14 00:15:01.514 [INFO][6027] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:01.558786 containerd[1934]: 2026-03-14 00:15:01.543 [WARNING][6027] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" HandleID="k8s-pod-network.68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" Mar 14 00:15:01.558786 containerd[1934]: 2026-03-14 00:15:01.543 [INFO][6027] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" HandleID="k8s-pod-network.68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" Mar 14 00:15:01.558786 containerd[1934]: 2026-03-14 00:15:01.547 [INFO][6027] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:01.558786 containerd[1934]: 2026-03-14 00:15:01.552 [INFO][6020] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Mar 14 00:15:01.560780 containerd[1934]: time="2026-03-14T00:15:01.559259862Z" level=info msg="TearDown network for sandbox \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\" successfully" Mar 14 00:15:01.560780 containerd[1934]: time="2026-03-14T00:15:01.559302882Z" level=info msg="StopPodSandbox for \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\" returns successfully" Mar 14 00:15:01.562737 containerd[1934]: time="2026-03-14T00:15:01.561900726Z" level=info msg="RemovePodSandbox for \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\"" Mar 14 00:15:01.562737 containerd[1934]: time="2026-03-14T00:15:01.561995058Z" level=info msg="Forcibly stopping sandbox \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\"" Mar 14 00:15:01.624031 kubelet[3153]: I0314 00:15:01.622083 3153 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-696c5cfc7f-hzr5w" podStartSLOduration=31.596135759 
podStartE2EDuration="38.622064862s" podCreationTimestamp="2026-03-14 00:14:23 +0000 UTC" firstStartedPulling="2026-03-14 00:14:49.448546566 +0000 UTC m=+52.042726904" lastFinishedPulling="2026-03-14 00:14:56.474475657 +0000 UTC m=+59.068656007" observedRunningTime="2026-03-14 00:14:57.463857746 +0000 UTC m=+60.058038108" watchObservedRunningTime="2026-03-14 00:15:01.622064862 +0000 UTC m=+64.216245188" Mar 14 00:15:01.929803 containerd[1934]: 2026-03-14 00:15:01.681 [WARNING][6042] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"d4e6f04d-05da-499d-9d04-d1cc54b42952", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"3a49916aa2bbf14260e14b1b8b0cd1b3f41d09d079b3b5cbf3aeafa1df39cf3d", Pod:"coredns-7d764666f9-v8blg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a1e0477228", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:01.929803 containerd[1934]: 2026-03-14 00:15:01.681 [INFO][6042] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Mar 14 00:15:01.929803 containerd[1934]: 2026-03-14 00:15:01.681 [INFO][6042] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" iface="eth0" netns="" Mar 14 00:15:01.929803 containerd[1934]: 2026-03-14 00:15:01.681 [INFO][6042] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Mar 14 00:15:01.929803 containerd[1934]: 2026-03-14 00:15:01.681 [INFO][6042] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Mar 14 00:15:01.929803 containerd[1934]: 2026-03-14 00:15:01.865 [INFO][6049] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" HandleID="k8s-pod-network.68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" Mar 14 00:15:01.929803 containerd[1934]: 2026-03-14 00:15:01.866 [INFO][6049] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:01.929803 containerd[1934]: 2026-03-14 00:15:01.866 [INFO][6049] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:01.929803 containerd[1934]: 2026-03-14 00:15:01.897 [WARNING][6049] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" HandleID="k8s-pod-network.68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" Mar 14 00:15:01.929803 containerd[1934]: 2026-03-14 00:15:01.897 [INFO][6049] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" HandleID="k8s-pod-network.68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--v8blg-eth0" Mar 14 00:15:01.929803 containerd[1934]: 2026-03-14 00:15:01.910 [INFO][6049] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:01.929803 containerd[1934]: 2026-03-14 00:15:01.924 [INFO][6042] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc" Mar 14 00:15:01.929803 containerd[1934]: time="2026-03-14T00:15:01.929859104Z" level=info msg="TearDown network for sandbox \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\" successfully" Mar 14 00:15:01.951448 containerd[1934]: time="2026-03-14T00:15:01.951128132Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:01.951448 containerd[1934]: time="2026-03-14T00:15:01.951241028Z" level=info msg="RemovePodSandbox \"68f79e747d17e00a50a749012c13fb4bbd450ff9d66a4e609920fac20591f3dc\" returns successfully" Mar 14 00:15:01.953186 containerd[1934]: time="2026-03-14T00:15:01.952634516Z" level=info msg="StopPodSandbox for \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\"" Mar 14 00:15:02.392859 containerd[1934]: 2026-03-14 00:15:02.121 [WARNING][6069] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"bc321aa2-bc1d-4b5f-9fdf-44eb597d2609", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee", Pod:"coredns-7d764666f9-fzzfp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64e12ef821e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:02.392859 containerd[1934]: 2026-03-14 00:15:02.122 [INFO][6069] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Mar 14 00:15:02.392859 containerd[1934]: 2026-03-14 00:15:02.122 [INFO][6069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" iface="eth0" netns="" Mar 14 00:15:02.392859 containerd[1934]: 2026-03-14 00:15:02.122 [INFO][6069] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Mar 14 00:15:02.392859 containerd[1934]: 2026-03-14 00:15:02.122 [INFO][6069] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Mar 14 00:15:02.392859 containerd[1934]: 2026-03-14 00:15:02.315 [INFO][6077] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" HandleID="k8s-pod-network.a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" Mar 14 00:15:02.392859 containerd[1934]: 2026-03-14 00:15:02.321 [INFO][6077] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:02.392859 containerd[1934]: 2026-03-14 00:15:02.328 [INFO][6077] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:02.392859 containerd[1934]: 2026-03-14 00:15:02.361 [WARNING][6077] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" HandleID="k8s-pod-network.a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" Mar 14 00:15:02.392859 containerd[1934]: 2026-03-14 00:15:02.362 [INFO][6077] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" HandleID="k8s-pod-network.a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" Mar 14 00:15:02.392859 containerd[1934]: 2026-03-14 00:15:02.366 [INFO][6077] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:02.392859 containerd[1934]: 2026-03-14 00:15:02.380 [INFO][6069] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Mar 14 00:15:02.394257 containerd[1934]: time="2026-03-14T00:15:02.393322362Z" level=info msg="TearDown network for sandbox \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\" successfully" Mar 14 00:15:02.394257 containerd[1934]: time="2026-03-14T00:15:02.393362214Z" level=info msg="StopPodSandbox for \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\" returns successfully" Mar 14 00:15:02.397544 containerd[1934]: time="2026-03-14T00:15:02.396036990Z" level=info msg="RemovePodSandbox for \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\"" Mar 14 00:15:02.397544 containerd[1934]: time="2026-03-14T00:15:02.396113526Z" level=info msg="Forcibly stopping sandbox \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\"" Mar 14 00:15:02.806769 containerd[1934]: 2026-03-14 00:15:02.591 [WARNING][6091] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"bc321aa2-bc1d-4b5f-9fdf-44eb597d2609", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.March, 14, 0, 14, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"89670e183635c0fa37a514aff84d77e6213888974f7f1cff9f744ee459c506ee", Pod:"coredns-7d764666f9-fzzfp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali64e12ef821e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 14 00:15:02.806769 containerd[1934]: 2026-03-14 00:15:02.591 [INFO][6091] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Mar 14 00:15:02.806769 containerd[1934]: 2026-03-14 00:15:02.591 [INFO][6091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" iface="eth0" netns="" Mar 14 00:15:02.806769 containerd[1934]: 2026-03-14 00:15:02.591 [INFO][6091] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Mar 14 00:15:02.806769 containerd[1934]: 2026-03-14 00:15:02.591 [INFO][6091] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Mar 14 00:15:02.806769 containerd[1934]: 2026-03-14 00:15:02.723 [INFO][6098] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" HandleID="k8s-pod-network.a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" Mar 14 00:15:02.806769 containerd[1934]: 2026-03-14 00:15:02.725 [INFO][6098] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 14 00:15:02.806769 containerd[1934]: 2026-03-14 00:15:02.725 [INFO][6098] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 14 00:15:02.806769 containerd[1934]: 2026-03-14 00:15:02.772 [WARNING][6098] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" HandleID="k8s-pod-network.a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" Mar 14 00:15:02.806769 containerd[1934]: 2026-03-14 00:15:02.773 [INFO][6098] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" HandleID="k8s-pod-network.a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Workload="ip--172--31--18--130-k8s-coredns--7d764666f9--fzzfp-eth0" Mar 14 00:15:02.806769 containerd[1934]: 2026-03-14 00:15:02.784 [INFO][6098] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 14 00:15:02.806769 containerd[1934]: 2026-03-14 00:15:02.795 [INFO][6091] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78" Mar 14 00:15:02.811733 containerd[1934]: time="2026-03-14T00:15:02.809655824Z" level=info msg="TearDown network for sandbox \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\" successfully" Mar 14 00:15:02.827479 containerd[1934]: time="2026-03-14T00:15:02.826495904Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 14 00:15:02.827809 containerd[1934]: time="2026-03-14T00:15:02.827755412Z" level=info msg="RemovePodSandbox \"a431307b7b7251c3a4eda10fb440e64e32c6d612edf5b431a59a1862d2f12f78\" returns successfully" Mar 14 00:15:04.201299 containerd[1934]: time="2026-03-14T00:15:04.200916055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:04.205597 containerd[1934]: time="2026-03-14T00:15:04.205525795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=49189955" Mar 14 00:15:04.206343 containerd[1934]: time="2026-03-14T00:15:04.206245711Z" level=info msg="ImageCreate event name:\"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:04.216376 containerd[1934]: time="2026-03-14T00:15:04.214584199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:04.219025 containerd[1934]: time="2026-03-14T00:15:04.218966419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"50587448\" in 5.517422343s" Mar 14 00:15:04.219307 containerd[1934]: time="2026-03-14T00:15:04.219210271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:e80fe1ce4f06b0791c077492cd9d5ebf00125a02bbafdcd04d2a64e10cc4ad95\"" Mar 14 00:15:04.222485 containerd[1934]: time="2026-03-14T00:15:04.222294091Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 14 00:15:04.288981 containerd[1934]: time="2026-03-14T00:15:04.288923995Z" level=info msg="CreateContainer within sandbox \"c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 14 00:15:04.314607 containerd[1934]: time="2026-03-14T00:15:04.314163176Z" level=info msg="CreateContainer within sandbox \"c9c797587882950647fbb84eaec3649cbb99980e8d8f67562afad878f17ccdc2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3fa40e10b3139525fc7e9aa6e715b47fa09aa6b4ba3ebfd772d88cc4eef35939\"" Mar 14 00:15:04.320188 containerd[1934]: time="2026-03-14T00:15:04.317953484Z" level=info msg="StartContainer for \"3fa40e10b3139525fc7e9aa6e715b47fa09aa6b4ba3ebfd772d88cc4eef35939\"" Mar 14 00:15:04.416774 systemd[1]: Started cri-containerd-3fa40e10b3139525fc7e9aa6e715b47fa09aa6b4ba3ebfd772d88cc4eef35939.scope - libcontainer container 3fa40e10b3139525fc7e9aa6e715b47fa09aa6b4ba3ebfd772d88cc4eef35939. Mar 14 00:15:04.560956 systemd[1]: Started sshd@7-172.31.18.130:22-68.220.241.50:54998.service - OpenSSH per-connection server daemon (68.220.241.50:54998). Mar 14 00:15:04.712230 containerd[1934]: time="2026-03-14T00:15:04.712158490Z" level=info msg="StartContainer for \"3fa40e10b3139525fc7e9aa6e715b47fa09aa6b4ba3ebfd772d88cc4eef35939\" returns successfully" Mar 14 00:15:05.125262 sshd[6153]: Accepted publickey for core from 68.220.241.50 port 54998 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI Mar 14 00:15:05.134892 sshd[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:15:05.147037 systemd-logind[1913]: New session 8 of user core. Mar 14 00:15:05.152736 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 14 00:15:05.742477 kubelet[3153]: I0314 00:15:05.737582 3153 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-646cdb9884-pj2xg" podStartSLOduration=28.058500151 podStartE2EDuration="41.737565347s" podCreationTimestamp="2026-03-14 00:14:24 +0000 UTC" firstStartedPulling="2026-03-14 00:14:50.542199187 +0000 UTC m=+53.136379525" lastFinishedPulling="2026-03-14 00:15:04.221264323 +0000 UTC m=+66.815444721" observedRunningTime="2026-03-14 00:15:05.592770346 +0000 UTC m=+68.186950708" watchObservedRunningTime="2026-03-14 00:15:05.737565347 +0000 UTC m=+68.331745685" Mar 14 00:15:05.828374 sshd[6153]: pam_unix(sshd:session): session closed for user core Mar 14 00:15:05.837229 systemd[1]: sshd@7-172.31.18.130:22-68.220.241.50:54998.service: Deactivated successfully. Mar 14 00:15:05.845364 systemd[1]: session-8.scope: Deactivated successfully. Mar 14 00:15:05.851338 systemd-logind[1913]: Session 8 logged out. Waiting for processes to exit. Mar 14 00:15:05.855160 systemd-logind[1913]: Removed session 8. 
Mar 14 00:15:05.952007 containerd[1934]: time="2026-03-14T00:15:05.951346668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:05.954016 containerd[1934]: time="2026-03-14T00:15:05.953889336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=5882804" Mar 14 00:15:05.956608 containerd[1934]: time="2026-03-14T00:15:05.956512920Z" level=info msg="ImageCreate event name:\"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:05.959550 containerd[1934]: time="2026-03-14T00:15:05.959410716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:05.961808 containerd[1934]: time="2026-03-14T00:15:05.961161732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7280321\" in 1.738355301s" Mar 14 00:15:05.961808 containerd[1934]: time="2026-03-14T00:15:05.961221540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:51af4e9dcdb93e51b26a4a6f99272ec2df8de1aef256bb746f2c7c844b8e7b2c\"" Mar 14 00:15:05.964646 containerd[1934]: time="2026-03-14T00:15:05.964049904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 14 00:15:05.972926 containerd[1934]: time="2026-03-14T00:15:05.972873996Z" level=info msg="CreateContainer within sandbox 
\"9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Mar 14 00:15:06.008410 containerd[1934]: time="2026-03-14T00:15:06.008148572Z" level=info msg="CreateContainer within sandbox \"9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"72f3db66c178a5d97013c2277b5a529c0e1ab893b90fde9163c86c8e5e5e20ac\""
Mar 14 00:15:06.011425 containerd[1934]: time="2026-03-14T00:15:06.010887716Z" level=info msg="StartContainer for \"72f3db66c178a5d97013c2277b5a529c0e1ab893b90fde9163c86c8e5e5e20ac\""
Mar 14 00:15:06.102310 systemd[1]: Started cri-containerd-72f3db66c178a5d97013c2277b5a529c0e1ab893b90fde9163c86c8e5e5e20ac.scope - libcontainer container 72f3db66c178a5d97013c2277b5a529c0e1ab893b90fde9163c86c8e5e5e20ac.
Mar 14 00:15:06.217841 containerd[1934]: time="2026-03-14T00:15:06.217763817Z" level=info msg="StartContainer for \"72f3db66c178a5d97013c2277b5a529c0e1ab893b90fde9163c86c8e5e5e20ac\" returns successfully"
Mar 14 00:15:07.822758 containerd[1934]: time="2026-03-14T00:15:07.822352825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:07.825955 containerd[1934]: time="2026-03-14T00:15:07.825866905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=13766291"
Mar 14 00:15:07.827924 containerd[1934]: time="2026-03-14T00:15:07.827748157Z" level=info msg="ImageCreate event name:\"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:07.834296 containerd[1934]: time="2026-03-14T00:15:07.834199813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:07.837032 containerd[1934]: time="2026-03-14T00:15:07.836824825Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"15163768\" in 1.872678213s"
Mar 14 00:15:07.837032 containerd[1934]: time="2026-03-14T00:15:07.836890897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:8195c49a3b504e7ef58a8fc9a0e9ae66ae6ae90ef4998c04591be9588e8fa07e\""
Mar 14 00:15:07.841008 containerd[1934]: time="2026-03-14T00:15:07.839862877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Mar 14 00:15:07.848751 containerd[1934]: time="2026-03-14T00:15:07.848478445Z" level=info msg="CreateContainer within sandbox \"53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 14 00:15:07.876195 containerd[1934]: time="2026-03-14T00:15:07.875837797Z" level=info msg="CreateContainer within sandbox \"53ca381c0e927fa2414e734098259d1959c0b91a0293351bb8ae5e5644e28710\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e5a7ce37aea40f1dafcc42e06b14626f7f2fa22636f1146a7d108cdcbae4325e\""
Mar 14 00:15:07.877137 containerd[1934]: time="2026-03-14T00:15:07.877081621Z" level=info msg="StartContainer for \"e5a7ce37aea40f1dafcc42e06b14626f7f2fa22636f1146a7d108cdcbae4325e\""
Mar 14 00:15:07.992610 systemd[1]: Started cri-containerd-e5a7ce37aea40f1dafcc42e06b14626f7f2fa22636f1146a7d108cdcbae4325e.scope - libcontainer container e5a7ce37aea40f1dafcc42e06b14626f7f2fa22636f1146a7d108cdcbae4325e.
Mar 14 00:15:08.067478 containerd[1934]: time="2026-03-14T00:15:08.067218178Z" level=info msg="StartContainer for \"e5a7ce37aea40f1dafcc42e06b14626f7f2fa22636f1146a7d108cdcbae4325e\" returns successfully"
Mar 14 00:15:08.592873 kubelet[3153]: I0314 00:15:08.592777 3153 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-45kq2" podStartSLOduration=27.279418183 podStartE2EDuration="44.592761157s" podCreationTimestamp="2026-03-14 00:14:24 +0000 UTC" firstStartedPulling="2026-03-14 00:14:50.526048795 +0000 UTC m=+53.120229133" lastFinishedPulling="2026-03-14 00:15:07.839391709 +0000 UTC m=+70.433572107" observedRunningTime="2026-03-14 00:15:08.592755337 +0000 UTC m=+71.186935663" watchObservedRunningTime="2026-03-14 00:15:08.592761157 +0000 UTC m=+71.186941519"
Mar 14 00:15:08.902919 kubelet[3153]: I0314 00:15:08.902859 3153 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 14 00:15:08.903121 kubelet[3153]: I0314 00:15:08.902942 3153 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 14 00:15:10.017616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount955471626.mount: Deactivated successfully.
Mar 14 00:15:10.043573 containerd[1934]: time="2026-03-14T00:15:10.043509348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:10.045282 containerd[1934]: time="2026-03-14T00:15:10.045227304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=16426594"
Mar 14 00:15:10.047178 containerd[1934]: time="2026-03-14T00:15:10.046984032Z" level=info msg="ImageCreate event name:\"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:10.052735 containerd[1934]: time="2026-03-14T00:15:10.052528620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:15:10.056685 containerd[1934]: time="2026-03-14T00:15:10.056398104Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"16426424\" in 2.216454611s"
Mar 14 00:15:10.056685 containerd[1934]: time="2026-03-14T00:15:10.056526108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:19fab8e13a4d97732973f299576e43f89b889ceff6e3768f711f30e6ace1c662\""
Mar 14 00:15:10.065418 containerd[1934]: time="2026-03-14T00:15:10.065348400Z" level=info msg="CreateContainer within sandbox \"9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Mar 14 00:15:10.090138 containerd[1934]: time="2026-03-14T00:15:10.089926080Z" level=info msg="CreateContainer within sandbox \"9efd7984d13f861f5d5070c3e2293ea46502853e8d42a9faa4d880267be77b28\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d45edffbdbf04f8a4747c595f9fe0e2e3349a942ecfee0557ed4ea7581fdd3f2\""
Mar 14 00:15:10.093579 containerd[1934]: time="2026-03-14T00:15:10.092712324Z" level=info msg="StartContainer for \"d45edffbdbf04f8a4747c595f9fe0e2e3349a942ecfee0557ed4ea7581fdd3f2\""
Mar 14 00:15:10.163762 systemd[1]: Started cri-containerd-d45edffbdbf04f8a4747c595f9fe0e2e3349a942ecfee0557ed4ea7581fdd3f2.scope - libcontainer container d45edffbdbf04f8a4747c595f9fe0e2e3349a942ecfee0557ed4ea7581fdd3f2.
Mar 14 00:15:10.241807 containerd[1934]: time="2026-03-14T00:15:10.241323229Z" level=info msg="StartContainer for \"d45edffbdbf04f8a4747c595f9fe0e2e3349a942ecfee0557ed4ea7581fdd3f2\" returns successfully"
Mar 14 00:15:10.601882 kubelet[3153]: I0314 00:15:10.601744 3153 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-667f9f4987-qrmz9" podStartSLOduration=4.35171372 podStartE2EDuration="23.601694367s" podCreationTimestamp="2026-03-14 00:14:47 +0000 UTC" firstStartedPulling="2026-03-14 00:14:50.807840237 +0000 UTC m=+53.402020575" lastFinishedPulling="2026-03-14 00:15:10.057820884 +0000 UTC m=+72.652001222" observedRunningTime="2026-03-14 00:15:10.600640311 +0000 UTC m=+73.194820661" watchObservedRunningTime="2026-03-14 00:15:10.601694367 +0000 UTC m=+73.195874729"
Mar 14 00:15:10.923254 systemd[1]: Started sshd@8-172.31.18.130:22-68.220.241.50:55000.service - OpenSSH per-connection server daemon (68.220.241.50:55000).
Mar 14 00:15:11.444398 sshd[6344]: Accepted publickey for core from 68.220.241.50 port 55000 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:15:11.448582 sshd[6344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:11.458850 systemd-logind[1913]: New session 9 of user core.
Mar 14 00:15:11.465788 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 14 00:15:11.967918 sshd[6344]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:11.973892 systemd[1]: sshd@8-172.31.18.130:22-68.220.241.50:55000.service: Deactivated successfully.
Mar 14 00:15:11.978360 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 00:15:11.982113 systemd-logind[1913]: Session 9 logged out. Waiting for processes to exit.
Mar 14 00:15:11.985017 systemd-logind[1913]: Removed session 9.
Mar 14 00:15:17.082005 systemd[1]: Started sshd@9-172.31.18.130:22-68.220.241.50:48776.service - OpenSSH per-connection server daemon (68.220.241.50:48776).
Mar 14 00:15:17.630502 sshd[6395]: Accepted publickey for core from 68.220.241.50 port 48776 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:15:17.633998 sshd[6395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:17.641652 systemd-logind[1913]: New session 10 of user core.
Mar 14 00:15:17.647734 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 14 00:15:18.155882 sshd[6395]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:18.163286 systemd[1]: sshd@9-172.31.18.130:22-68.220.241.50:48776.service: Deactivated successfully.
Mar 14 00:15:18.167526 systemd[1]: session-10.scope: Deactivated successfully.
Mar 14 00:15:18.170697 systemd-logind[1913]: Session 10 logged out. Waiting for processes to exit.
Mar 14 00:15:18.173043 systemd-logind[1913]: Removed session 10.
Mar 14 00:15:23.248585 systemd[1]: Started sshd@10-172.31.18.130:22-68.220.241.50:39640.service - OpenSSH per-connection server daemon (68.220.241.50:39640).
Mar 14 00:15:23.740560 sshd[6432]: Accepted publickey for core from 68.220.241.50 port 39640 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:15:23.743690 sshd[6432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:23.752651 systemd-logind[1913]: New session 11 of user core.
Mar 14 00:15:23.757737 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 14 00:15:24.274792 sshd[6432]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:24.280208 systemd-logind[1913]: Session 11 logged out. Waiting for processes to exit.
Mar 14 00:15:24.281179 systemd[1]: sshd@10-172.31.18.130:22-68.220.241.50:39640.service: Deactivated successfully.
Mar 14 00:15:24.286354 systemd[1]: session-11.scope: Deactivated successfully.
Mar 14 00:15:24.291185 systemd-logind[1913]: Removed session 11.
Mar 14 00:15:29.384991 systemd[1]: Started sshd@11-172.31.18.130:22-68.220.241.50:39644.service - OpenSSH per-connection server daemon (68.220.241.50:39644).
Mar 14 00:15:29.950159 sshd[6495]: Accepted publickey for core from 68.220.241.50 port 39644 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:15:29.953085 sshd[6495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:29.962545 systemd-logind[1913]: New session 12 of user core.
Mar 14 00:15:29.969744 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 00:15:30.498173 sshd[6495]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:30.505347 systemd-logind[1913]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:15:30.505942 systemd[1]: sshd@11-172.31.18.130:22-68.220.241.50:39644.service: Deactivated successfully.
Mar 14 00:15:30.511009 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:15:30.515296 systemd-logind[1913]: Removed session 12.
Mar 14 00:15:30.584984 systemd[1]: Started sshd@12-172.31.18.130:22-68.220.241.50:39646.service - OpenSSH per-connection server daemon (68.220.241.50:39646).
Mar 14 00:15:31.089820 sshd[6509]: Accepted publickey for core from 68.220.241.50 port 39646 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:15:31.093118 sshd[6509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:31.101265 systemd-logind[1913]: New session 13 of user core.
Mar 14 00:15:31.112741 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:15:31.681010 sshd[6509]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:31.689622 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:15:31.692883 systemd-logind[1913]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:15:31.694176 systemd[1]: sshd@12-172.31.18.130:22-68.220.241.50:39646.service: Deactivated successfully.
Mar 14 00:15:31.700662 systemd-logind[1913]: Removed session 13.
Mar 14 00:15:31.782031 systemd[1]: Started sshd@13-172.31.18.130:22-68.220.241.50:39658.service - OpenSSH per-connection server daemon (68.220.241.50:39658).
Mar 14 00:15:32.306490 sshd[6536]: Accepted publickey for core from 68.220.241.50 port 39658 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:15:32.309233 sshd[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:32.322671 systemd-logind[1913]: New session 14 of user core.
Mar 14 00:15:32.329798 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:15:32.817390 sshd[6536]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:32.825620 systemd[1]: sshd@13-172.31.18.130:22-68.220.241.50:39658.service: Deactivated successfully.
Mar 14 00:15:32.832907 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:15:32.835367 systemd-logind[1913]: Session 14 logged out. Waiting for processes to exit.
Mar 14 00:15:32.840498 systemd-logind[1913]: Removed session 14.
Mar 14 00:15:35.589180 systemd[1]: run-containerd-runc-k8s.io-3fa40e10b3139525fc7e9aa6e715b47fa09aa6b4ba3ebfd772d88cc4eef35939-runc.LdNZpi.mount: Deactivated successfully.
Mar 14 00:15:37.918307 systemd[1]: Started sshd@14-172.31.18.130:22-68.220.241.50:42898.service - OpenSSH per-connection server daemon (68.220.241.50:42898).
Mar 14 00:15:38.429843 sshd[6588]: Accepted publickey for core from 68.220.241.50 port 42898 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:15:38.435085 sshd[6588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:38.444545 systemd-logind[1913]: New session 15 of user core.
Mar 14 00:15:38.449744 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 00:15:38.931307 sshd[6588]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:38.938998 systemd-logind[1913]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:15:38.941040 systemd[1]: sshd@14-172.31.18.130:22-68.220.241.50:42898.service: Deactivated successfully.
Mar 14 00:15:38.945387 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:15:38.947989 systemd-logind[1913]: Removed session 15.
Mar 14 00:15:44.021927 systemd[1]: Started sshd@15-172.31.18.130:22-68.220.241.50:58208.service - OpenSSH per-connection server daemon (68.220.241.50:58208).
Mar 14 00:15:44.523507 sshd[6601]: Accepted publickey for core from 68.220.241.50 port 58208 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:15:44.525499 sshd[6601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:44.533622 systemd-logind[1913]: New session 16 of user core.
Mar 14 00:15:44.545806 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:15:45.000944 sshd[6601]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:45.008908 systemd-logind[1913]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:15:45.010770 systemd[1]: sshd@15-172.31.18.130:22-68.220.241.50:58208.service: Deactivated successfully.
Mar 14 00:15:45.015579 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:15:45.017927 systemd-logind[1913]: Removed session 16.
Mar 14 00:15:45.098997 systemd[1]: Started sshd@16-172.31.18.130:22-68.220.241.50:58220.service - OpenSSH per-connection server daemon (68.220.241.50:58220).
Mar 14 00:15:45.602488 sshd[6614]: Accepted publickey for core from 68.220.241.50 port 58220 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:15:45.604548 sshd[6614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:45.612921 systemd-logind[1913]: New session 17 of user core.
Mar 14 00:15:45.618733 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:15:46.492124 sshd[6614]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:46.498713 systemd[1]: sshd@16-172.31.18.130:22-68.220.241.50:58220.service: Deactivated successfully.
Mar 14 00:15:46.503007 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:15:46.506401 systemd-logind[1913]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:15:46.509325 systemd-logind[1913]: Removed session 17.
Mar 14 00:15:46.602027 systemd[1]: Started sshd@17-172.31.18.130:22-68.220.241.50:58234.service - OpenSSH per-connection server daemon (68.220.241.50:58234).
Mar 14 00:15:47.155590 sshd[6646]: Accepted publickey for core from 68.220.241.50 port 58234 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:15:47.158475 sshd[6646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:47.168190 systemd-logind[1913]: New session 18 of user core.
Mar 14 00:15:47.179757 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:15:48.541305 sshd[6646]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:48.550782 systemd-logind[1913]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:15:48.553042 systemd[1]: sshd@17-172.31.18.130:22-68.220.241.50:58234.service: Deactivated successfully.
Mar 14 00:15:48.560687 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:15:48.564471 systemd-logind[1913]: Removed session 18.
Mar 14 00:15:48.633069 systemd[1]: Started sshd@18-172.31.18.130:22-68.220.241.50:58238.service - OpenSSH per-connection server daemon (68.220.241.50:58238).
Mar 14 00:15:49.143054 sshd[6670]: Accepted publickey for core from 68.220.241.50 port 58238 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:15:49.146167 sshd[6670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:49.155914 systemd-logind[1913]: New session 19 of user core.
Mar 14 00:15:49.164741 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:15:49.892096 sshd[6670]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:49.898963 systemd-logind[1913]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:15:49.902152 systemd[1]: sshd@18-172.31.18.130:22-68.220.241.50:58238.service: Deactivated successfully.
Mar 14 00:15:49.908538 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:15:49.911014 systemd-logind[1913]: Removed session 19.
Mar 14 00:15:49.988026 systemd[1]: Started sshd@19-172.31.18.130:22-68.220.241.50:58244.service - OpenSSH per-connection server daemon (68.220.241.50:58244).
Mar 14 00:15:50.501169 sshd[6684]: Accepted publickey for core from 68.220.241.50 port 58244 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:15:50.502821 sshd[6684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:50.510591 systemd-logind[1913]: New session 20 of user core.
Mar 14 00:15:50.518824 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:15:50.975878 sshd[6684]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:50.983606 systemd[1]: sshd@19-172.31.18.130:22-68.220.241.50:58244.service: Deactivated successfully.
Mar 14 00:15:50.993673 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:15:50.997905 systemd-logind[1913]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:15:51.001709 systemd-logind[1913]: Removed session 20.
Mar 14 00:15:56.073997 systemd[1]: Started sshd@20-172.31.18.130:22-68.220.241.50:58572.service - OpenSSH per-connection server daemon (68.220.241.50:58572).
Mar 14 00:15:56.576951 sshd[6718]: Accepted publickey for core from 68.220.241.50 port 58572 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:15:56.578800 sshd[6718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:15:56.586640 systemd-logind[1913]: New session 21 of user core.
Mar 14 00:15:56.596805 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 00:15:57.060830 sshd[6718]: pam_unix(sshd:session): session closed for user core
Mar 14 00:15:57.069380 systemd[1]: sshd@20-172.31.18.130:22-68.220.241.50:58572.service: Deactivated successfully.
Mar 14 00:15:57.073233 systemd[1]: session-21.scope: Deactivated successfully.
Mar 14 00:15:57.076180 systemd-logind[1913]: Session 21 logged out. Waiting for processes to exit.
Mar 14 00:15:57.078364 systemd-logind[1913]: Removed session 21.
Mar 14 00:16:02.175057 systemd[1]: Started sshd@21-172.31.18.130:22-68.220.241.50:40678.service - OpenSSH per-connection server daemon (68.220.241.50:40678).
Mar 14 00:16:02.725345 sshd[6735]: Accepted publickey for core from 68.220.241.50 port 40678 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:02.729225 sshd[6735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:02.738952 systemd-logind[1913]: New session 22 of user core.
Mar 14 00:16:02.742742 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 00:16:03.231775 sshd[6735]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:03.238337 systemd-logind[1913]: Session 22 logged out. Waiting for processes to exit.
Mar 14 00:16:03.241782 systemd[1]: sshd@21-172.31.18.130:22-68.220.241.50:40678.service: Deactivated successfully.
Mar 14 00:16:03.246119 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 00:16:03.250592 systemd-logind[1913]: Removed session 22.
Mar 14 00:16:08.324226 systemd[1]: Started sshd@22-172.31.18.130:22-68.220.241.50:40684.service - OpenSSH per-connection server daemon (68.220.241.50:40684).
Mar 14 00:16:08.830472 sshd[6770]: Accepted publickey for core from 68.220.241.50 port 40684 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:08.832338 sshd[6770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:08.846536 systemd-logind[1913]: New session 23 of user core.
Mar 14 00:16:08.851805 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 14 00:16:09.349880 sshd[6770]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:09.360805 systemd-logind[1913]: Session 23 logged out. Waiting for processes to exit.
Mar 14 00:16:09.361825 systemd[1]: sshd@22-172.31.18.130:22-68.220.241.50:40684.service: Deactivated successfully.
Mar 14 00:16:09.367859 systemd[1]: session-23.scope: Deactivated successfully.
Mar 14 00:16:09.371566 systemd-logind[1913]: Removed session 23.
Mar 14 00:16:14.463200 systemd[1]: Started sshd@23-172.31.18.130:22-68.220.241.50:34220.service - OpenSSH per-connection server daemon (68.220.241.50:34220).
Mar 14 00:16:15.007258 sshd[6795]: Accepted publickey for core from 68.220.241.50 port 34220 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:15.010032 sshd[6795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:15.020763 systemd-logind[1913]: New session 24 of user core.
Mar 14 00:16:15.032073 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 14 00:16:15.521519 sshd[6795]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:15.528393 systemd[1]: sshd@23-172.31.18.130:22-68.220.241.50:34220.service: Deactivated successfully.
Mar 14 00:16:15.532908 systemd[1]: session-24.scope: Deactivated successfully.
Mar 14 00:16:15.536012 systemd-logind[1913]: Session 24 logged out. Waiting for processes to exit.
Mar 14 00:16:15.539637 systemd-logind[1913]: Removed session 24.
Mar 14 00:16:20.608003 systemd[1]: Started sshd@24-172.31.18.130:22-68.220.241.50:34236.service - OpenSSH per-connection server daemon (68.220.241.50:34236).
Mar 14 00:16:21.114345 sshd[6849]: Accepted publickey for core from 68.220.241.50 port 34236 ssh2: RSA SHA256:wTcZPyU9bRq4OYS8Q3ttppxvBQbw+A1YvhVCQAQQbeI
Mar 14 00:16:21.116150 sshd[6849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:16:21.124572 systemd-logind[1913]: New session 25 of user core.
Mar 14 00:16:21.133786 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 14 00:16:21.593601 sshd[6849]: pam_unix(sshd:session): session closed for user core
Mar 14 00:16:21.601185 systemd[1]: sshd@24-172.31.18.130:22-68.220.241.50:34236.service: Deactivated successfully.
Mar 14 00:16:21.606617 systemd[1]: session-25.scope: Deactivated successfully.
Mar 14 00:16:21.608418 systemd-logind[1913]: Session 25 logged out. Waiting for processes to exit.
Mar 14 00:16:21.610414 systemd-logind[1913]: Removed session 25.
Mar 14 00:16:39.429400 systemd[1]: cri-containerd-369ba1cafe629032a6a3eee129d60a65fa8e6a8fb9baca78dbd1b5fe1ed61fc7.scope: Deactivated successfully.
Mar 14 00:16:39.429913 systemd[1]: cri-containerd-369ba1cafe629032a6a3eee129d60a65fa8e6a8fb9baca78dbd1b5fe1ed61fc7.scope: Consumed 23.519s CPU time.
Mar 14 00:16:39.470800 systemd[1]: cri-containerd-1db24d5b3c9c60b5337de72e17a015a90450d95bdad048e2404ba7f0e2554368.scope: Deactivated successfully.
Mar 14 00:16:39.473825 systemd[1]: cri-containerd-1db24d5b3c9c60b5337de72e17a015a90450d95bdad048e2404ba7f0e2554368.scope: Consumed 4.236s CPU time, 17.9M memory peak, 0B memory swap peak.
Mar 14 00:16:39.514748 containerd[1934]: time="2026-03-14T00:16:39.512643916Z" level=info msg="shim disconnected" id=369ba1cafe629032a6a3eee129d60a65fa8e6a8fb9baca78dbd1b5fe1ed61fc7 namespace=k8s.io
Mar 14 00:16:39.514748 containerd[1934]: time="2026-03-14T00:16:39.514608304Z" level=warning msg="cleaning up after shim disconnected" id=369ba1cafe629032a6a3eee129d60a65fa8e6a8fb9baca78dbd1b5fe1ed61fc7 namespace=k8s.io
Mar 14 00:16:39.515762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-369ba1cafe629032a6a3eee129d60a65fa8e6a8fb9baca78dbd1b5fe1ed61fc7-rootfs.mount: Deactivated successfully.
Mar 14 00:16:39.516873 containerd[1934]: time="2026-03-14T00:16:39.514637380Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:16:39.557610 containerd[1934]: time="2026-03-14T00:16:39.556881857Z" level=info msg="shim disconnected" id=1db24d5b3c9c60b5337de72e17a015a90450d95bdad048e2404ba7f0e2554368 namespace=k8s.io
Mar 14 00:16:39.557610 containerd[1934]: time="2026-03-14T00:16:39.556963289Z" level=warning msg="cleaning up after shim disconnected" id=1db24d5b3c9c60b5337de72e17a015a90450d95bdad048e2404ba7f0e2554368 namespace=k8s.io
Mar 14 00:16:39.557610 containerd[1934]: time="2026-03-14T00:16:39.556985273Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:16:39.558411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1db24d5b3c9c60b5337de72e17a015a90450d95bdad048e2404ba7f0e2554368-rootfs.mount: Deactivated successfully.
Mar 14 00:16:39.880789 kubelet[3153]: I0314 00:16:39.880491 3153 scope.go:122] "RemoveContainer" containerID="369ba1cafe629032a6a3eee129d60a65fa8e6a8fb9baca78dbd1b5fe1ed61fc7"
Mar 14 00:16:39.886567 kubelet[3153]: I0314 00:16:39.886010 3153 scope.go:122] "RemoveContainer" containerID="1db24d5b3c9c60b5337de72e17a015a90450d95bdad048e2404ba7f0e2554368"
Mar 14 00:16:39.888611 containerd[1934]: time="2026-03-14T00:16:39.888044790Z" level=info msg="CreateContainer within sandbox \"e3e4a5398604fca96ca49dac194f74608ae429625cc8e43b43916769cad21c87\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Mar 14 00:16:39.894894 containerd[1934]: time="2026-03-14T00:16:39.894811146Z" level=info msg="CreateContainer within sandbox \"da0d0197b828f2db236bf476e3dbbd4144157123c50cbba5e93be2467d47b896\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Mar 14 00:16:39.930807 containerd[1934]: time="2026-03-14T00:16:39.930726547Z" level=info msg="CreateContainer within sandbox \"e3e4a5398604fca96ca49dac194f74608ae429625cc8e43b43916769cad21c87\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"774c30079258be0a75e715a55823091647f5e66de59729ad6def4eefc89e4097\""
Mar 14 00:16:39.931912 containerd[1934]: time="2026-03-14T00:16:39.931860235Z" level=info msg="StartContainer for \"774c30079258be0a75e715a55823091647f5e66de59729ad6def4eefc89e4097\""
Mar 14 00:16:39.963867 containerd[1934]: time="2026-03-14T00:16:39.963806743Z" level=info msg="CreateContainer within sandbox \"da0d0197b828f2db236bf476e3dbbd4144157123c50cbba5e93be2467d47b896\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"82d518ae502695b7b9ad2a413889ab21230a8f034057f148d08e64ad9ae4e2e5\""
Mar 14 00:16:39.966575 containerd[1934]: time="2026-03-14T00:16:39.964833547Z" level=info msg="StartContainer for \"82d518ae502695b7b9ad2a413889ab21230a8f034057f148d08e64ad9ae4e2e5\""
Mar 14 00:16:40.004559 systemd[1]: Started cri-containerd-774c30079258be0a75e715a55823091647f5e66de59729ad6def4eefc89e4097.scope - libcontainer container 774c30079258be0a75e715a55823091647f5e66de59729ad6def4eefc89e4097.
Mar 14 00:16:40.034725 systemd[1]: Started cri-containerd-82d518ae502695b7b9ad2a413889ab21230a8f034057f148d08e64ad9ae4e2e5.scope - libcontainer container 82d518ae502695b7b9ad2a413889ab21230a8f034057f148d08e64ad9ae4e2e5.
Mar 14 00:16:40.081318 containerd[1934]: time="2026-03-14T00:16:40.081126687Z" level=info msg="StartContainer for \"774c30079258be0a75e715a55823091647f5e66de59729ad6def4eefc89e4097\" returns successfully"
Mar 14 00:16:40.123468 containerd[1934]: time="2026-03-14T00:16:40.123373563Z" level=info msg="StartContainer for \"82d518ae502695b7b9ad2a413889ab21230a8f034057f148d08e64ad9ae4e2e5\" returns successfully"
Mar 14 00:16:40.394077 kubelet[3153]: E0314 00:16:40.391018 3153 controller.go:251] "Failed to update lease" err="Put \"https://172.31.18.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-130?timeout=10s\": context deadline exceeded"
Mar 14 00:16:40.520387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2090366956.mount: Deactivated successfully.
Mar 14 00:16:43.508599 systemd[1]: cri-containerd-f15529d790a54875a9511117797dc6fba044928126632977689333e2667881eb.scope: Deactivated successfully.
Mar 14 00:16:43.509954 systemd[1]: cri-containerd-f15529d790a54875a9511117797dc6fba044928126632977689333e2667881eb.scope: Consumed 2.698s CPU time, 13.3M memory peak, 0B memory swap peak.
Mar 14 00:16:43.565481 containerd[1934]: time="2026-03-14T00:16:43.563915385Z" level=info msg="shim disconnected" id=f15529d790a54875a9511117797dc6fba044928126632977689333e2667881eb namespace=k8s.io
Mar 14 00:16:43.565481 containerd[1934]: time="2026-03-14T00:16:43.564000969Z" level=warning msg="cleaning up after shim disconnected" id=f15529d790a54875a9511117797dc6fba044928126632977689333e2667881eb namespace=k8s.io
Mar 14 00:16:43.565481 containerd[1934]: time="2026-03-14T00:16:43.564034269Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:16:43.570196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f15529d790a54875a9511117797dc6fba044928126632977689333e2667881eb-rootfs.mount: Deactivated successfully.
Mar 14 00:16:43.908555 kubelet[3153]: I0314 00:16:43.908119 3153 scope.go:122] "RemoveContainer" containerID="f15529d790a54875a9511117797dc6fba044928126632977689333e2667881eb"
Mar 14 00:16:43.912859 containerd[1934]: time="2026-03-14T00:16:43.912755962Z" level=info msg="CreateContainer within sandbox \"d76f540614f4bac28ca4c2c7a9a935d885efb5e2453824c919bca03e3d9f6cc6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Mar 14 00:16:43.943053 containerd[1934]: time="2026-03-14T00:16:43.942741982Z" level=info msg="CreateContainer within sandbox \"d76f540614f4bac28ca4c2c7a9a935d885efb5e2453824c919bca03e3d9f6cc6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9e90cf824ab0e088f788166687425f5d2649da7b4d5b27c50021db07614611ac\""
Mar 14 00:16:43.945467 containerd[1934]: time="2026-03-14T00:16:43.943845586Z" level=info msg="StartContainer for \"9e90cf824ab0e088f788166687425f5d2649da7b4d5b27c50021db07614611ac\""
Mar 14 00:16:44.006770 systemd[1]: Started cri-containerd-9e90cf824ab0e088f788166687425f5d2649da7b4d5b27c50021db07614611ac.scope - libcontainer container 9e90cf824ab0e088f788166687425f5d2649da7b4d5b27c50021db07614611ac.
Mar 14 00:16:44.076636 containerd[1934]: time="2026-03-14T00:16:44.076570999Z" level=info msg="StartContainer for \"9e90cf824ab0e088f788166687425f5d2649da7b4d5b27c50021db07614611ac\" returns successfully"
Mar 14 00:16:50.395228 kubelet[3153]: E0314 00:16:50.394871 3153 controller.go:251] "Failed to update lease" err="Put \"https://172.31.18.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-130?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 14 00:16:52.046583 systemd[1]: cri-containerd-774c30079258be0a75e715a55823091647f5e66de59729ad6def4eefc89e4097.scope: Deactivated successfully.
Mar 14 00:16:52.096982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-774c30079258be0a75e715a55823091647f5e66de59729ad6def4eefc89e4097-rootfs.mount: Deactivated successfully.
Mar 14 00:16:52.108840 containerd[1934]: time="2026-03-14T00:16:52.108753387Z" level=info msg="shim disconnected" id=774c30079258be0a75e715a55823091647f5e66de59729ad6def4eefc89e4097 namespace=k8s.io
Mar 14 00:16:52.108840 containerd[1934]: time="2026-03-14T00:16:52.108829491Z" level=warning msg="cleaning up after shim disconnected" id=774c30079258be0a75e715a55823091647f5e66de59729ad6def4eefc89e4097 namespace=k8s.io
Mar 14 00:16:52.110132 containerd[1934]: time="2026-03-14T00:16:52.108851523Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:16:52.943763 kubelet[3153]: I0314 00:16:52.943714 3153 scope.go:122] "RemoveContainer" containerID="369ba1cafe629032a6a3eee129d60a65fa8e6a8fb9baca78dbd1b5fe1ed61fc7"
Mar 14 00:16:52.944485 kubelet[3153]: I0314 00:16:52.944249 3153 scope.go:122] "RemoveContainer" containerID="774c30079258be0a75e715a55823091647f5e66de59729ad6def4eefc89e4097"
Mar 14 00:16:52.944585 kubelet[3153]: E0314 00:16:52.944487 3153 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-qcdxv_tigera-operator(725d0b19-c966-43a4-9e56-5aec24257e7c)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-qcdxv" podUID="725d0b19-c966-43a4-9e56-5aec24257e7c"
Mar 14 00:16:52.947938 containerd[1934]: time="2026-03-14T00:16:52.947471659Z" level=info msg="RemoveContainer for \"369ba1cafe629032a6a3eee129d60a65fa8e6a8fb9baca78dbd1b5fe1ed61fc7\""
Mar 14 00:16:52.956873 containerd[1934]: time="2026-03-14T00:16:52.956822635Z" level=info msg="RemoveContainer for \"369ba1cafe629032a6a3eee129d60a65fa8e6a8fb9baca78dbd1b5fe1ed61fc7\" returns successfully"