Aug 13 00:19:56.237403 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Aug 13 00:19:56.237468 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025 Aug 13 00:19:56.237495 kernel: KASLR disabled due to lack of seed Aug 13 00:19:56.237512 kernel: efi: EFI v2.7 by EDK II Aug 13 00:19:56.237528 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18 Aug 13 00:19:56.237544 kernel: ACPI: Early table checksum verification disabled Aug 13 00:19:56.237562 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Aug 13 00:19:56.237578 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Aug 13 00:19:56.237595 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Aug 13 00:19:56.237611 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Aug 13 00:19:56.237631 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Aug 13 00:19:56.237647 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Aug 13 00:19:56.237663 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Aug 13 00:19:56.237679 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Aug 13 00:19:56.237698 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Aug 13 00:19:56.237718 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Aug 13 00:19:56.237736 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Aug 13 00:19:56.237753 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Aug 13 00:19:56.237769 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Aug 13 00:19:56.237786 kernel: printk: bootconsole [uart0] enabled Aug 13 00:19:56.237802 kernel: NUMA: Failed to initialise from firmware Aug 13 00:19:56.237819 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Aug 13 00:19:56.237836 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Aug 13 00:19:56.237852 kernel: Zone ranges: Aug 13 00:19:56.237869 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Aug 13 00:19:56.237885 kernel: DMA32 empty Aug 13 00:19:56.237906 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Aug 13 00:19:56.237923 kernel: Movable zone start for each node Aug 13 00:19:56.237939 kernel: Early memory node ranges Aug 13 00:19:56.237956 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Aug 13 00:19:56.237972 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Aug 13 00:19:56.237988 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Aug 13 00:19:56.238005 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Aug 13 00:19:56.238021 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Aug 13 00:19:56.245551 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Aug 13 00:19:56.245584 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Aug 13 00:19:56.245602 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Aug 13 00:19:56.245619 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Aug 13 00:19:56.245645 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Aug 13 00:19:56.245663 kernel: psci: probing for conduit method from ACPI. Aug 13 00:19:56.245687 kernel: psci: PSCIv1.0 detected in firmware. Aug 13 00:19:56.245705 kernel: psci: Using standard PSCI v0.2 function IDs Aug 13 00:19:56.245723 kernel: psci: Trusted OS migration not required Aug 13 00:19:56.245745 kernel: psci: SMC Calling Convention v1.1 Aug 13 00:19:56.245764 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Aug 13 00:19:56.245781 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Aug 13 00:19:56.245799 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Aug 13 00:19:56.245817 kernel: pcpu-alloc: [0] 0 [0] 1 Aug 13 00:19:56.245835 kernel: Detected PIPT I-cache on CPU0 Aug 13 00:19:56.245853 kernel: CPU features: detected: GIC system register CPU interface Aug 13 00:19:56.245870 kernel: CPU features: detected: Spectre-v2 Aug 13 00:19:56.245888 kernel: CPU features: detected: Spectre-v3a Aug 13 00:19:56.245905 kernel: CPU features: detected: Spectre-BHB Aug 13 00:19:56.245923 kernel: CPU features: detected: ARM erratum 1742098 Aug 13 00:19:56.245946 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Aug 13 00:19:56.245965 kernel: alternatives: applying boot alternatives Aug 13 00:19:56.245986 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a Aug 13 00:19:56.246007 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 13 00:19:56.246025 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 00:19:56.247083 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 00:19:56.247106 kernel: Fallback order for Node 0: 0 Aug 13 00:19:56.247125 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Aug 13 00:19:56.247143 kernel: Policy zone: Normal Aug 13 00:19:56.247160 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 00:19:56.247178 kernel: software IO TLB: area num 2. Aug 13 00:19:56.247203 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Aug 13 00:19:56.247223 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved) Aug 13 00:19:56.247241 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 00:19:56.247259 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 00:19:56.247278 kernel: rcu: RCU event tracing is enabled. Aug 13 00:19:56.247296 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 00:19:56.247315 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 00:19:56.247334 kernel: Tracing variant of Tasks RCU enabled. Aug 13 00:19:56.247352 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:19:56.247369 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 00:19:56.247388 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Aug 13 00:19:56.247412 kernel: GICv3: 96 SPIs implemented Aug 13 00:19:56.247430 kernel: GICv3: 0 Extended SPIs implemented Aug 13 00:19:56.247449 kernel: Root IRQ handler: gic_handle_irq Aug 13 00:19:56.247467 kernel: GICv3: GICv3 features: 16 PPIs Aug 13 00:19:56.247485 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Aug 13 00:19:56.247504 kernel: ITS [mem 0x10080000-0x1009ffff] Aug 13 00:19:56.247522 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Aug 13 00:19:56.247541 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Aug 13 00:19:56.247559 kernel: GICv3: using LPI property table @0x00000004000d0000 Aug 13 00:19:56.247577 kernel: ITS: Using hypervisor restricted LPI range [128] Aug 13 00:19:56.247595 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Aug 13 00:19:56.247613 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 00:19:56.247635 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Aug 13 00:19:56.247653 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Aug 13 00:19:56.247671 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Aug 13 00:19:56.247689 kernel: Console: colour dummy device 80x25 Aug 13 00:19:56.247708 kernel: printk: console [tty1] enabled Aug 13 00:19:56.247726 kernel: ACPI: Core revision 20230628 Aug 13 00:19:56.247744 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Aug 13 00:19:56.247762 kernel: pid_max: default: 32768 minimum: 301 Aug 13 00:19:56.247781 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Aug 13 00:19:56.247804 kernel: landlock: Up and running. Aug 13 00:19:56.247823 kernel: SELinux: Initializing. Aug 13 00:19:56.247842 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:19:56.247860 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 00:19:56.247879 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:19:56.247898 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 00:19:56.247916 kernel: rcu: Hierarchical SRCU implementation. Aug 13 00:19:56.247934 kernel: rcu: Max phase no-delay instances is 400. Aug 13 00:19:56.247952 kernel: Platform MSI: ITS@0x10080000 domain created Aug 13 00:19:56.247974 kernel: PCI/MSI: ITS@0x10080000 domain created Aug 13 00:19:56.247992 kernel: Remapping and enabling EFI services. Aug 13 00:19:56.248010 kernel: smp: Bringing up secondary CPUs ... Aug 13 00:19:56.248028 kernel: Detected PIPT I-cache on CPU1 Aug 13 00:19:56.249092 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Aug 13 00:19:56.249113 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Aug 13 00:19:56.249132 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Aug 13 00:19:56.249150 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 00:19:56.249168 kernel: SMP: Total of 2 processors activated. 
Aug 13 00:19:56.249186 kernel: CPU features: detected: 32-bit EL0 Support Aug 13 00:19:56.249211 kernel: CPU features: detected: 32-bit EL1 Support Aug 13 00:19:56.249230 kernel: CPU features: detected: CRC32 instructions Aug 13 00:19:56.249259 kernel: CPU: All CPU(s) started at EL1 Aug 13 00:19:56.249282 kernel: alternatives: applying system-wide alternatives Aug 13 00:19:56.249301 kernel: devtmpfs: initialized Aug 13 00:19:56.249320 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 00:19:56.249339 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 00:19:56.249358 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 00:19:56.249378 kernel: SMBIOS 3.0.0 present. Aug 13 00:19:56.249401 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Aug 13 00:19:56.249420 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 00:19:56.249459 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Aug 13 00:19:56.249481 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Aug 13 00:19:56.249500 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Aug 13 00:19:56.249519 kernel: audit: initializing netlink subsys (disabled) Aug 13 00:19:56.249538 kernel: audit: type=2000 audit(0.289:1): state=initialized audit_enabled=0 res=1 Aug 13 00:19:56.249562 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 00:19:56.249581 kernel: cpuidle: using governor menu Aug 13 00:19:56.249600 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Aug 13 00:19:56.249619 kernel: ASID allocator initialised with 65536 entries Aug 13 00:19:56.249638 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 00:19:56.249657 kernel: Serial: AMBA PL011 UART driver Aug 13 00:19:56.249676 kernel: Modules: 17488 pages in range for non-PLT usage Aug 13 00:19:56.249694 kernel: Modules: 509008 pages in range for PLT usage Aug 13 00:19:56.249714 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 00:19:56.249738 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 00:19:56.249758 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Aug 13 00:19:56.249777 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Aug 13 00:19:56.249796 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 00:19:56.249815 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 00:19:56.249834 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Aug 13 00:19:56.249852 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Aug 13 00:19:56.249871 kernel: ACPI: Added _OSI(Module Device) Aug 13 00:19:56.249890 kernel: ACPI: Added _OSI(Processor Device) Aug 13 00:19:56.249913 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 00:19:56.249934 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 00:19:56.249953 kernel: ACPI: Interpreter enabled Aug 13 00:19:56.249972 kernel: ACPI: Using GIC for interrupt routing Aug 13 00:19:56.249992 kernel: ACPI: MCFG table detected, 1 entries Aug 13 00:19:56.250011 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Aug 13 00:19:56.250383 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 00:19:56.250612 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Aug 13 00:19:56.250824 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Aug 13 00:19:56.254078 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Aug 13 00:19:56.254350 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Aug 13 00:19:56.254377 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Aug 13 00:19:56.254397 kernel: acpiphp: Slot [1] registered Aug 13 00:19:56.254416 kernel: acpiphp: Slot [2] registered Aug 13 00:19:56.254435 kernel: acpiphp: Slot [3] registered Aug 13 00:19:56.254453 kernel: acpiphp: Slot [4] registered Aug 13 00:19:56.254481 kernel: acpiphp: Slot [5] registered Aug 13 00:19:56.254500 kernel: acpiphp: Slot [6] registered Aug 13 00:19:56.254519 kernel: acpiphp: Slot [7] registered Aug 13 00:19:56.254537 kernel: acpiphp: Slot [8] registered Aug 13 00:19:56.254556 kernel: acpiphp: Slot [9] registered Aug 13 00:19:56.254574 kernel: acpiphp: Slot [10] registered Aug 13 00:19:56.254593 kernel: acpiphp: Slot [11] registered Aug 13 00:19:56.254611 kernel: acpiphp: Slot [12] registered Aug 13 00:19:56.254629 kernel: acpiphp: Slot [13] registered Aug 13 00:19:56.254647 kernel: acpiphp: Slot [14] registered Aug 13 00:19:56.254670 kernel: acpiphp: Slot [15] registered Aug 13 00:19:56.254689 kernel: acpiphp: Slot [16] registered Aug 13 00:19:56.254707 kernel: acpiphp: Slot [17] registered Aug 13 00:19:56.254725 kernel: acpiphp: Slot [18] registered Aug 13 00:19:56.254744 kernel: acpiphp: Slot [19] registered Aug 13 00:19:56.254762 kernel: acpiphp: Slot [20] registered Aug 13 00:19:56.254780 kernel: acpiphp: Slot [21] registered Aug 13 00:19:56.254799 kernel: acpiphp: Slot [22] registered Aug 13 00:19:56.254817 kernel: acpiphp: Slot [23] registered Aug 13 00:19:56.254839 kernel: acpiphp: Slot [24] registered Aug 13 00:19:56.254859 kernel: acpiphp: Slot [25] registered Aug 13 00:19:56.254877 kernel: acpiphp: Slot [26] registered Aug 13 00:19:56.254895 kernel: acpiphp: Slot [27] registered Aug 13 00:19:56.254914 kernel: acpiphp: Slot [28] registered Aug 13 00:19:56.254932 kernel: acpiphp: Slot [29] registered Aug 13 00:19:56.254950 kernel: acpiphp: Slot [30] registered Aug 13 00:19:56.254969 kernel: acpiphp: Slot [31] registered Aug 13 00:19:56.254987 kernel: PCI host bridge to bus 0000:00 Aug 13 00:19:56.255262 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Aug 13 00:19:56.255459 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Aug 13 00:19:56.255663 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Aug 13 00:19:56.255858 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Aug 13 00:19:56.258255 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Aug 13 00:19:56.258531 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Aug 13 00:19:56.258806 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Aug 13 00:19:56.259102 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Aug 13 00:19:56.259334 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Aug 13 00:19:56.262720 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 13 00:19:56.263012 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Aug 13 00:19:56.263273 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Aug 13 00:19:56.263507 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Aug 13 00:19:56.263737 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Aug 13 00:19:56.263993 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 13 00:19:56.264246 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Aug 13 00:19:56.264478 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Aug 13 00:19:56.264699 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Aug 13 00:19:56.264924 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Aug 13 00:19:56.269959 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Aug 13 00:19:56.270276 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Aug 13 00:19:56.270478 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Aug 13 00:19:56.280168 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Aug 13 00:19:56.280216 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Aug 13 00:19:56.280238 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Aug 13 00:19:56.280258 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Aug 13 00:19:56.280277 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Aug 13 00:19:56.280296 kernel: iommu: Default domain type: Translated Aug 13 00:19:56.280315 kernel: iommu: DMA domain TLB invalidation policy: strict mode Aug 13 00:19:56.280344 kernel: efivars: Registered efivars operations Aug 13 00:19:56.280363 kernel: vgaarb: loaded Aug 13 00:19:56.280381 kernel: clocksource: Switched to clocksource arch_sys_counter Aug 13 00:19:56.280401 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 00:19:56.280419 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 00:19:56.280438 kernel: pnp: PnP ACPI init Aug 13 00:19:56.280671 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Aug 13 00:19:56.280701 kernel: pnp: PnP ACPI: found 1 devices Aug 13 00:19:56.280726 kernel: NET: Registered PF_INET protocol family Aug 13 00:19:56.280746 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 00:19:56.280766 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 00:19:56.280785 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 00:19:56.280805 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 00:19:56.280824 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 13 00:19:56.280843 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 00:19:56.280862 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:19:56.280881 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 00:19:56.280905 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 00:19:56.280924 kernel: PCI: CLS 0 bytes, default 64 Aug 13 00:19:56.280943 kernel: kvm [1]: HYP mode not available Aug 13 00:19:56.280962 kernel: Initialise system trusted keyrings Aug 13 00:19:56.280981 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 00:19:56.281000 kernel: Key type asymmetric registered Aug 13 00:19:56.281018 kernel: Asymmetric key parser 'x509' registered Aug 13 00:19:56.281063 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 00:19:56.281087 kernel: io scheduler mq-deadline registered Aug 13 00:19:56.281113 kernel: io scheduler kyber registered Aug 13 00:19:56.281132 kernel: io scheduler bfq registered Aug 13 00:19:56.281366 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Aug 13 00:19:56.281396 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Aug 13 00:19:56.281415 kernel: ACPI: button: Power Button [PWRB] Aug 13 00:19:56.281455 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Aug 13 00:19:56.281477 kernel: ACPI: button: Sleep Button [SLPB] Aug 13 00:19:56.281496 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 00:19:56.281521 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Aug 13 00:19:56.281746 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Aug 13 00:19:56.281775 kernel: printk: console [ttyS0] disabled Aug 13 00:19:56.281795 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Aug 13 00:19:56.281814 kernel: printk: console [ttyS0] enabled Aug 13 00:19:56.281833 kernel: printk: bootconsole [uart0] disabled Aug 13 00:19:56.281852 kernel: thunder_xcv, ver 1.0 Aug 13 00:19:56.281871 kernel: thunder_bgx, ver 1.0 Aug 13 00:19:56.281889 kernel: nicpf, ver 1.0 Aug 13 00:19:56.281913 kernel: nicvf, ver 1.0 Aug 13 00:19:56.284264 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 13 00:19:56.284492 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:19:55 UTC (1755044395) Aug 13 00:19:56.284519 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 13 00:19:56.284539 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Aug 13 00:19:56.284559 kernel: watchdog: Delayed init of the lockup detector failed: -19 Aug 13 00:19:56.284578 kernel: watchdog: Hard watchdog permanently disabled Aug 13 00:19:56.284596 kernel: NET: Registered PF_INET6 protocol family Aug 13 00:19:56.284624 kernel: Segment Routing with IPv6 Aug 13 00:19:56.284644 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 00:19:56.284662 kernel: NET: Registered PF_PACKET protocol family Aug 13 00:19:56.284681 kernel: Key type dns_resolver registered Aug 13 00:19:56.284700 kernel: registered taskstats version 1 Aug 13 00:19:56.284718 kernel: Loading compiled-in X.509 certificates Aug 13 00:19:56.284737 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6' Aug 13 00:19:56.284755 kernel: Key type .fscrypt registered Aug 13 00:19:56.284774 kernel: Key type fscrypt-provisioning registered Aug 13 00:19:56.284797 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:19:56.284817 kernel: ima: Allocated hash algorithm: sha1 Aug 13 00:19:56.284835 kernel: ima: No architecture policies found Aug 13 00:19:56.284854 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 13 00:19:56.284872 kernel: clk: Disabling unused clocks Aug 13 00:19:56.284891 kernel: Freeing unused kernel memory: 39424K Aug 13 00:19:56.284909 kernel: Run /init as init process Aug 13 00:19:56.284927 kernel: with arguments: Aug 13 00:19:56.284946 kernel: /init Aug 13 00:19:56.284964 kernel: with environment: Aug 13 00:19:56.284987 kernel: HOME=/ Aug 13 00:19:56.285005 kernel: TERM=linux Aug 13 00:19:56.285024 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 00:19:56.285068 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:19:56.285095 systemd[1]: Detected virtualization amazon. Aug 13 00:19:56.285116 systemd[1]: Detected architecture arm64. Aug 13 00:19:56.285136 systemd[1]: Running in initrd. Aug 13 00:19:56.285162 systemd[1]: No hostname configured, using default hostname. Aug 13 00:19:56.285183 systemd[1]: Hostname set to <localhost>. Aug 13 00:19:56.285204 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:19:56.285224 systemd[1]: Queued start job for default target initrd.target. Aug 13 00:19:56.285245 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:19:56.285266 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:19:56.285288 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 13 00:19:56.285309 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:19:56.285334 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 00:19:56.285355 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 00:19:56.285379 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 00:19:56.285400 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 00:19:56.285421 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:19:56.285462 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:19:56.285484 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:19:56.285511 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:19:56.285532 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:19:56.285552 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:19:56.285573 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:19:56.285593 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:19:56.285614 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:19:56.285635 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 00:19:56.285655 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:19:56.285676 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:19:56.285702 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:19:56.285722 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:19:56.285743 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 00:19:56.285763 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:19:56.285784 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 00:19:56.285804 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 00:19:56.285825 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:19:56.285845 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:19:56.285870 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:19:56.285891 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 00:19:56.285950 systemd-journald[251]: Collecting audit messages is disabled. Aug 13 00:19:56.285995 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:19:56.286021 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 00:19:56.288109 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:19:56.288145 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:19:56.288168 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:19:56.288192 systemd-journald[251]: Journal started Aug 13 00:19:56.288242 systemd-journald[251]: Runtime Journal (/run/log/journal/ec28fc96351396a29d843edc37c508b9) is 8.0M, max 75.3M, 67.3M free. Aug 13 00:19:56.267093 systemd-modules-load[252]: Inserted module 'overlay' Aug 13 00:19:56.314887 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:19:56.314957 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 00:19:56.310249 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:19:56.323076 kernel: Bridge firewalling registered Aug 13 00:19:56.324156 systemd-modules-load[252]: Inserted module 'br_netfilter' Aug 13 00:19:56.330481 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:19:56.331801 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:19:56.337103 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:19:56.349757 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:19:56.380455 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:19:56.404109 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:19:56.409010 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:19:56.425451 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:19:56.431507 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:19:56.440978 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Aug 13 00:19:56.474081 dracut-cmdline[288]: dracut-dracut-053 Aug 13 00:19:56.484073 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a Aug 13 00:19:56.525415 systemd-resolved[286]: Positive Trust Anchors: Aug 13 00:19:56.529701 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:19:56.529774 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:19:56.607253 kernel: SCSI subsystem initialized Aug 13 00:19:56.614160 kernel: Loading iSCSI transport class v2.0-870. Aug 13 00:19:56.627262 kernel: iscsi: registered transport (tcp) Aug 13 00:19:56.649799 kernel: iscsi: registered transport (qla4xxx) Aug 13 00:19:56.649875 kernel: QLogic iSCSI HBA Driver Aug 13 00:19:56.742083 kernel: random: crng init done Aug 13 00:19:56.742448 systemd-resolved[286]: Defaulting to hostname 'linux'. Aug 13 00:19:56.749141 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:19:56.760093 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:19:56.769723 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 00:19:56.781415 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 00:19:56.825278 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 00:19:56.825380 kernel: device-mapper: uevent: version 1.0.3 Aug 13 00:19:56.827250 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 13 00:19:56.894083 kernel: raid6: neonx8 gen() 6698 MB/s Aug 13 00:19:56.911074 kernel: raid6: neonx4 gen() 6503 MB/s Aug 13 00:19:56.928073 kernel: raid6: neonx2 gen() 5440 MB/s Aug 13 00:19:56.945073 kernel: raid6: neonx1 gen() 3941 MB/s Aug 13 00:19:56.962074 kernel: raid6: int64x8 gen() 3795 MB/s Aug 13 00:19:56.979072 kernel: raid6: int64x4 gen() 3716 MB/s Aug 13 00:19:56.996073 kernel: raid6: int64x2 gen() 3606 MB/s Aug 13 00:19:57.014107 kernel: raid6: int64x1 gen() 2770 MB/s Aug 13 00:19:57.014148 kernel: raid6: using algorithm neonx8 gen() 6698 MB/s Aug 13 00:19:57.033068 kernel: raid6: .... xor() 4875 MB/s, rmw enabled Aug 13 00:19:57.033115 kernel: raid6: using neon recovery algorithm Aug 13 00:19:57.041074 kernel: xor: measuring software checksum speed Aug 13 00:19:57.043333 kernel: 8regs : 10188 MB/sec Aug 13 00:19:57.043366 kernel: 32regs : 11900 MB/sec Aug 13 00:19:57.044616 kernel: arm64_neon : 9490 MB/sec Aug 13 00:19:57.044649 kernel: xor: using function: 32regs (11900 MB/sec) Aug 13 00:19:57.131084 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 00:19:57.151094 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:19:57.171328 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:19:57.205847 systemd-udevd[469]: Using default interface naming scheme 'v255'. Aug 13 00:19:57.213610 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:19:57.232432 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 00:19:57.262121 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Aug 13 00:19:57.320533 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:19:57.335472 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:19:57.447335 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:19:57.467599 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 00:19:57.512829 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 00:19:57.515236 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:19:57.515727 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:19:57.516469 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:19:57.534593 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 00:19:57.578565 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:19:57.661722 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Aug 13 00:19:57.661786 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Aug 13 00:19:57.667674 kernel: ena 0000:00:05.0: ENA device version: 0.10 Aug 13 00:19:57.668124 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Aug 13 00:19:57.679170 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:19:57.685359 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:ac:af:d2:95:93 Aug 13 00:19:57.679417 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:19:57.686267 (udev-worker)[528]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:19:57.693203 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:19:57.700540 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:19:57.700843 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:19:57.705206 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:19:57.728973 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Aug 13 00:19:57.729131 kernel: nvme nvme0: pci function 0000:00:04.0 Aug 13 00:19:57.730021 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:19:57.745211 kernel: nvme nvme0: 2/0/0 default/read/poll queues Aug 13 00:19:57.760221 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 00:19:57.760290 kernel: GPT:9289727 != 16777215 Aug 13 00:19:57.760327 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 00:19:57.760352 kernel: GPT:9289727 != 16777215 Aug 13 00:19:57.760377 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 00:19:57.760401 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 13 00:19:57.774211 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:19:57.791389 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 00:19:57.838657 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:19:57.852444 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (516) Aug 13 00:19:57.901084 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (517) Aug 13 00:19:57.971057 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Aug 13 00:19:58.006266 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Aug 13 00:19:58.022490 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 13 00:19:58.039344 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Aug 13 00:19:58.042253 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Aug 13 00:19:58.063440 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 00:19:58.078808 disk-uuid[660]: Primary Header is updated. Aug 13 00:19:58.078808 disk-uuid[660]: Secondary Entries is updated. Aug 13 00:19:58.078808 disk-uuid[660]: Secondary Header is updated. Aug 13 00:19:58.094074 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 13 00:19:58.102070 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 13 00:19:58.112095 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 13 00:19:59.111105 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 13 00:19:59.112537 disk-uuid[661]: The operation has completed successfully. Aug 13 00:19:59.298763 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 00:19:59.298976 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 00:19:59.344340 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 00:19:59.356567 sh[1005]: Success Aug 13 00:19:59.375126 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 13 00:19:59.500546 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 00:19:59.505168 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 00:19:59.526137 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Aug 13 00:19:59.552785 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982 Aug 13 00:19:59.552854 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:19:59.552881 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 13 00:19:59.554673 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 13 00:19:59.557200 kernel: BTRFS info (device dm-0): using free space tree Aug 13 00:19:59.695086 kernel: BTRFS info (device dm-0): enabling ssd optimizations Aug 13 00:19:59.731078 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 00:19:59.731613 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 00:19:59.742443 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 00:19:59.753138 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 00:19:59.785184 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:19:59.785273 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:19:59.785307 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 13 00:19:59.793096 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 13 00:19:59.813178 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:19:59.813958 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 00:19:59.826105 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 00:19:59.837401 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 00:19:59.953366 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:19:59.967361 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:20:00.035504 systemd-networkd[1197]: lo: Link UP Aug 13 00:20:00.035525 systemd-networkd[1197]: lo: Gained carrier Aug 13 00:20:00.038054 systemd-networkd[1197]: Enumeration completed Aug 13 00:20:00.039022 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:20:00.039029 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:20:00.040199 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:20:00.043235 systemd[1]: Reached target network.target - Network. Aug 13 00:20:00.064322 systemd-networkd[1197]: eth0: Link UP Aug 13 00:20:00.064335 systemd-networkd[1197]: eth0: Gained carrier Aug 13 00:20:00.064352 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:20:00.090121 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.31.162/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 13 00:20:00.318448 ignition[1108]: Ignition 2.19.0 Aug 13 00:20:00.318476 ignition[1108]: Stage: fetch-offline Aug 13 00:20:00.323913 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:20:00.320148 ignition[1108]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:20:00.332524 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 13 00:20:00.320173 ignition[1108]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:20:00.320946 ignition[1108]: Ignition finished successfully Aug 13 00:20:00.368485 ignition[1208]: Ignition 2.19.0 Aug 13 00:20:00.368517 ignition[1208]: Stage: fetch Aug 13 00:20:00.370183 ignition[1208]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:20:00.370210 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:20:00.370366 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:20:00.384001 ignition[1208]: PUT result: OK Aug 13 00:20:00.387467 ignition[1208]: parsed url from cmdline: "" Aug 13 00:20:00.387490 ignition[1208]: no config URL provided Aug 13 00:20:00.387506 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 00:20:00.387532 ignition[1208]: no config at "/usr/lib/ignition/user.ign" Aug 13 00:20:00.387563 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:20:00.397694 ignition[1208]: PUT result: OK Aug 13 00:20:00.397790 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Aug 13 00:20:00.402628 ignition[1208]: GET result: OK Aug 13 00:20:00.402816 ignition[1208]: parsing config with SHA512: ca743d7c40863a669fe7cbb137511733cddf421ccd7f029f7e1229816ed28971285e8b6bef117c9a7129ffb2067788050d0f043bdad676e9da423ac401cde1f5 Aug 13 00:20:00.414070 unknown[1208]: fetched base config from "system" Aug 13 00:20:00.417162 unknown[1208]: fetched base config from "system" Aug 13 00:20:00.417220 unknown[1208]: fetched user config from "aws" Aug 13 00:20:00.418389 ignition[1208]: fetch: fetch complete Aug 13 00:20:00.425483 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 00:20:00.418402 ignition[1208]: fetch: fetch passed Aug 13 00:20:00.418514 ignition[1208]: Ignition finished successfully Aug 13 00:20:00.440960 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 00:20:00.478142 ignition[1214]: Ignition 2.19.0 Aug 13 00:20:00.478169 ignition[1214]: Stage: kargs Aug 13 00:20:00.478796 ignition[1214]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:20:00.478822 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:20:00.478985 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:20:00.482208 ignition[1214]: PUT result: OK Aug 13 00:20:00.496450 ignition[1214]: kargs: kargs passed Aug 13 00:20:00.496568 ignition[1214]: Ignition finished successfully Aug 13 00:20:00.498896 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 00:20:00.522471 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 00:20:00.547534 ignition[1220]: Ignition 2.19.0 Aug 13 00:20:00.548081 ignition[1220]: Stage: disks Aug 13 00:20:00.548739 ignition[1220]: no configs at "/usr/lib/ignition/base.d" Aug 13 00:20:00.548764 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:20:00.548919 ignition[1220]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:20:00.559175 ignition[1220]: PUT result: OK Aug 13 00:20:00.568748 ignition[1220]: disks: disks passed Aug 13 00:20:00.568935 ignition[1220]: Ignition finished successfully Aug 13 00:20:00.571235 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 00:20:00.579837 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Aug 13 00:20:00.583504 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:20:00.592424 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:20:00.595105 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:20:00.597754 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:20:00.612553 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 00:20:00.656601 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 13 00:20:00.662232 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 00:20:00.673323 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 00:20:00.776095 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none. Aug 13 00:20:00.777631 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 00:20:00.782385 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 00:20:00.797322 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:20:00.804222 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 00:20:00.808256 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 00:20:00.808341 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 00:20:00.808391 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:20:00.838002 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 00:20:00.849348 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 00:20:00.866999 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1247) Aug 13 00:20:00.867083 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:20:00.867113 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:20:00.870762 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 13 00:20:00.876429 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 13 00:20:00.878238 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:20:01.270213 initrd-setup-root[1271]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 00:20:01.280882 initrd-setup-root[1278]: cut: /sysroot/etc/group: No such file or directory Aug 13 00:20:01.292445 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 00:20:01.301483 initrd-setup-root[1292]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 00:20:01.324225 systemd-networkd[1197]: eth0: Gained IPv6LL Aug 13 00:20:01.659754 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 00:20:01.671390 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 00:20:01.688365 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 00:20:01.699240 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:20:01.701126 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Aug 13 00:20:01.746263 ignition[1360]: INFO : Ignition 2.19.0 Aug 13 00:20:01.746263 ignition[1360]: INFO : Stage: mount Aug 13 00:20:01.746263 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:20:01.746263 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:20:01.747561 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 00:20:01.762786 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:20:01.766229 ignition[1360]: INFO : PUT result: OK Aug 13 00:20:01.771678 ignition[1360]: INFO : mount: mount passed Aug 13 00:20:01.771678 ignition[1360]: INFO : Ignition finished successfully Aug 13 00:20:01.779296 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 00:20:01.790525 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 00:20:01.822321 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 00:20:01.847802 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1374) Aug 13 00:20:01.847873 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d Aug 13 00:20:01.847901 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 13 00:20:01.850850 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 13 00:20:01.856076 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 13 00:20:01.860413 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 00:20:01.902169 ignition[1391]: INFO : Ignition 2.19.0 Aug 13 00:20:01.902169 ignition[1391]: INFO : Stage: files Aug 13 00:20:01.907662 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:20:01.907662 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:20:01.907662 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:20:01.907662 ignition[1391]: INFO : PUT result: OK Aug 13 00:20:01.920839 ignition[1391]: DEBUG : files: compiled without relabeling support, skipping Aug 13 00:20:01.925476 ignition[1391]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 00:20:01.925476 ignition[1391]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 00:20:01.970912 ignition[1391]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 00:20:01.974809 ignition[1391]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 00:20:01.978476 unknown[1391]: wrote ssh authorized keys file for user: core Aug 13 00:20:01.981423 ignition[1391]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 00:20:01.985333 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:20:01.985333 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 00:20:01.985333 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 13 00:20:01.985333 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 13 00:20:02.092290 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 
00:20:02.413673 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 13 00:20:02.420286 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 00:20:02.420286 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 00:20:02.420286 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:20:02.420286 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 00:20:02.420286 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:20:02.420286 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 00:20:02.420286 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:20:02.420286 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 00:20:02.420286 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:20:02.420286 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 00:20:02.420286 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:20:02.420286 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:20:02.420286 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:20:02.420286 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Aug 13 00:20:02.777295 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 00:20:03.144529 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Aug 13 00:20:03.144529 ignition[1391]: INFO : files: op(c): [started] processing unit "containerd.service" Aug 13 00:20:03.152888 ignition[1391]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:20:03.152888 ignition[1391]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 00:20:03.152888 ignition[1391]: INFO : files: op(c): [finished] processing unit "containerd.service" Aug 13 00:20:03.152888 ignition[1391]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Aug 13 
00:20:03.152888 ignition[1391]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:20:03.152888 ignition[1391]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 00:20:03.152888 ignition[1391]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Aug 13 00:20:03.152888 ignition[1391]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Aug 13 00:20:03.152888 ignition[1391]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 00:20:03.152888 ignition[1391]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:20:03.152888 ignition[1391]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 00:20:03.152888 ignition[1391]: INFO : files: files passed Aug 13 00:20:03.152888 ignition[1391]: INFO : Ignition finished successfully Aug 13 00:20:03.205566 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 00:20:03.217551 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 00:20:03.229737 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 00:20:03.241873 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 00:20:03.245575 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 00:20:03.270258 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:20:03.270258 initrd-setup-root-after-ignition[1419]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:20:03.280587 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 00:20:03.288692 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:20:03.292068 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 00:20:03.306485 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 00:20:03.366343 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:20:03.366553 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:20:03.369990 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:20:03.373303 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:20:03.375790 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:20:03.377592 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:20:03.428122 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:20:03.440374 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:20:03.468219 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:20:03.471287 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:20:03.474442 systemd[1]: Stopped target timers.target - Timer Units. 
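The files stage logged above is driven entirely by the Ignition config the instance fetched at first boot: it downloads the helm tarball, writes the install script and yaml manifests, adds a containerd drop-in, installs and presets prepare-helm.service, and records the outcome in /etc/.ignition-result.json (the /sysroot prefix appears because the real root has not yet been pivoted to). Below is a minimal sketch of a config that would produce operations like these, expressed as a Python dict for illustration; the field names follow the Ignition v3 spec, but the unit bodies and the exact file set are placeholders, not the config this instance actually booted with.

```python
import json

# Hypothetical reconstruction: an Ignition v3-style config that would log the
# kinds of operations seen above (file write from a URL, a symlink, a systemd
# drop-in, and a unit preset). Unit contents are placeholder text.
config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"
                },
            }
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "containerd.service",
                "dropins": [
                    {
                        "name": "10-use-cgroupfs.conf",
                        "contents": "[Service]\n# placeholder drop-in body\n",
                    }
                ],
            },
            {
                "name": "prepare-helm.service",
                "enabled": True,  # matches the "setting preset to enabled" entry
                "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",
            },
        ]
    },
}

print(json.dumps(config, indent=2))
```

Ignition applies a config of this shape from the initramfs, before switch-root, which is why every write in the log lands under /sysroot.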
Aug 13 00:20:03.483991 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:20:03.484262 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:20:03.488067 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:20:03.498947 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:20:03.501887 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 00:20:03.510171 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:20:03.513445 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:20:03.516756 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:20:03.528119 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:20:03.531805 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:20:03.540465 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:20:03.543674 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:20:03.550027 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:20:03.550295 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:20:03.556118 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:20:03.562227 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:20:03.565852 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 00:20:03.566186 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:20:03.570113 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:20:03.570444 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:20:03.589736 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:20:03.589988 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:20:03.593764 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:20:03.593977 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:20:03.617028 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:20:03.621412 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:20:03.632403 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:20:03.634328 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:20:03.646553 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:20:03.650547 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:20:03.669920 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:20:03.675370 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:20:03.687759 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:20:03.699875 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:20:03.702643 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Aug 13 00:20:03.708614 ignition[1443]: INFO : Ignition 2.19.0 Aug 13 00:20:03.708614 ignition[1443]: INFO : Stage: umount Aug 13 00:20:03.708614 ignition[1443]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:20:03.708614 ignition[1443]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:20:03.708614 ignition[1443]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:20:03.723256 ignition[1443]: INFO : PUT result: OK Aug 13 00:20:03.728963 ignition[1443]: INFO : umount: umount passed Aug 13 00:20:03.728963 ignition[1443]: INFO : Ignition finished successfully Aug 13 00:20:03.732920 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:20:03.733198 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:20:03.742598 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:20:03.742713 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:20:03.745539 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:20:03.745629 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:20:03.748379 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:20:03.748473 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 00:20:03.751173 systemd[1]: Stopped target network.target - Network. Aug 13 00:20:03.753849 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:20:03.753968 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:20:03.759319 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:20:03.762025 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:20:03.771961 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:20:03.775474 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:20:03.777778 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:20:03.784695 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:20:03.784778 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:20:03.789564 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:20:03.789637 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:20:03.792457 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:20:03.792546 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:20:03.795448 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:20:03.795529 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:20:03.798350 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:20:03.798429 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:20:03.801787 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:20:03.805136 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:20:03.830136 systemd-networkd[1197]: eth0: DHCPv6 lease lost Aug 13 00:20:03.839570 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:20:03.841440 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:20:03.872998 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:20:03.873122 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
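Each Ignition stage above (mount, files, and now umount) opens with the same pair of entries: `PUT http://169.254.169.254/latest/api/token` followed by `PUT result: OK`. That is the IMDSv2 handshake: a short-lived session token is obtained with a PUT, then presented as a header on the actual metadata requests. A minimal sketch of the same exchange using only the Python standard library (the 21600-second TTL and the instance-id path are illustrative choices, not values taken from this log):

```python
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: obtain a session token -- this is the PUT the Ignition log shows.
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
with urllib.request.urlopen(token_req, timeout=2) as resp:
    token = resp.read().decode()

# Step 2: present the token on an ordinary metadata GET.
meta_req = urllib.request.Request(
    f"{IMDS}/latest/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
)
with urllib.request.urlopen(meta_req, timeout=2) as resp:
    print(resp.read().decode())
```

The token-first design is what makes the agent log a PUT before every batch of fetches; an unauthenticated GET would be refused on an IMDSv2-only instance.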
Aug 13 00:20:03.888714 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:20:03.894128 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:20:03.894245 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:20:03.897710 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:20:03.901668 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:20:03.902092 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:20:03.931539 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:20:03.931726 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:20:03.941780 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:20:03.941903 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:20:03.943049 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:20:03.943144 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:20:03.966826 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:20:03.967541 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:20:03.979219 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:20:03.979314 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:20:03.982205 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:20:03.982272 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:20:03.985069 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:20:03.985159 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:20:03.988168 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:20:03.988254 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:20:04.012703 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:20:04.012807 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:20:04.024201 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:20:04.027196 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:20:04.027330 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:20:04.031917 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 00:20:04.032073 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:20:04.050465 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:20:04.050586 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:20:04.053921 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:20:04.054018 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:20:04.058329 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:20:04.058517 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:20:04.099858 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Aug 13 00:20:04.100291 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:20:04.108892 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:20:04.113320 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:20:04.160875 systemd[1]: Switching root. Aug 13 00:20:04.202767 systemd-journald[251]: Journal stopped Aug 13 00:20:06.708151 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Aug 13 00:20:06.708302 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:20:06.708343 kernel: SELinux: policy capability open_perms=1 Aug 13 00:20:06.708373 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:20:06.708404 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:20:06.708435 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:20:06.708467 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:20:06.708505 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:20:06.708536 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:20:06.708566 kernel: audit: type=1403 audit(1755044404.859:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:20:06.708608 systemd[1]: Successfully loaded SELinux policy in 84.313ms. Aug 13 00:20:06.708680 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.980ms. Aug 13 00:20:06.708722 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:20:06.708758 systemd[1]: Detected virtualization amazon. Aug 13 00:20:06.708789 systemd[1]: Detected architecture arm64. Aug 13 00:20:06.708823 systemd[1]: Detected first boot. Aug 13 00:20:06.708858 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:20:06.708890 zram_generator::config[1502]: No configuration found. Aug 13 00:20:06.708926 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:20:06.708956 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:20:06.708998 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Aug 13 00:20:06.709144 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:20:06.709188 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:20:06.709221 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:20:06.709260 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:20:06.712172 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:20:06.712231 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:20:06.712266 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:20:06.712301 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:20:06.712333 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:20:06.712365 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Aug 13 00:20:06.712396 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:20:06.712434 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:20:06.712467 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:20:06.712504 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:20:06.712534 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 00:20:06.712566 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:20:06.712599 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:20:06.712632 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:20:06.712664 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:20:06.712696 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:20:06.712734 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:20:06.712764 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:20:06.712796 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:20:06.712826 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:20:06.712855 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 00:20:06.712887 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:20:06.712919 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:20:06.712954 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:20:06.712986 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:20:06.713023 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:20:06.713079 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:20:06.713111 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:20:06.713144 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:20:06.713177 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 00:20:06.713206 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:20:06.713236 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:20:06.713266 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:20:06.713322 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:20:06.713359 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:20:06.713392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:20:06.713422 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:20:06.713464 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:20:06.713497 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:20:06.713529 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Aug 13 00:20:06.713561 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:20:06.713591 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 00:20:06.713628 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Aug 13 00:20:06.713658 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:20:06.713687 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:20:06.713717 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:20:06.713749 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:20:06.713779 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:20:06.713810 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:20:06.713842 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:20:06.713872 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:20:06.713907 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:20:06.713940 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:20:06.713971 kernel: fuse: init (API version 7.39) Aug 13 00:20:06.714000 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:20:06.714030 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:20:06.724231 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:20:06.724264 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:20:06.724295 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:20:06.724336 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:20:06.724366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:20:06.724397 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:20:06.724425 kernel: loop: module loaded Aug 13 00:20:06.724457 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:20:06.724494 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:20:06.724525 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:20:06.724554 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:20:06.724586 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:20:06.724617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:20:06.724647 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:20:06.724680 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:20:06.724712 kernel: ACPI: bus type drm_connector registered Aug 13 00:20:06.724787 systemd-journald[1606]: Collecting audit messages is disabled. Aug 13 00:20:06.724845 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Aug 13 00:20:06.724880 systemd-journald[1606]: Journal started Aug 13 00:20:06.724931 systemd-journald[1606]: Runtime Journal (/run/log/journal/ec28fc96351396a29d843edc37c508b9) is 8.0M, max 75.3M, 67.3M free. Aug 13 00:20:06.744342 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:20:06.744463 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:20:06.764459 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:20:06.775080 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:20:06.793546 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:20:06.800114 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:20:06.820069 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:20:06.847087 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:20:06.864210 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:20:06.871680 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:20:06.878386 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:20:06.878813 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:20:06.885698 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:20:06.894242 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:20:06.902378 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:20:06.937341 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:20:06.955218 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:20:06.976568 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:20:06.985396 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 00:20:06.990709 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:20:07.018393 udevadm[1665]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 00:20:07.028623 systemd-journald[1606]: Time spent on flushing to /var/log/journal/ec28fc96351396a29d843edc37c508b9 is 46.335ms for 902 entries. Aug 13 00:20:07.028623 systemd-journald[1606]: System Journal (/var/log/journal/ec28fc96351396a29d843edc37c508b9) is 8.0M, max 195.6M, 187.6M free. Aug 13 00:20:07.083853 systemd-journald[1606]: Received client request to flush runtime journal. Aug 13 00:20:07.034299 systemd-tmpfiles[1635]: ACLs are not supported, ignoring. Aug 13 00:20:07.034325 systemd-tmpfiles[1635]: ACLs are not supported, ignoring. Aug 13 00:20:07.043784 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:20:07.062363 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:20:07.087905 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Aug 13 00:20:07.148692 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:20:07.177355 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:20:07.217154 systemd-tmpfiles[1676]: ACLs are not supported, ignoring. Aug 13 00:20:07.217744 systemd-tmpfiles[1676]: ACLs are not supported, ignoring. Aug 13 00:20:07.227812 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:20:07.932584 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:20:07.945507 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:20:07.997939 systemd-udevd[1682]: Using default interface naming scheme 'v255'. Aug 13 00:20:08.048246 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:20:08.065346 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:20:08.100765 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:20:08.174652 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Aug 13 00:20:08.217425 (udev-worker)[1683]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:20:08.261332 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:20:08.436784 systemd-networkd[1689]: lo: Link UP Aug 13 00:20:08.437727 systemd-networkd[1689]: lo: Gained carrier Aug 13 00:20:08.441802 systemd-networkd[1689]: Enumeration completed Aug 13 00:20:08.442383 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:20:08.445904 systemd-networkd[1689]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:20:08.446073 systemd-networkd[1689]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:20:08.448571 systemd-networkd[1689]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:20:08.448782 systemd-networkd[1689]: eth0: Link UP Aug 13 00:20:08.449304 systemd-networkd[1689]: eth0: Gained carrier Aug 13 00:20:08.449446 systemd-networkd[1689]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:20:08.456501 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:20:08.472513 systemd-networkd[1689]: eth0: DHCPv4 address 172.31.31.162/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 13 00:20:08.517065 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1683) Aug 13 00:20:08.533344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:20:08.748969 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 00:20:08.764542 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 13 00:20:08.778783 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 00:20:08.783365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:20:08.822059 lvm[1809]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
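The DHCPv4 lease recorded above, address 172.31.31.162/20 with gateway 172.31.16.1, is easy to sanity-check: a /20 prefix spans 4096 addresses, and both the address and the gateway fall inside the same 172.31.16.0/20 block. A quick verification with the standard ipaddress module:

```python
import ipaddress

# The lease as logged by systemd-networkd.
iface = ipaddress.ip_interface("172.31.31.162/20")

print(iface.network)                # 172.31.16.0/20
print(iface.network.num_addresses)  # 4096
print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True
```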
Aug 13 00:20:08.861573 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 00:20:08.869889 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:20:08.879335 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 00:20:08.897877 lvm[1814]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:20:08.938136 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 00:20:08.942989 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:20:08.946759 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:20:08.946830 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:20:08.949714 systemd[1]: Reached target machines.target - Containers. Aug 13 00:20:08.954930 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 00:20:08.967454 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:20:08.978615 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:20:08.988170 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:20:08.998829 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:20:09.005638 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 00:20:09.028566 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:20:09.032828 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:20:09.056109 kernel: loop0: detected capacity change from 0 to 203944 Aug 13 00:20:09.070585 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:20:09.076668 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 00:20:09.084763 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:20:09.174373 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:20:09.210330 kernel: loop1: detected capacity change from 0 to 52536 Aug 13 00:20:09.274067 kernel: loop2: detected capacity change from 0 to 114432 Aug 13 00:20:09.376071 kernel: loop3: detected capacity change from 0 to 114328 Aug 13 00:20:09.467061 kernel: loop4: detected capacity change from 0 to 203944 Aug 13 00:20:09.494109 kernel: loop5: detected capacity change from 0 to 52536 Aug 13 00:20:09.514238 kernel: loop6: detected capacity change from 0 to 114432 Aug 13 00:20:09.529547 kernel: loop7: detected capacity change from 0 to 114328 Aug 13 00:20:09.543530 (sd-merge)[1835]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Aug 13 00:20:09.544558 (sd-merge)[1835]: Merged extensions into '/usr'. Aug 13 00:20:09.551295 systemd[1]: Reloading requested from client PID 1822 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:20:09.551319 systemd[1]: Reloading... Aug 13 00:20:09.700252 zram_generator::config[1864]: No configuration found. 
Aug 13 00:20:09.965172 systemd-networkd[1689]: eth0: Gained IPv6LL Aug 13 00:20:09.972785 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:20:10.127316 systemd[1]: Reloading finished in 575 ms. Aug 13 00:20:10.154900 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:20:10.159626 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:20:10.178307 systemd[1]: Starting ensure-sysext.service... Aug 13 00:20:10.184300 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:20:10.214967 ldconfig[1818]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:20:10.223209 systemd[1]: Reloading requested from client PID 1922 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:20:10.223241 systemd[1]: Reloading... Aug 13 00:20:10.243654 systemd-tmpfiles[1923]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:20:10.244622 systemd-tmpfiles[1923]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:20:10.247413 systemd-tmpfiles[1923]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:20:10.248443 systemd-tmpfiles[1923]: ACLs are not supported, ignoring. Aug 13 00:20:10.248718 systemd-tmpfiles[1923]: ACLs are not supported, ignoring. Aug 13 00:20:10.258120 systemd-tmpfiles[1923]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:20:10.258139 systemd-tmpfiles[1923]: Skipping /boot Aug 13 00:20:10.283264 systemd-tmpfiles[1923]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:20:10.283285 systemd-tmpfiles[1923]: Skipping /boot Aug 13 00:20:10.388182 zram_generator::config[1954]: No configuration found. Aug 13 00:20:10.626051 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:20:10.779642 systemd[1]: Reloading finished in 555 ms. Aug 13 00:20:10.806311 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:20:10.810551 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:20:10.839571 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:20:10.854359 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:20:10.869546 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:20:10.889398 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:20:10.914326 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:20:10.933707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:20:10.941639 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:20:10.962501 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Aug 13 00:20:10.979370 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:20:10.985206 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:20:11.005974 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:20:11.006432 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:20:11.023651 augenrules[2038]: No rules Aug 13 00:20:11.028483 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:20:11.028849 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:20:11.033188 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:20:11.046554 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:20:11.056910 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:20:11.061434 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:20:11.067890 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:20:11.109007 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:20:11.118269 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:20:11.138603 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:20:11.145671 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:20:11.167524 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:20:11.177775 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:20:11.178248 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:20:11.196737 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:20:11.204569 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:20:11.219486 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:20:11.219877 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:20:11.227484 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:20:11.227866 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:20:11.236109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:20:11.236552 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:20:11.242219 systemd-resolved[2024]: Positive Trust Anchors: Aug 13 00:20:11.242250 systemd-resolved[2024]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:20:11.242314 systemd-resolved[2024]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:20:11.251603 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:20:11.252066 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:20:11.269961 systemd[1]: Finished ensure-sysext.service. Aug 13 00:20:11.284961 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:20:11.285202 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:20:11.285275 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:20:11.288332 systemd-resolved[2024]: Defaulting to hostname 'linux'. Aug 13 00:20:11.293084 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:20:11.296205 systemd[1]: Reached target network.target - Network. Aug 13 00:20:11.298594 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:20:11.301433 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:20:11.308861 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:20:11.311898 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:20:11.314878 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:20:11.318101 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:20:11.321728 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:20:11.324509 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:20:11.327897 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:20:11.331226 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:20:11.331291 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:20:11.333459 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:20:11.337414 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:20:11.343758 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:20:11.348348 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:20:11.360213 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:20:11.363267 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:20:11.365749 systemd[1]: Reached target basic.target - Basic System. 
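The positive trust anchor that systemd-resolved prints (split across the line wrap above) is the DNS root zone's DS record: `. IN DS 20326 8 2 e06d44b8…`, i.e. key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256), followed by the SHA-256 digest of the root key-signing key. A small sketch that rejoins the record from the wrap and unpacks those fields:

```python
# The root trust anchor as logged by systemd-resolved, rejoined from the wrap.
record = (
    ". IN DS 20326 8 2 "
    "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
)

owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = record.split()
print(
    f"owner={owner} key_tag={key_tag} "
    f"algorithm={algorithm} (8 = RSA/SHA-256) "
    f"digest_type={digest_type} (2 = SHA-256), "
    f"digest is {len(digest) * 4} bits"
)
```

The long list after "Negative trust anchors:" is the complement: zones such as home.arpa and the RFC 1918 reverse trees for which resolved deliberately skips DNSSEC validation.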
Aug 13 00:20:11.368352 systemd[1]: System is tainted: cgroupsv1 Aug 13 00:20:11.369199 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:20:11.369419 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:20:11.371720 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:20:11.379324 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:20:11.388394 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:20:11.405434 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:20:11.417158 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:20:11.433541 jq[2081]: false Aug 13 00:20:11.425474 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:20:11.435679 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:20:11.445602 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:20:11.453790 systemd[1]: Started ntpd.service - Network Time Service. Aug 13 00:20:11.491533 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:20:11.508257 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:20:11.544560 systemd[1]: Starting setup-oem.service - Setup OEM... Aug 13 00:20:11.551367 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:20:11.578530 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:20:11.608956 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:20:11.619259 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:20:11.629811 extend-filesystems[2082]: Found loop4 Aug 13 00:20:11.653728 extend-filesystems[2082]: Found loop5 Aug 13 00:20:11.653728 extend-filesystems[2082]: Found loop6 Aug 13 00:20:11.653728 extend-filesystems[2082]: Found loop7 Aug 13 00:20:11.653728 extend-filesystems[2082]: Found nvme0n1 Aug 13 00:20:11.653728 extend-filesystems[2082]: Found nvme0n1p1 Aug 13 00:20:11.653728 extend-filesystems[2082]: Found nvme0n1p2 Aug 13 00:20:11.653728 extend-filesystems[2082]: Found nvme0n1p3 Aug 13 00:20:11.653728 extend-filesystems[2082]: Found usr Aug 13 00:20:11.653728 extend-filesystems[2082]: Found nvme0n1p4 Aug 13 00:20:11.653728 extend-filesystems[2082]: Found nvme0n1p6 Aug 13 00:20:11.653728 extend-filesystems[2082]: Found nvme0n1p7 Aug 13 00:20:11.653728 extend-filesystems[2082]: Found nvme0n1p9 Aug 13 00:20:11.653728 extend-filesystems[2082]: Checking size of /dev/nvme0n1p9 Aug 13 00:20:11.635577 dbus-daemon[2080]: [system] SELinux support is enabled Aug 13 00:20:11.645112 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:20:11.649286 dbus-daemon[2080]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1689 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 00:20:11.683517 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Aug 13 00:20:11.705105 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:20:11.743463 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:20:11.743989 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:20:11.753144 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:20:11.753715 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:20:11.762716 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:20:11.775593 ntpd[2086]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:33 UTC 2025 (1): Starting Aug 13 00:20:11.790661 jq[2116]: true Aug 13 00:20:11.790970 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:33 UTC 2025 (1): Starting Aug 13 00:20:11.790970 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 13 00:20:11.790970 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: ---------------------------------------------------- Aug 13 00:20:11.790970 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: ntp-4 is maintained by Network Time Foundation, Aug 13 00:20:11.790970 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 13 00:20:11.790970 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: corporation. Support and training for ntp-4 are Aug 13 00:20:11.790970 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: available at https://www.nwtime.org/support Aug 13 00:20:11.790970 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: ---------------------------------------------------- Aug 13 00:20:11.779313 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:20:11.775660 ntpd[2086]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 13 00:20:11.779824 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Aug 13 00:20:11.836236 extend-filesystems[2082]: Resized partition /dev/nvme0n1p9 Aug 13 00:20:11.775681 ntpd[2086]: ---------------------------------------------------- Aug 13 00:20:11.841389 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: proto: precision = 0.108 usec (-23) Aug 13 00:20:11.841389 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: basedate set to 2025-07-31 Aug 13 00:20:11.841389 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: gps base set to 2025-08-03 (week 2378) Aug 13 00:20:11.841389 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: Listen and drop on 0 v6wildcard [::]:123 Aug 13 00:20:11.841389 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 13 00:20:11.841389 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: Listen normally on 2 lo 127.0.0.1:123 Aug 13 00:20:11.841389 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: Listen normally on 3 eth0 172.31.31.162:123 Aug 13 00:20:11.841389 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: Listen normally on 4 lo [::1]:123 Aug 13 00:20:11.841389 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: Listen normally on 5 eth0 [fe80::4ac:afff:fed2:9593%2]:123 Aug 13 00:20:11.841389 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: Listening on routing socket on fd #22 for interface updates Aug 13 00:20:11.841843 extend-filesystems[2133]: resize2fs 1.47.1 (20-May-2024) Aug 13 00:20:11.910163 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Aug 13 00:20:11.775700 ntpd[2086]: ntp-4 is maintained by Network Time Foundation, Aug 13 00:20:11.910411 update_engine[2112]: I20250813 00:20:11.850649 2112 main.cc:92] Flatcar Update Engine starting Aug 13 00:20:11.910411 update_engine[2112]: I20250813 00:20:11.864442 2112 update_check_scheduler.cc:74] Next update check in 10m41s Aug 13 00:20:11.918285 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:20:11.918285 ntpd[2086]: 13 Aug 00:20:11 ntpd[2086]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:20:11.775718 ntpd[2086]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 13 00:20:11.775736 ntpd[2086]: corporation. 
Support and training for ntp-4 are Aug 13 00:20:11.775755 ntpd[2086]: available at https://www.nwtime.org/support Aug 13 00:20:11.775774 ntpd[2086]: ---------------------------------------------------- Aug 13 00:20:11.793904 ntpd[2086]: proto: precision = 0.108 usec (-23) Aug 13 00:20:11.797693 ntpd[2086]: basedate set to 2025-07-31 Aug 13 00:20:11.797732 ntpd[2086]: gps base set to 2025-08-03 (week 2378) Aug 13 00:20:11.807531 ntpd[2086]: Listen and drop on 0 v6wildcard [::]:123 Aug 13 00:20:11.807621 ntpd[2086]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 13 00:20:11.810668 ntpd[2086]: Listen normally on 2 lo 127.0.0.1:123 Aug 13 00:20:11.810746 ntpd[2086]: Listen normally on 3 eth0 172.31.31.162:123 Aug 13 00:20:11.814999 ntpd[2086]: Listen normally on 4 lo [::1]:123 Aug 13 00:20:11.815226 ntpd[2086]: Listen normally on 5 eth0 [fe80::4ac:afff:fed2:9593%2]:123 Aug 13 00:20:11.815301 ntpd[2086]: Listening on routing socket on fd #22 for interface updates Aug 13 00:20:11.856290 ntpd[2086]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:20:11.856342 ntpd[2086]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:20:11.958542 (ntainerd)[2138]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:20:11.989199 dbus-daemon[2080]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 00:20:11.983399 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:20:12.022306 tar[2125]: linux-arm64/helm Aug 13 00:20:11.983500 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:20:11.990711 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:20:11.990766 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:20:12.000150 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:20:12.024358 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 00:20:12.034465 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:20:12.078705 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Aug 13 00:20:12.048371 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:20:12.105409 jq[2137]: true Aug 13 00:20:12.134139 coreos-metadata[2079]: Aug 13 00:20:12.132 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 13 00:20:12.148372 extend-filesystems[2133]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Aug 13 00:20:12.148372 extend-filesystems[2133]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:20:12.148372 extend-filesystems[2133]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Aug 13 00:20:12.172730 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:20:12.199867 extend-filesystems[2082]: Resized filesystem in /dev/nvme0n1p9 Aug 13 00:20:12.174428 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
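The extend-filesystems run that just finished is an online grow of the root partition: resize2fs 1.47.1 takes /dev/nvme0n1p9 from 553,472 to 1,489,915 blocks, and the kernel's EXT4-fs lines report the same numbers. With the 4 KiB block size the log states, the before/after arithmetic works out as follows:

```python
BLOCK = 4096  # 4 KiB filesystem blocks, as stated in the log

old_blocks, new_blocks = 553_472, 1_489_915
to_gib = lambda blocks: blocks * BLOCK / 2**30

print(f"before: {to_gib(old_blocks):.2f} GiB")  # ~2.11 GiB
print(f"after:  {to_gib(new_blocks):.2f} GiB")  # ~5.68 GiB
```

That is the usual cloud-image pattern: the image ships with a small root filesystem, and first boot grows it to fill whatever block device the instance was given.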
Aug 13 00:20:12.214532 coreos-metadata[2079]: Aug 13 00:20:12.207 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Aug 13 00:20:12.236573 coreos-metadata[2079]: Aug 13 00:20:12.218 INFO Fetch successful Aug 13 00:20:12.236573 coreos-metadata[2079]: Aug 13 00:20:12.218 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Aug 13 00:20:12.236573 coreos-metadata[2079]: Aug 13 00:20:12.227 INFO Fetch successful Aug 13 00:20:12.236573 coreos-metadata[2079]: Aug 13 00:20:12.227 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Aug 13 00:20:12.236573 coreos-metadata[2079]: Aug 13 00:20:12.233 INFO Fetch successful Aug 13 00:20:12.236573 coreos-metadata[2079]: Aug 13 00:20:12.233 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Aug 13 00:20:12.220954 systemd[1]: Finished setup-oem.service - Setup OEM. Aug 13 00:20:12.240480 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Aug 13 00:20:12.249478 coreos-metadata[2079]: Aug 13 00:20:12.249 INFO Fetch successful Aug 13 00:20:12.249685 coreos-metadata[2079]: Aug 13 00:20:12.249 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Aug 13 00:20:12.259115 coreos-metadata[2079]: Aug 13 00:20:12.253 INFO Fetch failed with 404: resource not found Aug 13 00:20:12.264854 coreos-metadata[2079]: Aug 13 00:20:12.264 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Aug 13 00:20:12.266227 coreos-metadata[2079]: Aug 13 00:20:12.266 INFO Fetch successful Aug 13 00:20:12.266340 coreos-metadata[2079]: Aug 13 00:20:12.266 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Aug 13 00:20:12.276002 coreos-metadata[2079]: Aug 13 00:20:12.266 INFO Fetch successful Aug 13 00:20:12.276002 coreos-metadata[2079]: Aug 13 00:20:12.266 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Aug 13 00:20:12.276002 coreos-metadata[2079]: Aug 13 00:20:12.269 INFO Fetch successful Aug 13 00:20:12.276002 coreos-metadata[2079]: Aug 13 00:20:12.269 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Aug 13 00:20:12.276002 coreos-metadata[2079]: Aug 13 00:20:12.273 INFO Fetch successful Aug 13 00:20:12.276002 coreos-metadata[2079]: Aug 13 00:20:12.274 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Aug 13 00:20:12.281063 coreos-metadata[2079]: Aug 13 00:20:12.278 INFO Fetch successful Aug 13 00:20:12.402589 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:20:12.410765 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:20:12.486124 systemd-logind[2103]: Watching system buttons on /dev/input/event0 (Power Button) Aug 13 00:20:12.488550 bash[2202]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:20:12.486204 systemd-logind[2103]: Watching system buttons on /dev/input/event1 (Sleep Button) Aug 13 00:20:12.492307 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:20:12.495491 systemd-logind[2103]: New seat seat0. Aug 13 00:20:12.526677 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (2180) Aug 13 00:20:12.538931 systemd[1]: Starting sshkeys.service... 
Aug 13 00:20:12.551374 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 00:20:12.632909 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 13 00:20:12.643906 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 13 00:20:12.690063 amazon-ssm-agent[2171]: Initializing new seelog logger
Aug 13 00:20:12.690063 amazon-ssm-agent[2171]: New Seelog Logger Creation Complete
Aug 13 00:20:12.690063 amazon-ssm-agent[2171]: 2025/08/13 00:20:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 00:20:12.690063 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 00:20:12.693517 amazon-ssm-agent[2171]: 2025/08/13 00:20:12 processing appconfig overrides
Aug 13 00:20:12.700977 amazon-ssm-agent[2171]: 2025/08/13 00:20:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 00:20:12.700977 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 00:20:12.700977 amazon-ssm-agent[2171]: 2025-08-13 00:20:12 INFO Proxy environment variables:
Aug 13 00:20:12.703177 amazon-ssm-agent[2171]: 2025/08/13 00:20:12 processing appconfig overrides
Aug 13 00:20:12.705974 amazon-ssm-agent[2171]: 2025/08/13 00:20:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 00:20:12.708710 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 00:20:12.708995 amazon-ssm-agent[2171]: 2025/08/13 00:20:12 processing appconfig overrides
Aug 13 00:20:12.723618 amazon-ssm-agent[2171]: 2025/08/13 00:20:12 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 00:20:12.724127 amazon-ssm-agent[2171]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 13 00:20:12.728412 amazon-ssm-agent[2171]: 2025/08/13 00:20:12 processing appconfig overrides
Aug 13 00:20:12.807283 amazon-ssm-agent[2171]: 2025-08-13 00:20:12 INFO no_proxy:
Aug 13 00:20:12.836074 locksmithd[2151]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 13 00:20:12.906714 amazon-ssm-agent[2171]: 2025-08-13 00:20:12 INFO https_proxy:
Aug 13 00:20:12.996780 dbus-daemon[2080]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 13 00:20:12.997805 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Aug 13 00:20:12.998580 dbus-daemon[2080]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2150 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 13 00:20:13.006486 amazon-ssm-agent[2171]: 2025-08-13 00:20:12 INFO http_proxy:
Aug 13 00:20:13.026729 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 13 00:20:13.054804 polkitd[2270]: Started polkitd version 121
Aug 13 00:20:13.071301 polkitd[2270]: Loading rules from directory /etc/polkit-1/rules.d
Aug 13 00:20:13.071418 polkitd[2270]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 13 00:20:13.073831 polkitd[2270]: Finished loading, compiling and executing 2 rules
Aug 13 00:20:13.075546 dbus-daemon[2080]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 13 00:20:13.075849 systemd[1]: Started polkit.service - Authorization Manager.
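The coreos-metadata-sshkeys@core unit started above will, a little further down, fetch public-keys/0/openssh-key from the same metadata tree and rewrite /home/core/.ssh/authorized_keys for user core. A rough, self-contained sketch of that step under the same IMDSv2 assumptions (illustrative only; the agent's actual implementation differs):

    import os, pathlib, urllib.request

    IMDS = "http://169.254.169.254"
    tok = urllib.request.urlopen(
        urllib.request.Request(IMDS + "/latest/api/token", method="PUT",
                               headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"}),
        timeout=2).read().decode()
    key = urllib.request.urlopen(
        urllib.request.Request(IMDS + "/2021-01-03/meta-data/public-keys/0/openssh-key",
                               headers={"X-aws-ec2-metadata-token": tok}),
        timeout=2).read().decode()

    ssh_dir = pathlib.Path("/home/core/.ssh")
    ssh_dir.mkdir(mode=0o700, parents=True, exist_ok=True)
    auth = ssh_dir / "authorized_keys"
    auth.write_text(key.rstrip("\n") + "\n")
    os.chmod(auth, 0o600)  # sshd rejects key files with loose permissions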
Aug 13 00:20:13.076520 polkitd[2270]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 00:20:13.102608 systemd-hostnamed[2150]: Hostname set to (transient) Aug 13 00:20:13.102771 systemd-resolved[2024]: System hostname changed to 'ip-172-31-31-162'. Aug 13 00:20:13.104776 amazon-ssm-agent[2171]: 2025-08-13 00:20:12 INFO Checking if agent identity type OnPrem can be assumed Aug 13 00:20:13.128121 coreos-metadata[2231]: Aug 13 00:20:13.128 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 13 00:20:13.143114 coreos-metadata[2231]: Aug 13 00:20:13.143 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Aug 13 00:20:13.149694 coreos-metadata[2231]: Aug 13 00:20:13.149 INFO Fetch successful Aug 13 00:20:13.149694 coreos-metadata[2231]: Aug 13 00:20:13.149 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 13 00:20:13.155066 coreos-metadata[2231]: Aug 13 00:20:13.150 INFO Fetch successful Aug 13 00:20:13.159158 unknown[2231]: wrote ssh authorized keys file for user: core Aug 13 00:20:13.209504 amazon-ssm-agent[2171]: 2025-08-13 00:20:12 INFO Checking if agent identity type EC2 can be assumed Aug 13 00:20:13.229502 update-ssh-keys[2299]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:20:13.234674 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:20:13.254670 systemd[1]: Finished sshkeys.service. Aug 13 00:20:13.312574 amazon-ssm-agent[2171]: 2025-08-13 00:20:12 INFO Agent will take identity from EC2 Aug 13 00:20:13.413848 amazon-ssm-agent[2171]: 2025-08-13 00:20:12 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 00:20:13.420828 containerd[2138]: time="2025-08-13T00:20:13.420700405Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 00:20:13.516834 amazon-ssm-agent[2171]: 2025-08-13 00:20:12 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 00:20:13.616436 amazon-ssm-agent[2171]: 2025-08-13 00:20:12 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 00:20:13.625743 containerd[2138]: time="2025-08-13T00:20:13.625682354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:20:13.641562 containerd[2138]: time="2025-08-13T00:20:13.638008982Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:20:13.641562 containerd[2138]: time="2025-08-13T00:20:13.640130594Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:20:13.641562 containerd[2138]: time="2025-08-13T00:20:13.640170758Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:20:13.641562 containerd[2138]: time="2025-08-13T00:20:13.640470866Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 00:20:13.641562 containerd[2138]: time="2025-08-13T00:20:13.640504514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Aug 13 00:20:13.641562 containerd[2138]: time="2025-08-13T00:20:13.640624766Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:20:13.641562 containerd[2138]: time="2025-08-13T00:20:13.640652714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:20:13.644065 containerd[2138]: time="2025-08-13T00:20:13.641019590Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:20:13.644065 containerd[2138]: time="2025-08-13T00:20:13.643123802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:20:13.644065 containerd[2138]: time="2025-08-13T00:20:13.643171958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:20:13.644065 containerd[2138]: time="2025-08-13T00:20:13.643199474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:20:13.644065 containerd[2138]: time="2025-08-13T00:20:13.643410326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:20:13.644065 containerd[2138]: time="2025-08-13T00:20:13.643805906Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:20:13.646429 containerd[2138]: time="2025-08-13T00:20:13.646381022Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:20:13.648648 containerd[2138]: time="2025-08-13T00:20:13.648088886Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:20:13.648648 containerd[2138]: time="2025-08-13T00:20:13.648317858Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:20:13.648648 containerd[2138]: time="2025-08-13T00:20:13.648422318Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:20:13.658072 containerd[2138]: time="2025-08-13T00:20:13.657603542Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:20:13.658072 containerd[2138]: time="2025-08-13T00:20:13.657710114Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:20:13.658072 containerd[2138]: time="2025-08-13T00:20:13.657749018Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 00:20:13.658072 containerd[2138]: time="2025-08-13T00:20:13.657797582Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 00:20:13.658072 containerd[2138]: time="2025-08-13T00:20:13.657832862Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Aug 13 00:20:13.662316 containerd[2138]: time="2025-08-13T00:20:13.658443914Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:20:13.662316 containerd[2138]: time="2025-08-13T00:20:13.658998890Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:20:13.662316 containerd[2138]: time="2025-08-13T00:20:13.659270618Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 00:20:13.662316 containerd[2138]: time="2025-08-13T00:20:13.659303042Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 00:20:13.662316 containerd[2138]: time="2025-08-13T00:20:13.659335910Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 00:20:13.662316 containerd[2138]: time="2025-08-13T00:20:13.659368526Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:20:13.662316 containerd[2138]: time="2025-08-13T00:20:13.659398310Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:20:13.662316 containerd[2138]: time="2025-08-13T00:20:13.659426978Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:20:13.662316 containerd[2138]: time="2025-08-13T00:20:13.659459738Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:20:13.662316 containerd[2138]: time="2025-08-13T00:20:13.659491982Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:20:13.662316 containerd[2138]: time="2025-08-13T00:20:13.659521154Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:20:13.662316 containerd[2138]: time="2025-08-13T00:20:13.659550506Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:20:13.662316 containerd[2138]: time="2025-08-13T00:20:13.659578586Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:20:13.662316 containerd[2138]: time="2025-08-13T00:20:13.659623370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:20:13.663096 containerd[2138]: time="2025-08-13T00:20:13.659655170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:20:13.663096 containerd[2138]: time="2025-08-13T00:20:13.659684126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:20:13.663096 containerd[2138]: time="2025-08-13T00:20:13.659718026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:20:13.663096 containerd[2138]: time="2025-08-13T00:20:13.659755670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:20:13.663096 containerd[2138]: time="2025-08-13T00:20:13.659786798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Aug 13 00:20:13.663096 containerd[2138]: time="2025-08-13T00:20:13.659816618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:20:13.663096 containerd[2138]: time="2025-08-13T00:20:13.659847110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:20:13.663096 containerd[2138]: time="2025-08-13T00:20:13.659876630Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 00:20:13.663096 containerd[2138]: time="2025-08-13T00:20:13.659909882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 00:20:13.663096 containerd[2138]: time="2025-08-13T00:20:13.659937722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:20:13.663096 containerd[2138]: time="2025-08-13T00:20:13.659965286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 00:20:13.663096 containerd[2138]: time="2025-08-13T00:20:13.659996006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:20:13.670131 containerd[2138]: time="2025-08-13T00:20:13.668109074Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 00:20:13.670131 containerd[2138]: time="2025-08-13T00:20:13.668200706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 00:20:13.670131 containerd[2138]: time="2025-08-13T00:20:13.668237966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:20:13.670131 containerd[2138]: time="2025-08-13T00:20:13.668266862Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:20:13.670131 containerd[2138]: time="2025-08-13T00:20:13.668503286Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:20:13.670131 containerd[2138]: time="2025-08-13T00:20:13.668541410Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 00:20:13.670131 containerd[2138]: time="2025-08-13T00:20:13.668567522Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:20:13.670131 containerd[2138]: time="2025-08-13T00:20:13.668598698Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 00:20:13.670131 containerd[2138]: time="2025-08-13T00:20:13.668623046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:20:13.670131 containerd[2138]: time="2025-08-13T00:20:13.668651102Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 00:20:13.670131 containerd[2138]: time="2025-08-13T00:20:13.668674538Z" level=info msg="NRI interface is disabled by configuration." Aug 13 00:20:13.670131 containerd[2138]: time="2025-08-13T00:20:13.668699450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 00:20:13.670711 containerd[2138]: time="2025-08-13T00:20:13.669381182Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:20:13.670711 containerd[2138]: time="2025-08-13T00:20:13.669493982Z" level=info msg="Connect containerd service" Aug 13 00:20:13.670711 containerd[2138]: time="2025-08-13T00:20:13.669691094Z" level=info msg="using legacy CRI server" Aug 13 00:20:13.670711 containerd[2138]: time="2025-08-13T00:20:13.669711182Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:20:13.670711 containerd[2138]: time="2025-08-13T00:20:13.669897434Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:20:13.680090 containerd[2138]: time="2025-08-13T00:20:13.677937266Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 
00:20:13.687089 containerd[2138]: time="2025-08-13T00:20:13.686376758Z" level=info msg="Start subscribing containerd event" Aug 13 00:20:13.687089 containerd[2138]: time="2025-08-13T00:20:13.686474186Z" level=info msg="Start recovering state" Aug 13 00:20:13.687089 containerd[2138]: time="2025-08-13T00:20:13.686599682Z" level=info msg="Start event monitor" Aug 13 00:20:13.687089 containerd[2138]: time="2025-08-13T00:20:13.686662670Z" level=info msg="Start snapshots syncer" Aug 13 00:20:13.687089 containerd[2138]: time="2025-08-13T00:20:13.686687678Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:20:13.687089 containerd[2138]: time="2025-08-13T00:20:13.686706410Z" level=info msg="Start streaming server" Aug 13 00:20:13.687440 containerd[2138]: time="2025-08-13T00:20:13.686709170Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:20:13.687440 containerd[2138]: time="2025-08-13T00:20:13.687330482Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:20:13.687576 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:20:13.697240 containerd[2138]: time="2025-08-13T00:20:13.697167458Z" level=info msg="containerd successfully booted in 0.281859s" Aug 13 00:20:13.715281 amazon-ssm-agent[2171]: 2025-08-13 00:20:12 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Aug 13 00:20:13.815860 amazon-ssm-agent[2171]: 2025-08-13 00:20:12 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Aug 13 00:20:13.916059 amazon-ssm-agent[2171]: 2025-08-13 00:20:12 INFO [amazon-ssm-agent] Starting Core Agent Aug 13 00:20:14.016892 amazon-ssm-agent[2171]: 2025-08-13 00:20:12 INFO [amazon-ssm-agent] registrar detected. Attempting registration Aug 13 00:20:14.042044 amazon-ssm-agent[2171]: 2025-08-13 00:20:12 INFO [Registrar] Starting registrar module Aug 13 00:20:14.042044 amazon-ssm-agent[2171]: 2025-08-13 00:20:12 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Aug 13 00:20:14.042044 amazon-ssm-agent[2171]: 2025-08-13 00:20:14 INFO [EC2Identity] EC2 registration was successful. Aug 13 00:20:14.042044 amazon-ssm-agent[2171]: 2025-08-13 00:20:14 INFO [CredentialRefresher] credentialRefresher has started Aug 13 00:20:14.042044 amazon-ssm-agent[2171]: 2025-08-13 00:20:14 INFO [CredentialRefresher] Starting credentials refresher loop Aug 13 00:20:14.042044 amazon-ssm-agent[2171]: 2025-08-13 00:20:14 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 13 00:20:14.117278 amazon-ssm-agent[2171]: 2025-08-13 00:20:14 INFO [CredentialRefresher] Next credential rotation will be in 30.733275565133333 minutes Aug 13 00:20:14.307225 tar[2125]: linux-arm64/LICENSE Aug 13 00:20:14.308650 tar[2125]: linux-arm64/README.md Aug 13 00:20:14.342450 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:20:14.575858 sshd_keygen[2136]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:20:14.623427 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:20:14.637575 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:20:14.668856 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:20:14.672389 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:20:14.687310 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:20:14.711556 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
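containerd reports serving its gRPC API on /run/containerd/containerd.sock (plus a ttrpc socket) once state recovery finishes, and systemd then marks the unit started. Real clients such as kubelet, ctr, and crictl speak gRPC over that socket; as a bare liveness probe, simply connecting is enough to distinguish "daemon up" from "socket missing or refusing". A minimal sketch:

    import socket

    SOCK = "/run/containerd/containerd.sock"  # the address from the "serving..." records
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(SOCK)  # succeeds once containerd logs "successfully booted"
        print("containerd is accepting connections")
    except OSError as e:
        print(f"containerd not reachable: {e}")
    finally:
        s.close()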
Aug 13 00:20:14.728673 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 13 00:20:14.742606 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 13 00:20:14.748564 systemd[1]: Reached target getty.target - Login Prompts.
Aug 13 00:20:15.068578 amazon-ssm-agent[2171]: 2025-08-13 00:20:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Aug 13 00:20:15.170170 amazon-ssm-agent[2171]: 2025-08-13 00:20:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2364) started
Aug 13 00:20:15.225426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 00:20:15.235012 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 13 00:20:15.243466 systemd[1]: Startup finished in 10.201s (kernel) + 10.468s (userspace) = 20.670s.
Aug 13 00:20:15.249124 (kubelet)[2378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 00:20:15.277627 amazon-ssm-agent[2171]: 2025-08-13 00:20:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Aug 13 00:20:16.452855 kubelet[2378]: E0813 00:20:16.452785 2378 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 00:20:16.458229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 00:20:16.458649 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 00:20:18.424808 systemd-resolved[2024]: Clock change detected. Flushing caches.
Aug 13 00:20:20.790150 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 13 00:20:20.800438 systemd[1]: Started sshd@0-172.31.31.162:22-139.178.89.65:36096.service - OpenSSH per-connection server daemon (139.178.89.65:36096).
Aug 13 00:20:20.988816 sshd[2394]: Accepted publickey for core from 139.178.89.65 port 36096 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:20:20.993071 sshd[2394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:21.009425 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 13 00:20:21.017674 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 13 00:20:21.023170 systemd-logind[2103]: New session 1 of user core.
Aug 13 00:20:21.049583 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 13 00:20:21.063955 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 13 00:20:21.083400 (systemd)[2400]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 13 00:20:21.310129 systemd[2400]: Queued start job for default target default.target.
Aug 13 00:20:21.310821 systemd[2400]: Created slice app.slice - User Application Slice.
Aug 13 00:20:21.310877 systemd[2400]: Reached target paths.target - Paths.
Aug 13 00:20:21.310909 systemd[2400]: Reached target timers.target - Timers.
Aug 13 00:20:21.320193 systemd[2400]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 13 00:20:21.334936 systemd[2400]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 00:20:21.335092 systemd[2400]: Reached target sockets.target - Sockets.
Aug 13 00:20:21.335127 systemd[2400]: Reached target basic.target - Basic System.
Aug 13 00:20:21.335564 systemd[2400]: Reached target default.target - Main User Target.
Aug 13 00:20:21.335638 systemd[2400]: Startup finished in 240ms.
Aug 13 00:20:21.336035 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 13 00:20:21.342854 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 13 00:20:21.492450 systemd[1]: Started sshd@1-172.31.31.162:22-139.178.89.65:36110.service - OpenSSH per-connection server daemon (139.178.89.65:36110).
Aug 13 00:20:21.684576 sshd[2412]: Accepted publickey for core from 139.178.89.65 port 36110 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:20:21.687154 sshd[2412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:21.695177 systemd-logind[2103]: New session 2 of user core.
Aug 13 00:20:21.707609 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 13 00:20:21.840346 sshd[2412]: pam_unix(sshd:session): session closed for user core
Aug 13 00:20:21.846633 systemd[1]: sshd@1-172.31.31.162:22-139.178.89.65:36110.service: Deactivated successfully.
Aug 13 00:20:21.852628 systemd[1]: session-2.scope: Deactivated successfully.
Aug 13 00:20:21.853107 systemd-logind[2103]: Session 2 logged out. Waiting for processes to exit.
Aug 13 00:20:21.856225 systemd-logind[2103]: Removed session 2.
Aug 13 00:20:21.867475 systemd[1]: Started sshd@2-172.31.31.162:22-139.178.89.65:36112.service - OpenSSH per-connection server daemon (139.178.89.65:36112).
Aug 13 00:20:22.044901 sshd[2420]: Accepted publickey for core from 139.178.89.65 port 36112 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:20:22.047197 sshd[2420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:22.055941 systemd-logind[2103]: New session 3 of user core.
Aug 13 00:20:22.062505 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 13 00:20:22.182635 sshd[2420]: pam_unix(sshd:session): session closed for user core
Aug 13 00:20:22.189312 systemd-logind[2103]: Session 3 logged out. Waiting for processes to exit.
Aug 13 00:20:22.190728 systemd[1]: sshd@2-172.31.31.162:22-139.178.89.65:36112.service: Deactivated successfully.
Aug 13 00:20:22.195774 systemd[1]: session-3.scope: Deactivated successfully.
Aug 13 00:20:22.198410 systemd-logind[2103]: Removed session 3.
Aug 13 00:20:22.218435 systemd[1]: Started sshd@3-172.31.31.162:22-139.178.89.65:36128.service - OpenSSH per-connection server daemon (139.178.89.65:36128).
Aug 13 00:20:22.385888 sshd[2428]: Accepted publickey for core from 139.178.89.65 port 36128 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:20:22.388484 sshd[2428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:22.397155 systemd-logind[2103]: New session 4 of user core.
Aug 13 00:20:22.404496 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 13 00:20:22.535296 sshd[2428]: pam_unix(sshd:session): session closed for user core
Aug 13 00:20:22.541947 systemd[1]: sshd@3-172.31.31.162:22-139.178.89.65:36128.service: Deactivated successfully.
Aug 13 00:20:22.548617 systemd[1]: session-4.scope: Deactivated successfully.
Aug 13 00:20:22.550348 systemd-logind[2103]: Session 4 logged out. Waiting for processes to exit.
Aug 13 00:20:22.551985 systemd-logind[2103]: Removed session 4.
Aug 13 00:20:22.561516 systemd[1]: Started sshd@4-172.31.31.162:22-139.178.89.65:36144.service - OpenSSH per-connection server daemon (139.178.89.65:36144).
Aug 13 00:20:22.735201 sshd[2436]: Accepted publickey for core from 139.178.89.65 port 36144 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:20:22.738192 sshd[2436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:22.748234 systemd-logind[2103]: New session 5 of user core.
Aug 13 00:20:22.754624 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 13 00:20:22.896545 sudo[2440]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 13 00:20:22.897246 sudo[2440]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:20:22.911704 sudo[2440]: pam_unix(sudo:session): session closed for user root
Aug 13 00:20:22.936426 sshd[2436]: pam_unix(sshd:session): session closed for user core
Aug 13 00:20:22.944561 systemd[1]: sshd@4-172.31.31.162:22-139.178.89.65:36144.service: Deactivated successfully.
Aug 13 00:20:22.949560 systemd[1]: session-5.scope: Deactivated successfully.
Aug 13 00:20:22.951790 systemd-logind[2103]: Session 5 logged out. Waiting for processes to exit.
Aug 13 00:20:22.953746 systemd-logind[2103]: Removed session 5.
Aug 13 00:20:22.969521 systemd[1]: Started sshd@5-172.31.31.162:22-139.178.89.65:36156.service - OpenSSH per-connection server daemon (139.178.89.65:36156).
Aug 13 00:20:23.131414 sshd[2445]: Accepted publickey for core from 139.178.89.65 port 36156 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:20:23.134201 sshd[2445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:23.143363 systemd-logind[2103]: New session 6 of user core.
Aug 13 00:20:23.150450 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 13 00:20:23.255838 sudo[2450]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 13 00:20:23.257113 sudo[2450]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:20:23.263601 sudo[2450]: pam_unix(sudo:session): session closed for user root
Aug 13 00:20:23.273982 sudo[2449]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 13 00:20:23.274765 sudo[2449]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:20:23.299570 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 13 00:20:23.303741 auditctl[2453]: No rules
Aug 13 00:20:23.304591 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 00:20:23.305195 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 13 00:20:23.319760 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 13 00:20:23.363260 augenrules[2472]: No rules
Aug 13 00:20:23.366971 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 13 00:20:23.371589 sudo[2449]: pam_unix(sudo:session): session closed for user root
Aug 13 00:20:23.393963 sshd[2445]: pam_unix(sshd:session): session closed for user core
Aug 13 00:20:23.401147 systemd-logind[2103]: Session 6 logged out. Waiting for processes to exit.
Aug 13 00:20:23.402214 systemd[1]: sshd@5-172.31.31.162:22-139.178.89.65:36156.service: Deactivated successfully.
Aug 13 00:20:23.408079 systemd[1]: session-6.scope: Deactivated successfully.
Aug 13 00:20:23.409741 systemd-logind[2103]: Removed session 6.
Aug 13 00:20:23.427517 systemd[1]: Started sshd@6-172.31.31.162:22-139.178.89.65:36166.service - OpenSSH per-connection server daemon (139.178.89.65:36166).
Aug 13 00:20:23.602126 sshd[2481]: Accepted publickey for core from 139.178.89.65 port 36166 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:20:23.604654 sshd[2481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:23.612117 systemd-logind[2103]: New session 7 of user core.
Aug 13 00:20:23.625898 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 13 00:20:23.733435 sudo[2485]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 13 00:20:23.734811 sudo[2485]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Aug 13 00:20:24.346468 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 13 00:20:24.358696 (dockerd)[2501]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 13 00:20:24.874797 dockerd[2501]: time="2025-08-13T00:20:24.874690132Z" level=info msg="Starting up"
Aug 13 00:20:25.282099 dockerd[2501]: time="2025-08-13T00:20:25.281688410Z" level=info msg="Loading containers: start."
Aug 13 00:20:25.453073 kernel: Initializing XFRM netlink socket
Aug 13 00:20:25.488249 (udev-worker)[2524]: Network interface NamePolicy= disabled on kernel command line.
Aug 13 00:20:25.574641 systemd-networkd[1689]: docker0: Link UP
Aug 13 00:20:25.599630 dockerd[2501]: time="2025-08-13T00:20:25.599376507Z" level=info msg="Loading containers: done."
Aug 13 00:20:25.626464 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck641042847-merged.mount: Deactivated successfully.
Aug 13 00:20:25.632236 dockerd[2501]: time="2025-08-13T00:20:25.632163183Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 13 00:20:25.632394 dockerd[2501]: time="2025-08-13T00:20:25.632326275Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Aug 13 00:20:25.632569 dockerd[2501]: time="2025-08-13T00:20:25.632516559Z" level=info msg="Daemon has completed initialization"
Aug 13 00:20:25.698535 dockerd[2501]: time="2025-08-13T00:20:25.698256028Z" level=info msg="API listen on /run/docker.sock"
Aug 13 00:20:25.698697 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 13 00:20:26.110687 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 13 00:20:26.119903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 00:20:26.593215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
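Once dockerd logs "API listen on /run/docker.sock", the Engine API is reachable over that unix socket (root or docker-group access required). A stdlib-only sketch of the simplest call; the UnixHTTPConnection wrapper is my own illustration, and the "26.1.0" in the comment comes from the daemon record above:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket (sketch only)."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    info = json.loads(conn.getresponse().read())
    print(info["Version"])  # "26.1.0" on this host, per the dockerd record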
Aug 13 00:20:26.606648 (kubelet)[2653]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:20:26.698973 kubelet[2653]: E0813 00:20:26.698849 2653 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:20:26.707118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:20:26.707520 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:20:27.135587 containerd[2138]: time="2025-08-13T00:20:27.135528183Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:20:27.786400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1459349031.mount: Deactivated successfully. Aug 13 00:20:29.124870 containerd[2138]: time="2025-08-13T00:20:29.124801817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:29.126979 containerd[2138]: time="2025-08-13T00:20:29.126923861Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=25651813" Aug 13 00:20:29.127490 containerd[2138]: time="2025-08-13T00:20:29.127427585Z" level=info msg="ImageCreate event name:\"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:29.135201 containerd[2138]: time="2025-08-13T00:20:29.134606825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:29.143493 containerd[2138]: time="2025-08-13T00:20:29.143421761Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"25648613\" in 2.007826006s" Aug 13 00:20:29.144516 containerd[2138]: time="2025-08-13T00:20:29.144464789Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\"" Aug 13 00:20:29.147397 containerd[2138]: time="2025-08-13T00:20:29.147339293Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:20:30.509974 containerd[2138]: time="2025-08-13T00:20:30.509891240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:30.511780 containerd[2138]: time="2025-08-13T00:20:30.511718996Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=22460283" Aug 13 00:20:30.512678 containerd[2138]: time="2025-08-13T00:20:30.512576480Z" level=info msg="ImageCreate event name:\"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:30.521103 
containerd[2138]: time="2025-08-13T00:20:30.521026232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:30.524130 containerd[2138]: time="2025-08-13T00:20:30.524061308Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"23996073\" in 1.376479639s" Aug 13 00:20:30.524130 containerd[2138]: time="2025-08-13T00:20:30.524124092Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\"" Aug 13 00:20:30.525110 containerd[2138]: time="2025-08-13T00:20:30.525073400Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:20:31.649907 containerd[2138]: time="2025-08-13T00:20:31.649841217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:31.651981 containerd[2138]: time="2025-08-13T00:20:31.651908433Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=17125089" Aug 13 00:20:31.652637 containerd[2138]: time="2025-08-13T00:20:31.652557669Z" level=info msg="ImageCreate event name:\"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:31.658393 containerd[2138]: time="2025-08-13T00:20:31.658305513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:31.661689 containerd[2138]: time="2025-08-13T00:20:31.660603249Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"18660897\" in 1.135350389s" Aug 13 00:20:31.661689 containerd[2138]: time="2025-08-13T00:20:31.660665697Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\"" Aug 13 00:20:31.661689 containerd[2138]: time="2025-08-13T00:20:31.661293837Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:20:32.964886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1738956005.mount: Deactivated successfully. 
Aug 13 00:20:33.513277 containerd[2138]: time="2025-08-13T00:20:33.513206614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:33.514690 containerd[2138]: time="2025-08-13T00:20:33.514620274Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=26915993" Aug 13 00:20:33.516122 containerd[2138]: time="2025-08-13T00:20:33.516042706Z" level=info msg="ImageCreate event name:\"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:33.519630 containerd[2138]: time="2025-08-13T00:20:33.519576730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:33.521381 containerd[2138]: time="2025-08-13T00:20:33.521191402Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"26915012\" in 1.859844585s" Aug 13 00:20:33.521381 containerd[2138]: time="2025-08-13T00:20:33.521248066Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\"" Aug 13 00:20:33.522537 containerd[2138]: time="2025-08-13T00:20:33.522489862Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:20:34.022445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3904531126.mount: Deactivated successfully. 
Aug 13 00:20:35.268087 containerd[2138]: time="2025-08-13T00:20:35.267465143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:35.270035 containerd[2138]: time="2025-08-13T00:20:35.269944763Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Aug 13 00:20:35.272840 containerd[2138]: time="2025-08-13T00:20:35.272743091Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:35.279184 containerd[2138]: time="2025-08-13T00:20:35.279102983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:35.281658 containerd[2138]: time="2025-08-13T00:20:35.281600171Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.759049085s" Aug 13 00:20:35.282012 containerd[2138]: time="2025-08-13T00:20:35.281810615Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 13 00:20:35.282759 containerd[2138]: time="2025-08-13T00:20:35.282467495Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:20:35.817388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160551829.mount: Deactivated successfully. 
Aug 13 00:20:35.830536 containerd[2138]: time="2025-08-13T00:20:35.830459114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:35.833814 containerd[2138]: time="2025-08-13T00:20:35.833747030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Aug 13 00:20:35.836171 containerd[2138]: time="2025-08-13T00:20:35.836095514Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:35.841137 containerd[2138]: time="2025-08-13T00:20:35.841023338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:35.842915 containerd[2138]: time="2025-08-13T00:20:35.842721686Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 560.197359ms" Aug 13 00:20:35.842915 containerd[2138]: time="2025-08-13T00:20:35.842781254Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 00:20:35.843919 containerd[2138]: time="2025-08-13T00:20:35.843870542Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:20:36.432336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3783754382.mount: Deactivated successfully. Aug 13 00:20:36.860740 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:20:36.873403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:20:38.925328 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:20:38.939179 (kubelet)[2823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:20:39.021619 kubelet[2823]: E0813 00:20:39.021488 2823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:20:39.025697 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:20:39.027881 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
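Every kubelet start so far (pids 2378, 2653, 2823) has died the same way: /var/lib/kubelet/config.yaml does not exist because node bootstrap has not yet written it. On a real node kubeadm, or whatever provisions the host, generates that file; purely as an illustration of the shape the loader expects, and emphatically not the values this host ends up with:

    import pathlib

    # Hypothetical minimal KubeletConfiguration; kubeadm normally generates this file.
    MINIMAL_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    authentication:
      anonymous:
        enabled: false
    """

    cfg = pathlib.Path("/var/lib/kubelet/config.yaml")
    cfg.parent.mkdir(parents=True, exist_ok=True)
    cfg.write_text(MINIMAL_CONFIG)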
Aug 13 00:20:40.259693 containerd[2138]: time="2025-08-13T00:20:40.259608388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:40.262042 containerd[2138]: time="2025-08-13T00:20:40.261963952Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" Aug 13 00:20:40.264105 containerd[2138]: time="2025-08-13T00:20:40.264049204Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:40.270971 containerd[2138]: time="2025-08-13T00:20:40.270890536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:40.273590 containerd[2138]: time="2025-08-13T00:20:40.273370288Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.429448542s" Aug 13 00:20:40.273590 containerd[2138]: time="2025-08-13T00:20:40.273428872Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Aug 13 00:20:42.786432 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 00:20:48.740843 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:20:48.756462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:20:48.821697 systemd[1]: Reloading requested from client PID 2893 ('systemctl') (unit session-7.scope)... Aug 13 00:20:48.821909 systemd[1]: Reloading... Aug 13 00:20:49.044045 zram_generator::config[2936]: No configuration found. Aug 13 00:20:49.293250 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:20:49.461510 systemd[1]: Reloading finished in 638 ms. Aug 13 00:20:49.560315 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:20:49.560920 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:20:49.561664 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:20:49.569748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:20:49.881325 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:20:49.892947 (kubelet)[3008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:20:49.971396 kubelet[3008]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:20:49.971396 kubelet[3008]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
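Each "Pulled image" record in this stretch logs a byte count alongside a wall-clock duration, so effective pull throughput falls out directly; for the etcd image above:

    size_bytes = 66_535_646   # etcd:3.5.15-0, from the "Pulled image" record
    duration_s = 4.429448542  # from the same record
    print(f"{size_bytes / duration_s / 2**20:.1f} MiB/s")  # prints about 14.3 MiB/s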
Aug 13 00:20:49.971396 kubelet[3008]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:20:49.971968 kubelet[3008]: I0813 00:20:49.971495 3008 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:20:52.424587 kubelet[3008]: I0813 00:20:52.424518 3008 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:20:52.424587 kubelet[3008]: I0813 00:20:52.424570 3008 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:20:52.425474 kubelet[3008]: I0813 00:20:52.425102 3008 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:20:52.465463 kubelet[3008]: E0813 00:20:52.465372 3008 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.31.162:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.31.162:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:20:52.469058 kubelet[3008]: I0813 00:20:52.468835 3008 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:20:52.485599 kubelet[3008]: E0813 00:20:52.485400 3008 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:20:52.485599 kubelet[3008]: I0813 00:20:52.485464 3008 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:20:52.492264 kubelet[3008]: I0813 00:20:52.492221 3008 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:20:52.493119 kubelet[3008]: I0813 00:20:52.493086 3008 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:20:52.493391 kubelet[3008]: I0813 00:20:52.493338 3008 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:20:52.493687 kubelet[3008]: I0813 00:20:52.493394 3008 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-162","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:20:52.494006 kubelet[3008]: I0813 00:20:52.493967 3008 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:20:52.494100 kubelet[3008]: I0813 00:20:52.494026 3008 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:20:52.494379 kubelet[3008]: I0813 00:20:52.494350 3008 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:20:52.501224 kubelet[3008]: I0813 00:20:52.501161 3008 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:20:52.501224 kubelet[3008]: I0813 00:20:52.501218 3008 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:20:52.501444 kubelet[3008]: I0813 00:20:52.501259 3008 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:20:52.501444 kubelet[3008]: I0813 00:20:52.501420 3008 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:20:52.506473 kubelet[3008]: W0813 00:20:52.506378 3008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-162&limit=500&resourceVersion=0": dial tcp 172.31.31.162:6443: connect: connection refused Aug 13 00:20:52.506748 kubelet[3008]: E0813 00:20:52.506712 3008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.31.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-162&limit=500&resourceVersion=0\": dial tcp 172.31.31.162:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:20:52.507701 kubelet[3008]: W0813 00:20:52.507655 3008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.162:6443: connect: connection refused Aug 13 00:20:52.508327 kubelet[3008]: E0813 00:20:52.508292 3008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.162:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:20:52.508889 kubelet[3008]: I0813 00:20:52.508860 3008 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:20:52.510363 kubelet[3008]: I0813 00:20:52.510322 3008 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:20:52.510859 kubelet[3008]: W0813 00:20:52.510837 3008 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:20:52.513635 kubelet[3008]: I0813 00:20:52.513602 3008 server.go:1274] "Started kubelet" Aug 13 00:20:52.518796 kubelet[3008]: I0813 00:20:52.518717 3008 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:20:52.522266 kubelet[3008]: I0813 00:20:52.522228 3008 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:20:52.526044 kubelet[3008]: I0813 00:20:52.519879 3008 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:20:52.526044 kubelet[3008]: E0813 00:20:52.523329 3008 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.162:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.162:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-162.185b2ba185174e65 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-162,UID:ip-172-31-31-162,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-162,},FirstTimestamp:2025-08-13 00:20:52.513566309 +0000 UTC m=+2.614244366,LastTimestamp:2025-08-13 00:20:52.513566309 +0000 UTC m=+2.614244366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-162,}" Aug 13 00:20:52.526531 kubelet[3008]: I0813 00:20:52.526503 3008 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:20:52.527505 kubelet[3008]: I0813 00:20:52.527344 3008 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:20:52.529038 kubelet[3008]: I0813 00:20:52.528758 3008 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:20:52.529351 kubelet[3008]: I0813 00:20:52.529327 3008 volume_manager.go:289] "Starting Kubelet Volume 
Manager" Aug 13 00:20:52.529885 kubelet[3008]: E0813 00:20:52.529857 3008 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-162\" not found" Aug 13 00:20:52.531536 kubelet[3008]: I0813 00:20:52.530941 3008 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:20:52.532292 kubelet[3008]: W0813 00:20:52.532213 3008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.162:6443: connect: connection refused Aug 13 00:20:52.532418 kubelet[3008]: E0813 00:20:52.532307 3008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.162:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:20:52.532498 kubelet[3008]: E0813 00:20:52.532425 3008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-162?timeout=10s\": dial tcp 172.31.31.162:6443: connect: connection refused" interval="200ms" Aug 13 00:20:52.532559 kubelet[3008]: I0813 00:20:52.532516 3008 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:20:52.535407 kubelet[3008]: I0813 00:20:52.535349 3008 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:20:52.535535 kubelet[3008]: I0813 00:20:52.535509 3008 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:20:52.540031 kubelet[3008]: E0813 00:20:52.537774 3008 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:20:52.542091 kubelet[3008]: I0813 00:20:52.542043 3008 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:20:52.579918 kubelet[3008]: I0813 00:20:52.579836 3008 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:20:52.588105 kubelet[3008]: I0813 00:20:52.587732 3008 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:20:52.588105 kubelet[3008]: I0813 00:20:52.587793 3008 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:20:52.588105 kubelet[3008]: I0813 00:20:52.587848 3008 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:20:52.588105 kubelet[3008]: E0813 00:20:52.587919 3008 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:20:52.598637 kubelet[3008]: W0813 00:20:52.598551 3008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.162:6443: connect: connection refused Aug 13 00:20:52.598761 kubelet[3008]: E0813 00:20:52.598652 3008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.162:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:20:52.603437 kubelet[3008]: I0813 00:20:52.603389 3008 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:20:52.603437 kubelet[3008]: I0813 00:20:52.603427 3008 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:20:52.603647 kubelet[3008]: I0813 00:20:52.603459 3008 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:20:52.607654 kubelet[3008]: I0813 00:20:52.607601 3008 policy_none.go:49] "None policy: Start" Aug 13 00:20:52.608962 kubelet[3008]: I0813 00:20:52.608922 3008 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:20:52.609097 kubelet[3008]: I0813 00:20:52.608977 3008 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:20:52.626934 kubelet[3008]: I0813 00:20:52.625728 3008 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:20:52.626934 kubelet[3008]: I0813 00:20:52.626062 3008 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:20:52.626934 kubelet[3008]: I0813 00:20:52.626082 3008 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:20:52.628853 kubelet[3008]: I0813 00:20:52.628823 3008 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:20:52.631488 kubelet[3008]: E0813 00:20:52.631454 3008 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-162\" not found" Aug 13 00:20:52.728940 kubelet[3008]: I0813 00:20:52.728771 3008 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-162" Aug 13 00:20:52.729980 kubelet[3008]: E0813 00:20:52.729911 3008 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.162:6443/api/v1/nodes\": dial tcp 172.31.31.162:6443: connect: connection refused" node="ip-172-31-31-162" Aug 13 00:20:52.733638 kubelet[3008]: I0813 00:20:52.733213 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75d82fd0422ba7761f745c0c2be67a17-ca-certs\") pod \"kube-apiserver-ip-172-31-31-162\" (UID: \"75d82fd0422ba7761f745c0c2be67a17\") " pod="kube-system/kube-apiserver-ip-172-31-31-162" 
Aug 13 00:20:52.733638 kubelet[3008]: I0813 00:20:52.733281 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75d82fd0422ba7761f745c0c2be67a17-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-162\" (UID: \"75d82fd0422ba7761f745c0c2be67a17\") " pod="kube-system/kube-apiserver-ip-172-31-31-162" Aug 13 00:20:52.733638 kubelet[3008]: I0813 00:20:52.733321 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75d82fd0422ba7761f745c0c2be67a17-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-162\" (UID: \"75d82fd0422ba7761f745c0c2be67a17\") " pod="kube-system/kube-apiserver-ip-172-31-31-162" Aug 13 00:20:52.733638 kubelet[3008]: I0813 00:20:52.733359 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7f93b6629f23baac192cfd4cd572a46-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-162\" (UID: \"c7f93b6629f23baac192cfd4cd572a46\") " pod="kube-system/kube-controller-manager-ip-172-31-31-162" Aug 13 00:20:52.733638 kubelet[3008]: I0813 00:20:52.733397 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7f93b6629f23baac192cfd4cd572a46-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-162\" (UID: \"c7f93b6629f23baac192cfd4cd572a46\") " pod="kube-system/kube-controller-manager-ip-172-31-31-162" Aug 13 00:20:52.733979 kubelet[3008]: I0813 00:20:52.733433 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c7f93b6629f23baac192cfd4cd572a46-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-162\" (UID: \"c7f93b6629f23baac192cfd4cd572a46\") " pod="kube-system/kube-controller-manager-ip-172-31-31-162" Aug 13 00:20:52.733979 kubelet[3008]: I0813 00:20:52.733467 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7f93b6629f23baac192cfd4cd572a46-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-162\" (UID: \"c7f93b6629f23baac192cfd4cd572a46\") " pod="kube-system/kube-controller-manager-ip-172-31-31-162" Aug 13 00:20:52.733979 kubelet[3008]: I0813 00:20:52.733517 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7f93b6629f23baac192cfd4cd572a46-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-162\" (UID: \"c7f93b6629f23baac192cfd4cd572a46\") " pod="kube-system/kube-controller-manager-ip-172-31-31-162" Aug 13 00:20:52.733979 kubelet[3008]: I0813 00:20:52.733558 3008 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b6ba1c110abad2ab96e1e2e7e87dfc9-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-162\" (UID: \"5b6ba1c110abad2ab96e1e2e7e87dfc9\") " pod="kube-system/kube-scheduler-ip-172-31-31-162" Aug 13 00:20:52.733979 kubelet[3008]: E0813 00:20:52.733513 3008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.31.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-162?timeout=10s\": dial tcp 172.31.31.162:6443: connect: connection refused" interval="400ms" Aug 13 00:20:52.932181 kubelet[3008]: I0813 00:20:52.932123 3008 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-162" Aug 13 00:20:52.932742 kubelet[3008]: E0813 00:20:52.932651 3008 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.162:6443/api/v1/nodes\": dial tcp 172.31.31.162:6443: connect: connection refused" node="ip-172-31-31-162" Aug 13 00:20:53.001737 containerd[2138]: time="2025-08-13T00:20:53.001174647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-162,Uid:75d82fd0422ba7761f745c0c2be67a17,Namespace:kube-system,Attempt:0,}" Aug 13 00:20:53.005706 containerd[2138]: time="2025-08-13T00:20:53.005634063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-162,Uid:c7f93b6629f23baac192cfd4cd572a46,Namespace:kube-system,Attempt:0,}" Aug 13 00:20:53.011687 containerd[2138]: time="2025-08-13T00:20:53.011513859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-162,Uid:5b6ba1c110abad2ab96e1e2e7e87dfc9,Namespace:kube-system,Attempt:0,}" Aug 13 00:20:53.134340 kubelet[3008]: E0813 00:20:53.134265 3008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-162?timeout=10s\": dial tcp 172.31.31.162:6443: connect: connection refused" interval="800ms" Aug 13 00:20:53.336865 kubelet[3008]: I0813 00:20:53.336611 3008 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-162" Aug 13 00:20:53.337297 kubelet[3008]: E0813 00:20:53.337191 3008 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.162:6443/api/v1/nodes\": dial tcp 172.31.31.162:6443: connect: connection refused" node="ip-172-31-31-162" Aug 13 00:20:53.362212 kubelet[3008]: W0813 00:20:53.362054 3008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.31.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.162:6443: connect: connection refused Aug 13 00:20:53.362212 kubelet[3008]: E0813 00:20:53.362155 3008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.31.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.31.162:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:20:53.511549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4208848139.mount: Deactivated successfully. 
Aug 13 00:20:53.522168 containerd[2138]: time="2025-08-13T00:20:53.522107346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:20:53.526439 containerd[2138]: time="2025-08-13T00:20:53.526370898Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:20:53.528664 containerd[2138]: time="2025-08-13T00:20:53.528258030Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:20:53.531310 containerd[2138]: time="2025-08-13T00:20:53.531041166Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:20:53.533450 containerd[2138]: time="2025-08-13T00:20:53.533383086Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:20:53.535290 containerd[2138]: time="2025-08-13T00:20:53.535221378Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Aug 13 00:20:53.536872 containerd[2138]: time="2025-08-13T00:20:53.536779014Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:20:53.541982 containerd[2138]: time="2025-08-13T00:20:53.541892898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:20:53.545010 containerd[2138]: time="2025-08-13T00:20:53.544568190Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 538.818315ms" Aug 13 00:20:53.546902 containerd[2138]: time="2025-08-13T00:20:53.546802326Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.502531ms" Aug 13 00:20:53.557676 containerd[2138]: time="2025-08-13T00:20:53.557169006Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.308275ms" Aug 13 00:20:53.774167 containerd[2138]: time="2025-08-13T00:20:53.773526211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:53.775477 containerd[2138]: time="2025-08-13T00:20:53.774518863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:53.776409 containerd[2138]: time="2025-08-13T00:20:53.776067439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:53.777642 containerd[2138]: time="2025-08-13T00:20:53.777217387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:53.777642 containerd[2138]: time="2025-08-13T00:20:53.777311767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:53.777642 containerd[2138]: time="2025-08-13T00:20:53.777349135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:53.778258 containerd[2138]: time="2025-08-13T00:20:53.778130623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:53.778541 containerd[2138]: time="2025-08-13T00:20:53.778403167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:53.778677 containerd[2138]: time="2025-08-13T00:20:53.778507903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:53.780926 containerd[2138]: time="2025-08-13T00:20:53.780769951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:53.781325 containerd[2138]: time="2025-08-13T00:20:53.780796591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:53.781489 containerd[2138]: time="2025-08-13T00:20:53.781320091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:53.844861 kubelet[3008]: W0813 00:20:53.843471 3008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.31.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-162&limit=500&resourceVersion=0": dial tcp 172.31.31.162:6443: connect: connection refused Aug 13 00:20:53.844861 kubelet[3008]: E0813 00:20:53.843581 3008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.31.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-162&limit=500&resourceVersion=0\": dial tcp 172.31.31.162:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:20:53.936971 kubelet[3008]: E0813 00:20:53.936131 3008 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-162?timeout=10s\": dial tcp 172.31.31.162:6443: connect: connection refused" interval="1.6s" Aug 13 00:20:53.940128 containerd[2138]: time="2025-08-13T00:20:53.940065308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-162,Uid:5b6ba1c110abad2ab96e1e2e7e87dfc9,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a86fb1f75482f5ac443d1a17f00f9d60419d8ffcbaec76cf7f2e4cd3cfe25a6\"" Aug 13 00:20:53.946080 containerd[2138]: time="2025-08-13T00:20:53.946025384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-162,Uid:c7f93b6629f23baac192cfd4cd572a46,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a300e294c0ef11346fa160e9936bbb2570c5bf0df1f8347302bfbb043cb60ca\"" Aug 13 00:20:53.955404 containerd[2138]: time="2025-08-13T00:20:53.955227404Z" level=info msg="CreateContainer within sandbox \"6a86fb1f75482f5ac443d1a17f00f9d60419d8ffcbaec76cf7f2e4cd3cfe25a6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:20:53.956251 containerd[2138]: time="2025-08-13T00:20:53.956119748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-162,Uid:75d82fd0422ba7761f745c0c2be67a17,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab45593137cb26a6a1d600f92ad8df29e0eb4d8b920ee717654e8697e51ca5af\"" Aug 13 00:20:53.957828 containerd[2138]: time="2025-08-13T00:20:53.957704984Z" level=info msg="CreateContainer within sandbox \"2a300e294c0ef11346fa160e9936bbb2570c5bf0df1f8347302bfbb043cb60ca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:20:53.963679 containerd[2138]: time="2025-08-13T00:20:53.963621632Z" level=info msg="CreateContainer within sandbox \"ab45593137cb26a6a1d600f92ad8df29e0eb4d8b920ee717654e8697e51ca5af\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:20:54.002236 containerd[2138]: time="2025-08-13T00:20:54.002161444Z" level=info msg="CreateContainer within sandbox \"2a300e294c0ef11346fa160e9936bbb2570c5bf0df1f8347302bfbb043cb60ca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"71963f28167d23718ba3cb6e204bc6407da0589a99f769aec4fb8559cb9a2ec1\"" Aug 13 00:20:54.003407 containerd[2138]: time="2025-08-13T00:20:54.003242836Z" level=info msg="StartContainer for \"71963f28167d23718ba3cb6e204bc6407da0589a99f769aec4fb8559cb9a2ec1\"" Aug 13 00:20:54.005140 containerd[2138]: time="2025-08-13T00:20:54.004608868Z" level=info 
msg="CreateContainer within sandbox \"6a86fb1f75482f5ac443d1a17f00f9d60419d8ffcbaec76cf7f2e4cd3cfe25a6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36feca051c09a0ae38386ce1d8699e6fe99286d5f0e875ea9a80fd5bd8800c15\"" Aug 13 00:20:54.006791 containerd[2138]: time="2025-08-13T00:20:54.006732436Z" level=info msg="StartContainer for \"36feca051c09a0ae38386ce1d8699e6fe99286d5f0e875ea9a80fd5bd8800c15\"" Aug 13 00:20:54.012156 containerd[2138]: time="2025-08-13T00:20:54.011798464Z" level=info msg="CreateContainer within sandbox \"ab45593137cb26a6a1d600f92ad8df29e0eb4d8b920ee717654e8697e51ca5af\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cb432dc4258d6c9734689022b13805ccc6a0f754dc40a75a4b35af6a672d9f33\"" Aug 13 00:20:54.013047 containerd[2138]: time="2025-08-13T00:20:54.012718972Z" level=info msg="StartContainer for \"cb432dc4258d6c9734689022b13805ccc6a0f754dc40a75a4b35af6a672d9f33\"" Aug 13 00:20:54.017172 kubelet[3008]: W0813 00:20:54.017069 3008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.31.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.31.162:6443: connect: connection refused Aug 13 00:20:54.017368 kubelet[3008]: E0813 00:20:54.017174 3008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.31.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.31.162:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:20:54.091517 kubelet[3008]: W0813 00:20:54.089959 3008 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.31.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.162:6443: connect: connection refused Aug 13 00:20:54.091517 kubelet[3008]: E0813 00:20:54.091298 3008 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.31.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.31.162:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:20:54.144233 kubelet[3008]: I0813 00:20:54.144185 3008 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-162" Aug 13 00:20:54.147183 kubelet[3008]: E0813 00:20:54.147104 3008 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.31.162:6443/api/v1/nodes\": dial tcp 172.31.31.162:6443: connect: connection refused" node="ip-172-31-31-162" Aug 13 00:20:54.221293 containerd[2138]: time="2025-08-13T00:20:54.221107781Z" level=info msg="StartContainer for \"71963f28167d23718ba3cb6e204bc6407da0589a99f769aec4fb8559cb9a2ec1\" returns successfully" Aug 13 00:20:54.221911 containerd[2138]: time="2025-08-13T00:20:54.221483225Z" level=info msg="StartContainer for \"cb432dc4258d6c9734689022b13805ccc6a0f754dc40a75a4b35af6a672d9f33\" returns successfully" Aug 13 00:20:54.237031 containerd[2138]: time="2025-08-13T00:20:54.236042585Z" level=info msg="StartContainer for \"36feca051c09a0ae38386ce1d8699e6fe99286d5f0e875ea9a80fd5bd8800c15\" returns successfully" Aug 13 00:20:55.750967 kubelet[3008]: I0813 00:20:55.750831 3008 kubelet_node_status.go:72] "Attempting to register 
node" node="ip-172-31-31-162" Aug 13 00:20:57.238049 update_engine[2112]: I20250813 00:20:57.235033 2112 update_attempter.cc:509] Updating boot flags... Aug 13 00:20:57.467020 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3295) Aug 13 00:20:58.162160 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3294) Aug 13 00:20:58.794027 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3294) Aug 13 00:20:59.517024 kubelet[3008]: I0813 00:20:59.516941 3008 apiserver.go:52] "Watching apiserver" Aug 13 00:20:59.591709 kubelet[3008]: E0813 00:20:59.591641 3008 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-162\" not found" node="ip-172-31-31-162" Aug 13 00:20:59.632243 kubelet[3008]: I0813 00:20:59.631589 3008 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-162" Aug 13 00:20:59.632740 kubelet[3008]: I0813 00:20:59.632524 3008 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:20:59.715026 kubelet[3008]: E0813 00:20:59.712961 3008 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-162.185b2ba185174e65 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-162,UID:ip-172-31-31-162,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-162,},FirstTimestamp:2025-08-13 00:20:52.513566309 +0000 UTC m=+2.614244366,LastTimestamp:2025-08-13 00:20:52.513566309 +0000 UTC m=+2.614244366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-162,}" Aug 13 00:21:01.878866 systemd[1]: Reloading requested from client PID 3551 ('systemctl') (unit session-7.scope)... Aug 13 00:21:01.878900 systemd[1]: Reloading... Aug 13 00:21:02.047109 zram_generator::config[3594]: No configuration found. Aug 13 00:21:02.286770 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:21:02.475464 systemd[1]: Reloading finished in 595 ms. Aug 13 00:21:02.537854 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:21:02.549886 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:21:02.551411 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:21:02.567629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:21:02.960363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:21:02.984666 (kubelet)[3661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:21:03.094782 kubelet[3661]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 00:21:03.094782 kubelet[3661]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:21:03.094782 kubelet[3661]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:21:03.095768 kubelet[3661]: I0813 00:21:03.094875 3661 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:21:03.109538 kubelet[3661]: I0813 00:21:03.109484 3661 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:21:03.109538 kubelet[3661]: I0813 00:21:03.109533 3661 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:21:03.110071 kubelet[3661]: I0813 00:21:03.109957 3661 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:21:03.119409 kubelet[3661]: I0813 00:21:03.117219 3661 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:21:03.135182 kubelet[3661]: I0813 00:21:03.134960 3661 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:21:03.144113 kubelet[3661]: E0813 00:21:03.144024 3661 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:21:03.144113 kubelet[3661]: I0813 00:21:03.144078 3661 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:21:03.153982 kubelet[3661]: I0813 00:21:03.153929 3661 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:21:03.154655 kubelet[3661]: I0813 00:21:03.154607 3661 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:21:03.154867 kubelet[3661]: I0813 00:21:03.154813 3661 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:21:03.155172 kubelet[3661]: I0813 00:21:03.154870 3661 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-31-162","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:21:03.155330 kubelet[3661]: I0813 00:21:03.155176 3661 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:21:03.155330 kubelet[3661]: I0813 00:21:03.155196 3661 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:21:03.155330 kubelet[3661]: I0813 00:21:03.155265 3661 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:21:03.155510 kubelet[3661]: I0813 00:21:03.155450 3661 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:21:03.155510 kubelet[3661]: I0813 00:21:03.155493 3661 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:21:03.155619 kubelet[3661]: I0813 00:21:03.155532 3661 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:21:03.155619 kubelet[3661]: I0813 00:21:03.155562 3661 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:21:03.165885 kubelet[3661]: I0813 00:21:03.165848 3661 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:21:03.167131 kubelet[3661]: I0813 00:21:03.167100 3661 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:21:03.168476 kubelet[3661]: I0813 00:21:03.168171 3661 server.go:1274] "Started kubelet" Aug 13 00:21:03.172515 kubelet[3661]: I0813 00:21:03.172482 3661 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:21:03.176241 kubelet[3661]: I0813 
00:21:03.176070 3661 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:21:03.177605 kubelet[3661]: I0813 00:21:03.177239 3661 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:21:03.177605 kubelet[3661]: I0813 00:21:03.177353 3661 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:21:03.183521 kubelet[3661]: I0813 00:21:03.183482 3661 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:21:03.191194 kubelet[3661]: I0813 00:21:03.190625 3661 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:21:03.191443 kubelet[3661]: E0813 00:21:03.191403 3661 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-31-162\" not found" Aug 13 00:21:03.195362 kubelet[3661]: I0813 00:21:03.192515 3661 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:21:03.201405 kubelet[3661]: I0813 00:21:03.201357 3661 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:21:03.202647 kubelet[3661]: I0813 00:21:03.201630 3661 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:21:03.209392 kubelet[3661]: I0813 00:21:03.209358 3661 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:21:03.237166 kubelet[3661]: I0813 00:21:03.233738 3661 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:21:03.237166 kubelet[3661]: I0813 00:21:03.236096 3661 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:21:03.263851 kubelet[3661]: I0813 00:21:03.262203 3661 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:21:03.263851 kubelet[3661]: I0813 00:21:03.262252 3661 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:21:03.263851 kubelet[3661]: I0813 00:21:03.262281 3661 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:21:03.263851 kubelet[3661]: E0813 00:21:03.262354 3661 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:21:03.290837 kubelet[3661]: I0813 00:21:03.290780 3661 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:21:03.294854 kubelet[3661]: E0813 00:21:03.294758 3661 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:21:03.366328 kubelet[3661]: E0813 00:21:03.366262 3661 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:21:03.445776 kubelet[3661]: I0813 00:21:03.445706 3661 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:21:03.445776 kubelet[3661]: I0813 00:21:03.445751 3661 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:21:03.445776 kubelet[3661]: I0813 00:21:03.445788 3661 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:21:03.446664 kubelet[3661]: I0813 00:21:03.446062 3661 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:21:03.446664 kubelet[3661]: I0813 00:21:03.446085 3661 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:21:03.446664 kubelet[3661]: I0813 00:21:03.446122 3661 policy_none.go:49] "None policy: Start" Aug 13 00:21:03.449467 kubelet[3661]: I0813 00:21:03.447885 3661 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:21:03.449467 kubelet[3661]: I0813 00:21:03.447929 3661 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:21:03.449467 kubelet[3661]: I0813 00:21:03.448224 3661 state_mem.go:75] "Updated machine memory state" Aug 13 00:21:03.451072 kubelet[3661]: I0813 00:21:03.451037 3661 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:21:03.451488 kubelet[3661]: I0813 00:21:03.451467 3661 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:21:03.451629 kubelet[3661]: I0813 00:21:03.451579 3661 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:21:03.453085 kubelet[3661]: I0813 00:21:03.453060 3661 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:21:03.574520 kubelet[3661]: I0813 00:21:03.574347 3661 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-31-162" Aug 13 00:21:03.596917 kubelet[3661]: I0813 00:21:03.595454 3661 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-31-162" Aug 13 00:21:03.596917 kubelet[3661]: I0813 00:21:03.595577 3661 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-31-162" Aug 13 00:21:03.704150 kubelet[3661]: I0813 00:21:03.703977 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c7f93b6629f23baac192cfd4cd572a46-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-162\" (UID: \"c7f93b6629f23baac192cfd4cd572a46\") " pod="kube-system/kube-controller-manager-ip-172-31-31-162" Aug 13 00:21:03.704150 kubelet[3661]: I0813 00:21:03.704075 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/75d82fd0422ba7761f745c0c2be67a17-ca-certs\") pod \"kube-apiserver-ip-172-31-31-162\" (UID: \"75d82fd0422ba7761f745c0c2be67a17\") " pod="kube-system/kube-apiserver-ip-172-31-31-162" Aug 13 00:21:03.704150 kubelet[3661]: I0813 00:21:03.704130 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/75d82fd0422ba7761f745c0c2be67a17-usr-share-ca-certificates\") pod 
\"kube-apiserver-ip-172-31-31-162\" (UID: \"75d82fd0422ba7761f745c0c2be67a17\") " pod="kube-system/kube-apiserver-ip-172-31-31-162" Aug 13 00:21:03.704595 kubelet[3661]: I0813 00:21:03.704175 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c7f93b6629f23baac192cfd4cd572a46-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-162\" (UID: \"c7f93b6629f23baac192cfd4cd572a46\") " pod="kube-system/kube-controller-manager-ip-172-31-31-162" Aug 13 00:21:03.704595 kubelet[3661]: I0813 00:21:03.704215 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c7f93b6629f23baac192cfd4cd572a46-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-162\" (UID: \"c7f93b6629f23baac192cfd4cd572a46\") " pod="kube-system/kube-controller-manager-ip-172-31-31-162" Aug 13 00:21:03.704595 kubelet[3661]: I0813 00:21:03.704260 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5b6ba1c110abad2ab96e1e2e7e87dfc9-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-162\" (UID: \"5b6ba1c110abad2ab96e1e2e7e87dfc9\") " pod="kube-system/kube-scheduler-ip-172-31-31-162" Aug 13 00:21:03.704595 kubelet[3661]: I0813 00:21:03.704294 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/75d82fd0422ba7761f745c0c2be67a17-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-162\" (UID: \"75d82fd0422ba7761f745c0c2be67a17\") " pod="kube-system/kube-apiserver-ip-172-31-31-162" Aug 13 00:21:03.704595 kubelet[3661]: I0813 00:21:03.704329 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c7f93b6629f23baac192cfd4cd572a46-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-162\" (UID: \"c7f93b6629f23baac192cfd4cd572a46\") " pod="kube-system/kube-controller-manager-ip-172-31-31-162" Aug 13 00:21:03.704886 kubelet[3661]: I0813 00:21:03.704363 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c7f93b6629f23baac192cfd4cd572a46-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-162\" (UID: \"c7f93b6629f23baac192cfd4cd572a46\") " pod="kube-system/kube-controller-manager-ip-172-31-31-162" Aug 13 00:21:04.156593 kubelet[3661]: I0813 00:21:04.156511 3661 apiserver.go:52] "Watching apiserver" Aug 13 00:21:04.201754 kubelet[3661]: I0813 00:21:04.201685 3661 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:21:04.265610 kubelet[3661]: I0813 00:21:04.264826 3661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-162" podStartSLOduration=1.264776223 podStartE2EDuration="1.264776223s" podCreationTimestamp="2025-08-13 00:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:21:04.264118323 +0000 UTC m=+1.272001867" watchObservedRunningTime="2025-08-13 00:21:04.264776223 +0000 UTC m=+1.272659743" Aug 13 00:21:04.279344 kubelet[3661]: I0813 00:21:04.279236 3661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ip-172-31-31-162" podStartSLOduration=1.279211815 podStartE2EDuration="1.279211815s" podCreationTimestamp="2025-08-13 00:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:21:04.279164247 +0000 UTC m=+1.287047755" watchObservedRunningTime="2025-08-13 00:21:04.279211815 +0000 UTC m=+1.287095347" Aug 13 00:21:04.330112 kubelet[3661]: I0813 00:21:04.329944 3661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-162" podStartSLOduration=1.32992372 podStartE2EDuration="1.32992372s" podCreationTimestamp="2025-08-13 00:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:21:04.302207367 +0000 UTC m=+1.310090887" watchObservedRunningTime="2025-08-13 00:21:04.32992372 +0000 UTC m=+1.337807240" Aug 13 00:21:07.307836 kubelet[3661]: I0813 00:21:07.307458 3661 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:21:07.311502 containerd[2138]: time="2025-08-13T00:21:07.311435166Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:21:07.319286 kubelet[3661]: I0813 00:21:07.316580 3661 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:21:08.247604 kubelet[3661]: I0813 00:21:08.247529 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2d016643-7aef-4015-801f-791528af92d8-kube-proxy\") pod \"kube-proxy-kkjdk\" (UID: \"2d016643-7aef-4015-801f-791528af92d8\") " pod="kube-system/kube-proxy-kkjdk" Aug 13 00:21:08.248126 kubelet[3661]: I0813 00:21:08.247918 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prqq4\" (UniqueName: \"kubernetes.io/projected/2d016643-7aef-4015-801f-791528af92d8-kube-api-access-prqq4\") pod \"kube-proxy-kkjdk\" (UID: \"2d016643-7aef-4015-801f-791528af92d8\") " pod="kube-system/kube-proxy-kkjdk" Aug 13 00:21:08.248507 kubelet[3661]: I0813 00:21:08.248397 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d016643-7aef-4015-801f-791528af92d8-xtables-lock\") pod \"kube-proxy-kkjdk\" (UID: \"2d016643-7aef-4015-801f-791528af92d8\") " pod="kube-system/kube-proxy-kkjdk" Aug 13 00:21:08.248712 kubelet[3661]: I0813 00:21:08.248589 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d016643-7aef-4015-801f-791528af92d8-lib-modules\") pod \"kube-proxy-kkjdk\" (UID: \"2d016643-7aef-4015-801f-791528af92d8\") " pod="kube-system/kube-proxy-kkjdk" Aug 13 00:21:08.475972 containerd[2138]: time="2025-08-13T00:21:08.474550352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkjdk,Uid:2d016643-7aef-4015-801f-791528af92d8,Namespace:kube-system,Attempt:0,}" Aug 13 00:21:08.525305 containerd[2138]: time="2025-08-13T00:21:08.524952644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:21:08.525305 containerd[2138]: time="2025-08-13T00:21:08.525097964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:21:08.525305 containerd[2138]: time="2025-08-13T00:21:08.525152336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:08.526288 containerd[2138]: time="2025-08-13T00:21:08.526041644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:08.556023 kubelet[3661]: I0813 00:21:08.552375 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/50a85f18-7700-47ba-92db-84f83fef182f-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-w4m7k\" (UID: \"50a85f18-7700-47ba-92db-84f83fef182f\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-w4m7k" Aug 13 00:21:08.556023 kubelet[3661]: I0813 00:21:08.552443 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlbmr\" (UniqueName: \"kubernetes.io/projected/50a85f18-7700-47ba-92db-84f83fef182f-kube-api-access-nlbmr\") pod \"tigera-operator-5bf8dfcb4-w4m7k\" (UID: \"50a85f18-7700-47ba-92db-84f83fef182f\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-w4m7k" Aug 13 00:21:08.600126 containerd[2138]: time="2025-08-13T00:21:08.600059253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkjdk,Uid:2d016643-7aef-4015-801f-791528af92d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"1934a676e0e90e2556c1aa60f9409b0897c65ce0bfc2f257929a128171721bb8\"" Aug 13 00:21:08.605254 containerd[2138]: time="2025-08-13T00:21:08.605180661Z" level=info msg="CreateContainer within sandbox \"1934a676e0e90e2556c1aa60f9409b0897c65ce0bfc2f257929a128171721bb8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:21:08.635426 containerd[2138]: time="2025-08-13T00:21:08.635341965Z" level=info msg="CreateContainer within sandbox \"1934a676e0e90e2556c1aa60f9409b0897c65ce0bfc2f257929a128171721bb8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"60407bcfa19fec02f8e56d6557465c8c1fa233d67321192bba17aeaa03b1aa8b\"" Aug 13 00:21:08.638191 containerd[2138]: time="2025-08-13T00:21:08.638119665Z" level=info msg="StartContainer for \"60407bcfa19fec02f8e56d6557465c8c1fa233d67321192bba17aeaa03b1aa8b\"" Aug 13 00:21:08.755769 containerd[2138]: time="2025-08-13T00:21:08.755663373Z" level=info msg="StartContainer for \"60407bcfa19fec02f8e56d6557465c8c1fa233d67321192bba17aeaa03b1aa8b\" returns successfully" Aug 13 00:21:08.763250 containerd[2138]: time="2025-08-13T00:21:08.762676366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-w4m7k,Uid:50a85f18-7700-47ba-92db-84f83fef182f,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:21:08.816543 containerd[2138]: time="2025-08-13T00:21:08.816262954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:21:08.817021 containerd[2138]: time="2025-08-13T00:21:08.816827446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:21:08.819290 containerd[2138]: time="2025-08-13T00:21:08.819023002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:08.819712 containerd[2138]: time="2025-08-13T00:21:08.819579034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:08.924443 containerd[2138]: time="2025-08-13T00:21:08.924288778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-w4m7k,Uid:50a85f18-7700-47ba-92db-84f83fef182f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ef642a9ea866d0cf8babb9631b5d7c21d8c6343fbf783319886161d4b1a40c65\"" Aug 13 00:21:08.932938 containerd[2138]: time="2025-08-13T00:21:08.932500810Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:21:10.486964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3672532918.mount: Deactivated successfully. Aug 13 00:21:11.178055 containerd[2138]: time="2025-08-13T00:21:11.177965794Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:11.179726 containerd[2138]: time="2025-08-13T00:21:11.179644126Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 13 00:21:11.181683 containerd[2138]: time="2025-08-13T00:21:11.181558846Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:11.197290 containerd[2138]: time="2025-08-13T00:21:11.197198722Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:11.199298 containerd[2138]: time="2025-08-13T00:21:11.199100614Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.266204872s" Aug 13 00:21:11.199298 containerd[2138]: time="2025-08-13T00:21:11.199157410Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 13 00:21:11.203983 containerd[2138]: time="2025-08-13T00:21:11.203757922Z" level=info msg="CreateContainer within sandbox \"ef642a9ea866d0cf8babb9631b5d7c21d8c6343fbf783319886161d4b1a40c65\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:21:11.224785 containerd[2138]: time="2025-08-13T00:21:11.224702806Z" level=info msg="CreateContainer within sandbox \"ef642a9ea866d0cf8babb9631b5d7c21d8c6343fbf783319886161d4b1a40c65\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"adb9a74a9b86ad16fa168fa25991e182c4c97209ec19d72227fb0057c4d01ebc\"" Aug 13 00:21:11.227497 containerd[2138]: time="2025-08-13T00:21:11.225822406Z" level=info msg="StartContainer for \"adb9a74a9b86ad16fa168fa25991e182c4c97209ec19d72227fb0057c4d01ebc\"" Aug 13 00:21:11.337764 containerd[2138]: time="2025-08-13T00:21:11.336819226Z" 
level=info msg="StartContainer for \"adb9a74a9b86ad16fa168fa25991e182c4c97209ec19d72227fb0057c4d01ebc\" returns successfully" Aug 13 00:21:11.387648 kubelet[3661]: I0813 00:21:11.387416 3661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kkjdk" podStartSLOduration=3.387349139 podStartE2EDuration="3.387349139s" podCreationTimestamp="2025-08-13 00:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:21:09.382435305 +0000 UTC m=+6.390318837" watchObservedRunningTime="2025-08-13 00:21:11.387349139 +0000 UTC m=+8.395232659" Aug 13 00:21:13.302696 kubelet[3661]: I0813 00:21:13.302591 3661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-w4m7k" podStartSLOduration=3.030227636 podStartE2EDuration="5.302569128s" podCreationTimestamp="2025-08-13 00:21:08 +0000 UTC" firstStartedPulling="2025-08-13 00:21:08.928618966 +0000 UTC m=+5.936502474" lastFinishedPulling="2025-08-13 00:21:11.200960458 +0000 UTC m=+8.208843966" observedRunningTime="2025-08-13 00:21:11.390970235 +0000 UTC m=+8.398853755" watchObservedRunningTime="2025-08-13 00:21:13.302569128 +0000 UTC m=+10.310452636" Aug 13 00:21:20.427343 sudo[2485]: pam_unix(sudo:session): session closed for user root Aug 13 00:21:20.454879 sshd[2481]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:20.463752 systemd-logind[2103]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:21:20.471548 systemd[1]: sshd@6-172.31.31.162:22-139.178.89.65:36166.service: Deactivated successfully. Aug 13 00:21:20.487415 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:21:20.496485 systemd-logind[2103]: Removed session 7. 
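
The pod_startup_latency_tracker entries above relate their fields in a fixed way: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window between firstStartedPulling and lastFinishedPulling (zero for the control-plane pods, whose images were already present); the m=+... suffixes are Go's monotonic-clock readings relative to process start. A minimal Go sketch, not kubelet code, that reproduces the tigera-operator numbers logged at 00:21:13:

package main

import (
	"fmt"
	"time"
)

// Hypothetical sketch (not kubelet source): reproduce the SLO arithmetic from the
// "Observed pod startup duration" entry for tigera-operator-5bf8dfcb4-w4m7k above.
func main() {
	created := time.Date(2025, 8, 13, 0, 21, 8, 0, time.UTC)           // podCreationTimestamp
	firstPull := time.Date(2025, 8, 13, 0, 21, 8, 928618966, time.UTC) // firstStartedPulling
	lastPull := time.Date(2025, 8, 13, 0, 21, 11, 200960458, time.UTC) // lastFinishedPulling
	observed := time.Date(2025, 8, 13, 0, 21, 13, 302569128, time.UTC) // watchObservedRunningTime

	e2e := observed.Sub(created)         // 5.302569128s = podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 3.030227636s = podStartSLOduration
	fmt.Println(e2e, slo)
}
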
Aug 13 00:21:32.939332 kubelet[3661]: I0813 00:21:32.939268 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa83c157-33aa-485d-988e-53d7d0fe47cb-tigera-ca-bundle\") pod \"calico-typha-dbf995694-qttn6\" (UID: \"aa83c157-33aa-485d-988e-53d7d0fe47cb\") " pod="calico-system/calico-typha-dbf995694-qttn6"
Aug 13 00:21:32.939955 kubelet[3661]: I0813 00:21:32.939341 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/aa83c157-33aa-485d-988e-53d7d0fe47cb-typha-certs\") pod \"calico-typha-dbf995694-qttn6\" (UID: \"aa83c157-33aa-485d-988e-53d7d0fe47cb\") " pod="calico-system/calico-typha-dbf995694-qttn6"
Aug 13 00:21:32.939955 kubelet[3661]: I0813 00:21:32.939388 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trrfc\" (UniqueName: \"kubernetes.io/projected/aa83c157-33aa-485d-988e-53d7d0fe47cb-kube-api-access-trrfc\") pod \"calico-typha-dbf995694-qttn6\" (UID: \"aa83c157-33aa-485d-988e-53d7d0fe47cb\") " pod="calico-system/calico-typha-dbf995694-qttn6"
Aug 13 00:21:33.191604 containerd[2138]: time="2025-08-13T00:21:33.191310871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dbf995694-qttn6,Uid:aa83c157-33aa-485d-988e-53d7d0fe47cb,Namespace:calico-system,Attempt:0,}"
Aug 13 00:21:33.245024 kubelet[3661]: I0813 00:21:33.243231 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b6daf963-5978-4a5d-96b6-a144b649a51e-flexvol-driver-host\") pod \"calico-node-kvv5p\" (UID: \"b6daf963-5978-4a5d-96b6-a144b649a51e\") " pod="calico-system/calico-node-kvv5p"
Aug 13 00:21:33.245024 kubelet[3661]: I0813 00:21:33.243293 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6daf963-5978-4a5d-96b6-a144b649a51e-lib-modules\") pod \"calico-node-kvv5p\" (UID: \"b6daf963-5978-4a5d-96b6-a144b649a51e\") " pod="calico-system/calico-node-kvv5p"
Aug 13 00:21:33.245024 kubelet[3661]: I0813 00:21:33.243333 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b6daf963-5978-4a5d-96b6-a144b649a51e-var-run-calico\") pod \"calico-node-kvv5p\" (UID: \"b6daf963-5978-4a5d-96b6-a144b649a51e\") " pod="calico-system/calico-node-kvv5p"
Aug 13 00:21:33.245024 kubelet[3661]: I0813 00:21:33.243370 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b6daf963-5978-4a5d-96b6-a144b649a51e-cni-log-dir\") pod \"calico-node-kvv5p\" (UID: \"b6daf963-5978-4a5d-96b6-a144b649a51e\") " pod="calico-system/calico-node-kvv5p"
Aug 13 00:21:33.245024 kubelet[3661]: I0813 00:21:33.243405 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b6daf963-5978-4a5d-96b6-a144b649a51e-node-certs\") pod \"calico-node-kvv5p\" (UID: \"b6daf963-5978-4a5d-96b6-a144b649a51e\") " pod="calico-system/calico-node-kvv5p"
Aug 13 00:21:33.245438 kubelet[3661]: I0813 00:21:33.243441 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b6daf963-5978-4a5d-96b6-a144b649a51e-var-lib-calico\") pod \"calico-node-kvv5p\" (UID: \"b6daf963-5978-4a5d-96b6-a144b649a51e\") " pod="calico-system/calico-node-kvv5p"
Aug 13 00:21:33.245438 kubelet[3661]: I0813 00:21:33.243481 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6daf963-5978-4a5d-96b6-a144b649a51e-tigera-ca-bundle\") pod \"calico-node-kvv5p\" (UID: \"b6daf963-5978-4a5d-96b6-a144b649a51e\") " pod="calico-system/calico-node-kvv5p"
Aug 13 00:21:33.245438 kubelet[3661]: I0813 00:21:33.243554 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b6daf963-5978-4a5d-96b6-a144b649a51e-cni-bin-dir\") pod \"calico-node-kvv5p\" (UID: \"b6daf963-5978-4a5d-96b6-a144b649a51e\") " pod="calico-system/calico-node-kvv5p"
Aug 13 00:21:33.245438 kubelet[3661]: I0813 00:21:33.243602 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz4td\" (UniqueName: \"kubernetes.io/projected/b6daf963-5978-4a5d-96b6-a144b649a51e-kube-api-access-jz4td\") pod \"calico-node-kvv5p\" (UID: \"b6daf963-5978-4a5d-96b6-a144b649a51e\") " pod="calico-system/calico-node-kvv5p"
Aug 13 00:21:33.245438 kubelet[3661]: I0813 00:21:33.243644 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b6daf963-5978-4a5d-96b6-a144b649a51e-cni-net-dir\") pod \"calico-node-kvv5p\" (UID: \"b6daf963-5978-4a5d-96b6-a144b649a51e\") " pod="calico-system/calico-node-kvv5p"
Aug 13 00:21:33.246199 kubelet[3661]: I0813 00:21:33.243686 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b6daf963-5978-4a5d-96b6-a144b649a51e-policysync\") pod \"calico-node-kvv5p\" (UID: \"b6daf963-5978-4a5d-96b6-a144b649a51e\") " pod="calico-system/calico-node-kvv5p"
Aug 13 00:21:33.246199 kubelet[3661]: I0813 00:21:33.243724 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6daf963-5978-4a5d-96b6-a144b649a51e-xtables-lock\") pod \"calico-node-kvv5p\" (UID: \"b6daf963-5978-4a5d-96b6-a144b649a51e\") " pod="calico-system/calico-node-kvv5p"
Aug 13 00:21:33.273056 containerd[2138]: time="2025-08-13T00:21:33.268482643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:21:33.273056 containerd[2138]: time="2025-08-13T00:21:33.269986687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:21:33.274902 containerd[2138]: time="2025-08-13T00:21:33.271865335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:21:33.277636 containerd[2138]: time="2025-08-13T00:21:33.276138031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:21:33.367231 kubelet[3661]: E0813 00:21:33.363061 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.367231 kubelet[3661]: W0813 00:21:33.363469 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.367231 kubelet[3661]: E0813 00:21:33.364138 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.375442 kubelet[3661]: E0813 00:21:33.369456 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.375442 kubelet[3661]: W0813 00:21:33.373585 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.375442 kubelet[3661]: E0813 00:21:33.373642 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.384028 kubelet[3661]: E0813 00:21:33.382171 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.384028 kubelet[3661]: W0813 00:21:33.382213 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.384028 kubelet[3661]: E0813 00:21:33.382250 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.390026 kubelet[3661]: E0813 00:21:33.387181 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.390026 kubelet[3661]: W0813 00:21:33.387594 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.390026 kubelet[3661]: E0813 00:21:33.388109 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.391400 kubelet[3661]: E0813 00:21:33.390550 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.391400 kubelet[3661]: W0813 00:21:33.390589 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.391400 kubelet[3661]: E0813 00:21:33.390837 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
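
The repeating E/W/E triplet that starts here is the kubelet's FlexVolume prober tripping over calico's nodeagent~uds driver directory: the uds binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ has not been installed by calico-node yet, so the init call produces no output, and decoding that empty output as JSON fails with exactly the logged error. A minimal Go sketch of the failure mode, assuming a simplified driver-call flow rather than kubelet's actual one:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the general shape of the JSON a FlexVolume driver prints;
// the field set here is illustrative, not kubelet's exact status type.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// Driver path taken from the log; at this point in boot the binary is absent,
	// so the exec fails and out stays empty.
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	out, err := exec.Command(driver, "init").CombinedOutput()
	fmt.Printf("exec error: %v, output: %q\n", err, out)

	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unmarshal error:", err) // "unexpected end of JSON input" for empty output
	}
}
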
Aug 13 00:21:33.399059 kubelet[3661]: E0813 00:21:33.392290 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.399059 kubelet[3661]: W0813 00:21:33.392451 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.399059 kubelet[3661]: E0813 00:21:33.392724 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.399059 kubelet[3661]: E0813 00:21:33.393659 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.399059 kubelet[3661]: W0813 00:21:33.393687 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.399059 kubelet[3661]: E0813 00:21:33.394676 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.399059 kubelet[3661]: E0813 00:21:33.395387 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.399059 kubelet[3661]: W0813 00:21:33.395448 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.399059 kubelet[3661]: E0813 00:21:33.396659 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.401208 kubelet[3661]: E0813 00:21:33.399711 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.401457 kubelet[3661]: W0813 00:21:33.401422 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.402025 kubelet[3661]: E0813 00:21:33.401592 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.409025 kubelet[3661]: E0813 00:21:33.408131 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.409779 kubelet[3661]: W0813 00:21:33.409709 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.412847 kubelet[3661]: E0813 00:21:33.411854 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.415284 kubelet[3661]: W0813 00:21:33.413168 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.415284 kubelet[3661]: E0813 00:21:33.414420 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.415284 kubelet[3661]: W0813 00:21:33.414449 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.415284 kubelet[3661]: E0813 00:21:33.415057 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.415284 kubelet[3661]: E0813 00:21:33.415156 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.415284 kubelet[3661]: E0813 00:21:33.412147 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xwwn" podUID="5a310c1b-b07f-40c9-96ad-53e0942080e1"
Aug 13 00:21:33.415284 kubelet[3661]: E0813 00:21:33.415220 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.415748 kubelet[3661]: E0813 00:21:33.415537 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.415748 kubelet[3661]: W0813 00:21:33.415579 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.415748 kubelet[3661]: E0813 00:21:33.415731 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
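
The pod_workers entry above for csi-node-driver-4xwwn fails for a different reason than the FlexVolume noise around it: the node still reports NetworkReady=false because no CNI config exists yet. The containerd message back at 00:21:07 ("wait for other system components to drop the config") and the cni-net-dir hostPath mounted into calico-node are two ends of the same handshake: once calico-node runs, it writes a conflist (typically 10-calico.conflist) into the CNI conf dir and the condition clears. A Go sketch of the check the runtime is effectively polling, assuming the default /etc/cni/net.d location:

package main

import (
	"fmt"
	"path/filepath"
)

// Hypothetical sketch: the gist of what keeps NetworkReady=false above. The CRI
// plugin stays not-ready until a network config appears in its conf dir, which
// calico-node drops once the calico-node-kvv5p pod is up.
func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*.conflist")
	if err != nil || len(matches) == 0 {
		fmt.Println("NetworkReady=false: cni plugin not initialized (no conflist yet)")
		return
	}
	fmt.Println("CNI config present:", matches)
}
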
Aug 13 00:21:33.421328 kubelet[3661]: E0813 00:21:33.421275 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.421328 kubelet[3661]: W0813 00:21:33.421316 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.421980 kubelet[3661]: E0813 00:21:33.421754 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.421980 kubelet[3661]: E0813 00:21:33.422810 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.421980 kubelet[3661]: W0813 00:21:33.422839 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.421980 kubelet[3661]: E0813 00:21:33.422872 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.431924 containerd[2138]: time="2025-08-13T00:21:33.430308524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kvv5p,Uid:b6daf963-5978-4a5d-96b6-a144b649a51e,Namespace:calico-system,Attempt:0,}"
Aug 13 00:21:33.515215 kubelet[3661]: E0813 00:21:33.513916 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.515401 kubelet[3661]: W0813 00:21:33.515097 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.515401 kubelet[3661]: E0813 00:21:33.515264 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.521707 kubelet[3661]: E0813 00:21:33.521398 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.521707 kubelet[3661]: W0813 00:21:33.521433 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.521707 kubelet[3661]: E0813 00:21:33.521467 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.524101 kubelet[3661]: E0813 00:21:33.523737 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.524101 kubelet[3661]: W0813 00:21:33.523804 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.524917 kubelet[3661]: E0813 00:21:33.524283 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.526263 kubelet[3661]: E0813 00:21:33.526203 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.526656 kubelet[3661]: W0813 00:21:33.526375 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.526656 kubelet[3661]: E0813 00:21:33.526411 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.528639 kubelet[3661]: E0813 00:21:33.528140 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.528639 kubelet[3661]: W0813 00:21:33.528441 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.528639 kubelet[3661]: E0813 00:21:33.528484 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.529793 kubelet[3661]: E0813 00:21:33.529552 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.529793 kubelet[3661]: W0813 00:21:33.529639 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.529793 kubelet[3661]: E0813 00:21:33.529728 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.531113 kubelet[3661]: E0813 00:21:33.530666 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.531113 kubelet[3661]: W0813 00:21:33.530698 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.531113 kubelet[3661]: E0813 00:21:33.530748 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.532309 kubelet[3661]: E0813 00:21:33.531826 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.532309 kubelet[3661]: W0813 00:21:33.531882 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.532309 kubelet[3661]: E0813 00:21:33.531914 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.533153 kubelet[3661]: E0813 00:21:33.532762 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.533153 kubelet[3661]: W0813 00:21:33.532816 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.533153 kubelet[3661]: E0813 00:21:33.532845 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.534009 kubelet[3661]: E0813 00:21:33.533880 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.534009 kubelet[3661]: W0813 00:21:33.533939 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.534833 kubelet[3661]: E0813 00:21:33.533969 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.536747 kubelet[3661]: E0813 00:21:33.536478 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.536747 kubelet[3661]: W0813 00:21:33.536525 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.536747 kubelet[3661]: E0813 00:21:33.536560 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.545041 kubelet[3661]: E0813 00:21:33.544686 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.545041 kubelet[3661]: W0813 00:21:33.544723 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.545041 kubelet[3661]: E0813 00:21:33.544765 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.548542 kubelet[3661]: E0813 00:21:33.548289 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.548542 kubelet[3661]: W0813 00:21:33.548325 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.548542 kubelet[3661]: E0813 00:21:33.548357 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.550492 kubelet[3661]: E0813 00:21:33.549964 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.550492 kubelet[3661]: W0813 00:21:33.550025 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.550492 kubelet[3661]: E0813 00:21:33.550060 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.551574 containerd[2138]: time="2025-08-13T00:21:33.549596085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:21:33.551574 containerd[2138]: time="2025-08-13T00:21:33.550324905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:21:33.551574 containerd[2138]: time="2025-08-13T00:21:33.550375785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:21:33.551829 kubelet[3661]: E0813 00:21:33.551394 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.551829 kubelet[3661]: W0813 00:21:33.551422 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.551829 kubelet[3661]: E0813 00:21:33.551454 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.554514 kubelet[3661]: E0813 00:21:33.553600 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.554514 kubelet[3661]: W0813 00:21:33.553636 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.554514 kubelet[3661]: E0813 00:21:33.553969 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.557872 kubelet[3661]: E0813 00:21:33.556459 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.557872 kubelet[3661]: W0813 00:21:33.557187 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.557872 kubelet[3661]: E0813 00:21:33.557234 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.558609 kubelet[3661]: E0813 00:21:33.558575 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.558882 kubelet[3661]: W0813 00:21:33.558853 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.559225 kubelet[3661]: E0813 00:21:33.559197 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.561660 kubelet[3661]: E0813 00:21:33.561385 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.561788 containerd[2138]: time="2025-08-13T00:21:33.560533797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:21:33.562441 kubelet[3661]: W0813 00:21:33.562164 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.562441 kubelet[3661]: E0813 00:21:33.562223 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.564479 kubelet[3661]: E0813 00:21:33.563827 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.564479 kubelet[3661]: W0813 00:21:33.563928 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.564479 kubelet[3661]: E0813 00:21:33.563965 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.566037 kubelet[3661]: E0813 00:21:33.565409 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.566037 kubelet[3661]: W0813 00:21:33.565455 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.566037 kubelet[3661]: E0813 00:21:33.565490 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.566037 kubelet[3661]: I0813 00:21:33.565531 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5a310c1b-b07f-40c9-96ad-53e0942080e1-varrun\") pod \"csi-node-driver-4xwwn\" (UID: \"5a310c1b-b07f-40c9-96ad-53e0942080e1\") " pod="calico-system/csi-node-driver-4xwwn"
Aug 13 00:21:33.567473 kubelet[3661]: E0813 00:21:33.566938 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.567473 kubelet[3661]: W0813 00:21:33.566976 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.567473 kubelet[3661]: E0813 00:21:33.567059 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.567473 kubelet[3661]: I0813 00:21:33.567103 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2ckv\" (UniqueName: \"kubernetes.io/projected/5a310c1b-b07f-40c9-96ad-53e0942080e1-kube-api-access-s2ckv\") pod \"csi-node-driver-4xwwn\" (UID: \"5a310c1b-b07f-40c9-96ad-53e0942080e1\") " pod="calico-system/csi-node-driver-4xwwn"
Aug 13 00:21:33.568980 kubelet[3661]: E0813 00:21:33.568356 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.568980 kubelet[3661]: W0813 00:21:33.568391 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.568980 kubelet[3661]: E0813 00:21:33.568450 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.568980 kubelet[3661]: I0813 00:21:33.568526 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5a310c1b-b07f-40c9-96ad-53e0942080e1-registration-dir\") pod \"csi-node-driver-4xwwn\" (UID: \"5a310c1b-b07f-40c9-96ad-53e0942080e1\") " pod="calico-system/csi-node-driver-4xwwn"
Aug 13 00:21:33.570706 kubelet[3661]: E0813 00:21:33.570316 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.570706 kubelet[3661]: W0813 00:21:33.570352 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.570706 kubelet[3661]: E0813 00:21:33.570502 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.571283 kubelet[3661]: I0813 00:21:33.570829 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5a310c1b-b07f-40c9-96ad-53e0942080e1-socket-dir\") pod \"csi-node-driver-4xwwn\" (UID: \"5a310c1b-b07f-40c9-96ad-53e0942080e1\") " pod="calico-system/csi-node-driver-4xwwn"
Aug 13 00:21:33.575698 kubelet[3661]: E0813 00:21:33.574750 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.575698 kubelet[3661]: W0813 00:21:33.574794 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.575698 kubelet[3661]: E0813 00:21:33.574847 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.575698 kubelet[3661]: E0813 00:21:33.575432 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.575698 kubelet[3661]: W0813 00:21:33.575454 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.575698 kubelet[3661]: E0813 00:21:33.575572 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.577673 kubelet[3661]: E0813 00:21:33.576534 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.577673 kubelet[3661]: W0813 00:21:33.576561 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.578956 kubelet[3661]: E0813 00:21:33.578056 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.580032 kubelet[3661]: E0813 00:21:33.579428 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.580032 kubelet[3661]: W0813 00:21:33.579460 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.581429 kubelet[3661]: E0813 00:21:33.580559 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.582658 kubelet[3661]: E0813 00:21:33.582146 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.582658 kubelet[3661]: W0813 00:21:33.582180 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.584138 kubelet[3661]: E0813 00:21:33.583739 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.584138 kubelet[3661]: W0813 00:21:33.583774 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.584138 kubelet[3661]: E0813 00:21:33.584067 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.584138 kubelet[3661]: I0813 00:21:33.584121 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a310c1b-b07f-40c9-96ad-53e0942080e1-kubelet-dir\") pod \"csi-node-driver-4xwwn\" (UID: \"5a310c1b-b07f-40c9-96ad-53e0942080e1\") " pod="calico-system/csi-node-driver-4xwwn"
Aug 13 00:21:33.584138 kubelet[3661]: E0813 00:21:33.584147 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.586099 kubelet[3661]: E0813 00:21:33.585301 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.586099 kubelet[3661]: W0813 00:21:33.585416 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.586099 kubelet[3661]: E0813 00:21:33.585467 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.587684 kubelet[3661]: E0813 00:21:33.587291 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.587684 kubelet[3661]: W0813 00:21:33.587328 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.587684 kubelet[3661]: E0813 00:21:33.587383 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.590239 kubelet[3661]: E0813 00:21:33.589707 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.590239 kubelet[3661]: W0813 00:21:33.590126 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.590239 kubelet[3661]: E0813 00:21:33.590170 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.593394 kubelet[3661]: E0813 00:21:33.592543 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.593394 kubelet[3661]: W0813 00:21:33.592579 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.593394 kubelet[3661]: E0813 00:21:33.592667 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.596573 kubelet[3661]: E0813 00:21:33.596390 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.596573 kubelet[3661]: W0813 00:21:33.596425 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.596573 kubelet[3661]: E0813 00:21:33.596460 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.688540 kubelet[3661]: E0813 00:21:33.688490 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.688540 kubelet[3661]: W0813 00:21:33.688529 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.688883 kubelet[3661]: E0813 00:21:33.688565 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.690446 kubelet[3661]: E0813 00:21:33.690395 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.690446 kubelet[3661]: W0813 00:21:33.690435 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.691382 kubelet[3661]: E0813 00:21:33.690489 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.693508 kubelet[3661]: E0813 00:21:33.693456 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.693508 kubelet[3661]: W0813 00:21:33.693497 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.694314 kubelet[3661]: E0813 00:21:33.693747 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.694314 kubelet[3661]: E0813 00:21:33.694231 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.694314 kubelet[3661]: W0813 00:21:33.694254 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.695641 kubelet[3661]: E0813 00:21:33.695431 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.695641 kubelet[3661]: W0813 00:21:33.695473 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.695641 kubelet[3661]: E0813 00:21:33.695509 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.695641 kubelet[3661]: E0813 00:21:33.695570 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.698128 kubelet[3661]: E0813 00:21:33.697608 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.698128 kubelet[3661]: W0813 00:21:33.697648 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.698128 kubelet[3661]: E0813 00:21:33.697698 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.700291 kubelet[3661]: E0813 00:21:33.700192 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.700291 kubelet[3661]: W0813 00:21:33.700233 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.700883 kubelet[3661]: E0813 00:21:33.700632 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.702071 kubelet[3661]: E0813 00:21:33.701334 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.702071 kubelet[3661]: W0813 00:21:33.701374 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.702071 kubelet[3661]: E0813 00:21:33.701552 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.703579 kubelet[3661]: E0813 00:21:33.703324 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.703579 kubelet[3661]: W0813 00:21:33.703352 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.703579 kubelet[3661]: E0813 00:21:33.703419 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.706229 kubelet[3661]: E0813 00:21:33.706177 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.706229 kubelet[3661]: W0813 00:21:33.706216 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.707496 kubelet[3661]: E0813 00:21:33.706669 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.708089 kubelet[3661]: E0813 00:21:33.707898 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.708089 kubelet[3661]: W0813 00:21:33.707936 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.708785 kubelet[3661]: E0813 00:21:33.708553 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.711377 kubelet[3661]: E0813 00:21:33.711326 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.711377 kubelet[3661]: W0813 00:21:33.711365 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.712967 kubelet[3661]: E0813 00:21:33.711763 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.713710 kubelet[3661]: E0813 00:21:33.713299 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.713710 kubelet[3661]: W0813 00:21:33.713336 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.715523 kubelet[3661]: E0813 00:21:33.714491 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.715523 kubelet[3661]: E0813 00:21:33.715342 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.715523 kubelet[3661]: W0813 00:21:33.715371 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.715523 kubelet[3661]: E0813 00:21:33.715453 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.719041 kubelet[3661]: E0813 00:21:33.718151 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.719041 kubelet[3661]: W0813 00:21:33.718191 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.719041 kubelet[3661]: E0813 00:21:33.718259 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.721160 kubelet[3661]: E0813 00:21:33.721104 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.721160 kubelet[3661]: W0813 00:21:33.721145 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.721393 kubelet[3661]: E0813 00:21:33.721310 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.722435 kubelet[3661]: E0813 00:21:33.722326 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.722435 kubelet[3661]: W0813 00:21:33.722352 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.725437 kubelet[3661]: E0813 00:21:33.723048 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.727390 kubelet[3661]: E0813 00:21:33.726168 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.727390 kubelet[3661]: W0813 00:21:33.726207 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.727390 kubelet[3661]: E0813 00:21:33.727131 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.727390 kubelet[3661]: W0813 00:21:33.727160 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.731659 kubelet[3661]: E0813 00:21:33.729456 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.731659 kubelet[3661]: W0813 00:21:33.729496 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.731659 kubelet[3661]: E0813 00:21:33.730400 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.731659 kubelet[3661]: W0813 00:21:33.730424 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.731659 kubelet[3661]: E0813 00:21:33.730456 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:21:33.732710 kubelet[3661]: E0813 00:21:33.731770 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:21:33.732710 kubelet[3661]: W0813 00:21:33.731796 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:21:33.732710 kubelet[3661]: E0813 00:21:33.731828 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Aug 13 00:21:33.732710 kubelet[3661]: E0813 00:21:33.731876 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:33.732972 kubelet[3661]: E0813 00:21:33.732731 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:33.732972 kubelet[3661]: W0813 00:21:33.732756 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:33.732972 kubelet[3661]: E0813 00:21:33.732786 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:33.735115 kubelet[3661]: E0813 00:21:33.733954 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:33.737567 kubelet[3661]: E0813 00:21:33.735688 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:33.740107 kubelet[3661]: E0813 00:21:33.739591 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:33.740107 kubelet[3661]: W0813 00:21:33.739631 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:33.740107 kubelet[3661]: E0813 00:21:33.739667 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:33.745130 kubelet[3661]: E0813 00:21:33.743184 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:33.745130 kubelet[3661]: W0813 00:21:33.743224 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:33.745130 kubelet[3661]: E0813 00:21:33.743260 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:33.797878 kubelet[3661]: E0813 00:21:33.797656 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:33.797878 kubelet[3661]: W0813 00:21:33.797689 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:33.797878 kubelet[3661]: E0813 00:21:33.797726 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:21:33.804930 containerd[2138]: time="2025-08-13T00:21:33.804857794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-dbf995694-qttn6,Uid:aa83c157-33aa-485d-988e-53d7d0fe47cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc0a52c52474947dd0ed65b1c4043a8489b5e7fc091b0bb4e1817a3ab53e8bcd\"" Aug 13 00:21:33.812899 containerd[2138]: time="2025-08-13T00:21:33.812438326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:21:33.927981 containerd[2138]: time="2025-08-13T00:21:33.924615935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kvv5p,Uid:b6daf963-5978-4a5d-96b6-a144b649a51e,Namespace:calico-system,Attempt:0,} returns sandbox id \"df0338ace1b19c507366292ceed84c594aecd51dd43e25f7081e1a9ba55ecf53\"" Aug 13 00:21:34.912293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1985495654.mount: Deactivated successfully. Aug 13 00:21:35.264062 kubelet[3661]: E0813 00:21:35.263841 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xwwn" podUID="5a310c1b-b07f-40c9-96ad-53e0942080e1" Aug 13 00:21:35.735385 containerd[2138]: time="2025-08-13T00:21:35.735318168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:35.736809 containerd[2138]: time="2025-08-13T00:21:35.736753380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Aug 13 00:21:35.737923 containerd[2138]: time="2025-08-13T00:21:35.737849040Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:35.741508 containerd[2138]: time="2025-08-13T00:21:35.741443964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:35.743314 containerd[2138]: time="2025-08-13T00:21:35.743105136Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.930604122s" Aug 13 00:21:35.743314 containerd[2138]: time="2025-08-13T00:21:35.743165388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 13 00:21:35.747778 containerd[2138]: time="2025-08-13T00:21:35.746750760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:21:35.761508 containerd[2138]: time="2025-08-13T00:21:35.761446440Z" level=info msg="CreateContainer within sandbox \"fc0a52c52474947dd0ed65b1c4043a8489b5e7fc091b0bb4e1817a3ab53e8bcd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:21:35.792986 containerd[2138]: time="2025-08-13T00:21:35.792875376Z" level=info msg="CreateContainer within sandbox 
\"fc0a52c52474947dd0ed65b1c4043a8489b5e7fc091b0bb4e1817a3ab53e8bcd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cbee8bb4b8cac9af543cb5a303edacffd161a5cb62ab39f0a608aed37601105f\"" Aug 13 00:21:35.795045 containerd[2138]: time="2025-08-13T00:21:35.794395344Z" level=info msg="StartContainer for \"cbee8bb4b8cac9af543cb5a303edacffd161a5cb62ab39f0a608aed37601105f\"" Aug 13 00:21:35.924223 containerd[2138]: time="2025-08-13T00:21:35.923986992Z" level=info msg="StartContainer for \"cbee8bb4b8cac9af543cb5a303edacffd161a5cb62ab39f0a608aed37601105f\" returns successfully" Aug 13 00:21:36.492516 kubelet[3661]: E0813 00:21:36.492397 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.492516 kubelet[3661]: W0813 00:21:36.492457 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.492516 kubelet[3661]: E0813 00:21:36.492492 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.493232 kubelet[3661]: E0813 00:21:36.493036 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.493232 kubelet[3661]: W0813 00:21:36.493057 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.493232 kubelet[3661]: E0813 00:21:36.493101 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.498029 kubelet[3661]: E0813 00:21:36.495294 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.498029 kubelet[3661]: W0813 00:21:36.495340 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.498029 kubelet[3661]: E0813 00:21:36.495407 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.498029 kubelet[3661]: E0813 00:21:36.496068 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.498029 kubelet[3661]: W0813 00:21:36.496115 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.498029 kubelet[3661]: E0813 00:21:36.496163 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:21:36.498029 kubelet[3661]: E0813 00:21:36.496745 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.498029 kubelet[3661]: W0813 00:21:36.496768 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.498029 kubelet[3661]: E0813 00:21:36.496817 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.498029 kubelet[3661]: E0813 00:21:36.497334 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.498614 kubelet[3661]: W0813 00:21:36.497358 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.498614 kubelet[3661]: E0813 00:21:36.497384 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.498614 kubelet[3661]: E0813 00:21:36.498161 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.498614 kubelet[3661]: W0813 00:21:36.498190 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.498614 kubelet[3661]: E0813 00:21:36.498319 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.498884 kubelet[3661]: E0813 00:21:36.498810 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.498884 kubelet[3661]: W0813 00:21:36.498829 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.498884 kubelet[3661]: E0813 00:21:36.498852 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.502516 kubelet[3661]: E0813 00:21:36.499400 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.502516 kubelet[3661]: W0813 00:21:36.499438 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.502516 kubelet[3661]: E0813 00:21:36.499468 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:21:36.502516 kubelet[3661]: E0813 00:21:36.500174 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.502516 kubelet[3661]: W0813 00:21:36.500200 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.502516 kubelet[3661]: E0813 00:21:36.500227 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.502516 kubelet[3661]: E0813 00:21:36.501006 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.502516 kubelet[3661]: W0813 00:21:36.501035 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.502516 kubelet[3661]: E0813 00:21:36.501438 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.502516 kubelet[3661]: E0813 00:21:36.502142 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.503152 kubelet[3661]: W0813 00:21:36.502165 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.503152 kubelet[3661]: E0813 00:21:36.502192 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.503152 kubelet[3661]: E0813 00:21:36.502617 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.503152 kubelet[3661]: W0813 00:21:36.502635 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.503152 kubelet[3661]: E0813 00:21:36.502657 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.503152 kubelet[3661]: E0813 00:21:36.503151 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.503452 kubelet[3661]: W0813 00:21:36.503170 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.503452 kubelet[3661]: E0813 00:21:36.503192 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:21:36.508063 kubelet[3661]: E0813 00:21:36.503680 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.508063 kubelet[3661]: W0813 00:21:36.503714 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.508063 kubelet[3661]: E0813 00:21:36.503742 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.537384 kubelet[3661]: E0813 00:21:36.536777 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.537384 kubelet[3661]: W0813 00:21:36.536811 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.537384 kubelet[3661]: E0813 00:21:36.536859 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.539268 kubelet[3661]: E0813 00:21:36.539046 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.539268 kubelet[3661]: W0813 00:21:36.539079 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.539268 kubelet[3661]: E0813 00:21:36.539139 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.541901 kubelet[3661]: E0813 00:21:36.541441 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.541901 kubelet[3661]: W0813 00:21:36.541516 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.541901 kubelet[3661]: E0813 00:21:36.541587 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.543815 kubelet[3661]: E0813 00:21:36.543401 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.543815 kubelet[3661]: W0813 00:21:36.543439 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.543815 kubelet[3661]: E0813 00:21:36.543681 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:21:36.545468 kubelet[3661]: E0813 00:21:36.545160 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.545468 kubelet[3661]: W0813 00:21:36.545200 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.545468 kubelet[3661]: E0813 00:21:36.545276 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.547218 kubelet[3661]: E0813 00:21:36.546803 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.547218 kubelet[3661]: W0813 00:21:36.546843 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.548331 kubelet[3661]: E0813 00:21:36.547575 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.550785 kubelet[3661]: E0813 00:21:36.550733 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.550785 kubelet[3661]: W0813 00:21:36.550774 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.551111 kubelet[3661]: E0813 00:21:36.551039 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.554060 kubelet[3661]: E0813 00:21:36.553910 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.554060 kubelet[3661]: W0813 00:21:36.553947 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.555597 kubelet[3661]: E0813 00:21:36.554401 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.555972 kubelet[3661]: E0813 00:21:36.555905 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.555972 kubelet[3661]: W0813 00:21:36.555935 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.556531 kubelet[3661]: E0813 00:21:36.556310 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:21:36.557098 kubelet[3661]: E0813 00:21:36.556877 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.557098 kubelet[3661]: W0813 00:21:36.556906 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.559049 kubelet[3661]: E0813 00:21:36.558735 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.559049 kubelet[3661]: W0813 00:21:36.558769 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.560488 kubelet[3661]: E0813 00:21:36.560437 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.561502 kubelet[3661]: E0813 00:21:36.560951 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.561502 kubelet[3661]: W0813 00:21:36.561144 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.561502 kubelet[3661]: E0813 00:21:36.561172 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.564234 kubelet[3661]: E0813 00:21:36.563521 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.564234 kubelet[3661]: W0813 00:21:36.563571 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.564234 kubelet[3661]: E0813 00:21:36.563606 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.566275 kubelet[3661]: E0813 00:21:36.560972 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.567035 kubelet[3661]: E0813 00:21:36.566743 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.567035 kubelet[3661]: W0813 00:21:36.566790 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.567035 kubelet[3661]: E0813 00:21:36.566847 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:21:36.568546 kubelet[3661]: E0813 00:21:36.568093 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.568546 kubelet[3661]: W0813 00:21:36.568153 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.569300 kubelet[3661]: E0813 00:21:36.568236 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.570029 kubelet[3661]: E0813 00:21:36.569631 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.570029 kubelet[3661]: W0813 00:21:36.569710 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.570029 kubelet[3661]: E0813 00:21:36.569870 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.572908 kubelet[3661]: E0813 00:21:36.571854 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.572908 kubelet[3661]: W0813 00:21:36.571889 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.572908 kubelet[3661]: E0813 00:21:36.571923 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:21:36.574332 kubelet[3661]: E0813 00:21:36.574285 3661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:21:36.574791 kubelet[3661]: W0813 00:21:36.574585 3661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:21:36.574791 kubelet[3661]: E0813 00:21:36.574629 3661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:21:36.592740 kubelet[3661]: I0813 00:21:36.592034 3661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-dbf995694-qttn6" podStartSLOduration=2.657573802 podStartE2EDuration="4.591981588s" podCreationTimestamp="2025-08-13 00:21:32 +0000 UTC" firstStartedPulling="2025-08-13 00:21:33.810246358 +0000 UTC m=+30.818129866" lastFinishedPulling="2025-08-13 00:21:35.74465406 +0000 UTC m=+32.752537652" observedRunningTime="2025-08-13 00:21:36.538290239 +0000 UTC m=+33.546173771" watchObservedRunningTime="2025-08-13 00:21:36.591981588 +0000 UTC m=+33.599865096" Aug 13 00:21:36.865648 containerd[2138]: time="2025-08-13T00:21:36.862686337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:36.867150 containerd[2138]: time="2025-08-13T00:21:36.866714917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 13 00:21:36.868122 containerd[2138]: time="2025-08-13T00:21:36.868038157Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:36.874181 containerd[2138]: time="2025-08-13T00:21:36.874089541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:36.875971 containerd[2138]: time="2025-08-13T00:21:36.875776501Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.128961289s" Aug 13 00:21:36.875971 containerd[2138]: time="2025-08-13T00:21:36.875835457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 13 00:21:36.881051 containerd[2138]: time="2025-08-13T00:21:36.880852249Z" level=info msg="CreateContainer within sandbox \"df0338ace1b19c507366292ceed84c594aecd51dd43e25f7081e1a9ba55ecf53\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:21:36.904194 containerd[2138]: time="2025-08-13T00:21:36.904103041Z" level=info msg="CreateContainer within sandbox \"df0338ace1b19c507366292ceed84c594aecd51dd43e25f7081e1a9ba55ecf53\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9facfda295558d373389d55ec466b74053f87ea8758a73a594e4bcf1fd346d74\"" Aug 13 00:21:36.904972 containerd[2138]: time="2025-08-13T00:21:36.904921045Z" level=info msg="StartContainer for \"9facfda295558d373389d55ec466b74053f87ea8758a73a594e4bcf1fd346d74\"" Aug 13 00:21:37.012829 containerd[2138]: time="2025-08-13T00:21:37.012749434Z" level=info msg="StartContainer for \"9facfda295558d373389d55ec466b74053f87ea8758a73a594e4bcf1fd346d74\" returns successfully" Aug 13 00:21:37.085902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9facfda295558d373389d55ec466b74053f87ea8758a73a594e4bcf1fd346d74-rootfs.mount: Deactivated 
successfully. Aug 13 00:21:37.267763 kubelet[3661]: E0813 00:21:37.264626 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xwwn" podUID="5a310c1b-b07f-40c9-96ad-53e0942080e1" Aug 13 00:21:37.444645 containerd[2138]: time="2025-08-13T00:21:37.444522960Z" level=info msg="shim disconnected" id=9facfda295558d373389d55ec466b74053f87ea8758a73a594e4bcf1fd346d74 namespace=k8s.io Aug 13 00:21:37.444645 containerd[2138]: time="2025-08-13T00:21:37.444597408Z" level=warning msg="cleaning up after shim disconnected" id=9facfda295558d373389d55ec466b74053f87ea8758a73a594e4bcf1fd346d74 namespace=k8s.io Aug 13 00:21:37.444645 containerd[2138]: time="2025-08-13T00:21:37.444617592Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:37.495983 containerd[2138]: time="2025-08-13T00:21:37.495917232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:21:39.264765 kubelet[3661]: E0813 00:21:39.264712 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xwwn" podUID="5a310c1b-b07f-40c9-96ad-53e0942080e1" Aug 13 00:21:40.622216 containerd[2138]: time="2025-08-13T00:21:40.622125304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:40.623723 containerd[2138]: time="2025-08-13T00:21:40.623654824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 13 00:21:40.624448 containerd[2138]: time="2025-08-13T00:21:40.624362068Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:40.628635 containerd[2138]: time="2025-08-13T00:21:40.628478476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:40.630834 containerd[2138]: time="2025-08-13T00:21:40.630761968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 3.134773324s" Aug 13 00:21:40.632775 containerd[2138]: time="2025-08-13T00:21:40.631038172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 13 00:21:40.638651 containerd[2138]: time="2025-08-13T00:21:40.638586832Z" level=info msg="CreateContainer within sandbox \"df0338ace1b19c507366292ceed84c594aecd51dd43e25f7081e1a9ba55ecf53\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:21:40.662307 containerd[2138]: time="2025-08-13T00:21:40.662243968Z" level=info msg="CreateContainer within sandbox \"df0338ace1b19c507366292ceed84c594aecd51dd43e25f7081e1a9ba55ecf53\" 
for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"62f849d734e5e3bc179906fadedd15faadaea7c2ba5a5ac7e2733d6c00b7ab32\"" Aug 13 00:21:40.663052 containerd[2138]: time="2025-08-13T00:21:40.662922388Z" level=info msg="StartContainer for \"62f849d734e5e3bc179906fadedd15faadaea7c2ba5a5ac7e2733d6c00b7ab32\"" Aug 13 00:21:40.778650 containerd[2138]: time="2025-08-13T00:21:40.778581413Z" level=info msg="StartContainer for \"62f849d734e5e3bc179906fadedd15faadaea7c2ba5a5ac7e2733d6c00b7ab32\" returns successfully" Aug 13 00:21:41.265027 kubelet[3661]: E0813 00:21:41.263609 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4xwwn" podUID="5a310c1b-b07f-40c9-96ad-53e0942080e1" Aug 13 00:21:41.769668 containerd[2138]: time="2025-08-13T00:21:41.769587557Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:21:41.802342 kubelet[3661]: I0813 00:21:41.802296 3661 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:21:41.820352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62f849d734e5e3bc179906fadedd15faadaea7c2ba5a5ac7e2733d6c00b7ab32-rootfs.mount: Deactivated successfully. Aug 13 00:21:41.993548 kubelet[3661]: I0813 00:21:41.993481 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514-config\") pod \"goldmane-58fd7646b9-zs6cp\" (UID: \"2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514\") " pod="calico-system/goldmane-58fd7646b9-zs6cp" Aug 13 00:21:41.993548 kubelet[3661]: I0813 00:21:41.993550 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dn86\" (UniqueName: \"kubernetes.io/projected/24486bb1-f01d-44a5-bd10-5c11bbdaf03f-kube-api-access-4dn86\") pod \"calico-apiserver-5768c76bdb-97r48\" (UID: \"24486bb1-f01d-44a5-bd10-5c11bbdaf03f\") " pod="calico-apiserver/calico-apiserver-5768c76bdb-97r48" Aug 13 00:21:41.993809 kubelet[3661]: I0813 00:21:41.993600 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h8cp\" (UniqueName: \"kubernetes.io/projected/aa0e8597-87b1-46a6-b15d-ea2b84ced854-kube-api-access-7h8cp\") pod \"coredns-7c65d6cfc9-gggpz\" (UID: \"aa0e8597-87b1-46a6-b15d-ea2b84ced854\") " pod="kube-system/coredns-7c65d6cfc9-gggpz" Aug 13 00:21:41.993809 kubelet[3661]: I0813 00:21:41.993640 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bdee9ba-0652-4e5a-aa31-e915cc90ffb9-tigera-ca-bundle\") pod \"calico-kube-controllers-764ff7f5f7-s9b5s\" (UID: \"0bdee9ba-0652-4e5a-aa31-e915cc90ffb9\") " pod="calico-system/calico-kube-controllers-764ff7f5f7-s9b5s" Aug 13 00:21:41.993809 kubelet[3661]: I0813 00:21:41.993685 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tbww\" (UniqueName: \"kubernetes.io/projected/177eec0b-4e35-4df6-b815-1a477ed2acfc-kube-api-access-8tbww\") pod 
\"coredns-7c65d6cfc9-mx5t4\" (UID: \"177eec0b-4e35-4df6-b815-1a477ed2acfc\") " pod="kube-system/coredns-7c65d6cfc9-mx5t4" Aug 13 00:21:41.993809 kubelet[3661]: I0813 00:21:41.993722 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/399bf4b8-02c0-4439-a46e-13374f3c4aff-whisker-backend-key-pair\") pod \"whisker-84dd9bcd4f-x6wsv\" (UID: \"399bf4b8-02c0-4439-a46e-13374f3c4aff\") " pod="calico-system/whisker-84dd9bcd4f-x6wsv" Aug 13 00:21:41.993809 kubelet[3661]: I0813 00:21:41.993762 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/399bf4b8-02c0-4439-a46e-13374f3c4aff-whisker-ca-bundle\") pod \"whisker-84dd9bcd4f-x6wsv\" (UID: \"399bf4b8-02c0-4439-a46e-13374f3c4aff\") " pod="calico-system/whisker-84dd9bcd4f-x6wsv" Aug 13 00:21:41.994488 kubelet[3661]: I0813 00:21:41.993797 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-zs6cp\" (UID: \"2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514\") " pod="calico-system/goldmane-58fd7646b9-zs6cp" Aug 13 00:21:41.994488 kubelet[3661]: I0813 00:21:41.993838 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3cf1659a-34f6-4f08-a0e0-5a806126f297-calico-apiserver-certs\") pod \"calico-apiserver-5768c76bdb-qb8dv\" (UID: \"3cf1659a-34f6-4f08-a0e0-5a806126f297\") " pod="calico-apiserver/calico-apiserver-5768c76bdb-qb8dv" Aug 13 00:21:41.994488 kubelet[3661]: I0813 00:21:41.993879 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/24486bb1-f01d-44a5-bd10-5c11bbdaf03f-calico-apiserver-certs\") pod \"calico-apiserver-5768c76bdb-97r48\" (UID: \"24486bb1-f01d-44a5-bd10-5c11bbdaf03f\") " pod="calico-apiserver/calico-apiserver-5768c76bdb-97r48" Aug 13 00:21:41.994488 kubelet[3661]: I0813 00:21:41.993925 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmxs9\" (UniqueName: \"kubernetes.io/projected/0bdee9ba-0652-4e5a-aa31-e915cc90ffb9-kube-api-access-xmxs9\") pod \"calico-kube-controllers-764ff7f5f7-s9b5s\" (UID: \"0bdee9ba-0652-4e5a-aa31-e915cc90ffb9\") " pod="calico-system/calico-kube-controllers-764ff7f5f7-s9b5s" Aug 13 00:21:41.994488 kubelet[3661]: I0813 00:21:41.993962 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwrlb\" (UniqueName: \"kubernetes.io/projected/399bf4b8-02c0-4439-a46e-13374f3c4aff-kube-api-access-kwrlb\") pod \"whisker-84dd9bcd4f-x6wsv\" (UID: \"399bf4b8-02c0-4439-a46e-13374f3c4aff\") " pod="calico-system/whisker-84dd9bcd4f-x6wsv" Aug 13 00:21:41.994791 kubelet[3661]: I0813 00:21:41.994025 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514-goldmane-key-pair\") pod \"goldmane-58fd7646b9-zs6cp\" (UID: \"2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514\") " pod="calico-system/goldmane-58fd7646b9-zs6cp" Aug 13 00:21:41.994791 kubelet[3661]: I0813 
00:21:41.994065 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa0e8597-87b1-46a6-b15d-ea2b84ced854-config-volume\") pod \"coredns-7c65d6cfc9-gggpz\" (UID: \"aa0e8597-87b1-46a6-b15d-ea2b84ced854\") " pod="kube-system/coredns-7c65d6cfc9-gggpz" Aug 13 00:21:41.994791 kubelet[3661]: I0813 00:21:41.994120 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/177eec0b-4e35-4df6-b815-1a477ed2acfc-config-volume\") pod \"coredns-7c65d6cfc9-mx5t4\" (UID: \"177eec0b-4e35-4df6-b815-1a477ed2acfc\") " pod="kube-system/coredns-7c65d6cfc9-mx5t4" Aug 13 00:21:41.994791 kubelet[3661]: I0813 00:21:41.994158 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsw49\" (UniqueName: \"kubernetes.io/projected/3cf1659a-34f6-4f08-a0e0-5a806126f297-kube-api-access-hsw49\") pod \"calico-apiserver-5768c76bdb-qb8dv\" (UID: \"3cf1659a-34f6-4f08-a0e0-5a806126f297\") " pod="calico-apiserver/calico-apiserver-5768c76bdb-qb8dv" Aug 13 00:21:41.994791 kubelet[3661]: I0813 00:21:41.994198 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcn75\" (UniqueName: \"kubernetes.io/projected/2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514-kube-api-access-qcn75\") pod \"goldmane-58fd7646b9-zs6cp\" (UID: \"2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514\") " pod="calico-system/goldmane-58fd7646b9-zs6cp" Aug 13 00:21:42.256465 containerd[2138]: time="2025-08-13T00:21:42.255975268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5t4,Uid:177eec0b-4e35-4df6-b815-1a477ed2acfc,Namespace:kube-system,Attempt:0,}" Aug 13 00:21:42.256465 containerd[2138]: time="2025-08-13T00:21:42.256149136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764ff7f5f7-s9b5s,Uid:0bdee9ba-0652-4e5a-aa31-e915cc90ffb9,Namespace:calico-system,Attempt:0,}" Aug 13 00:21:42.262322 containerd[2138]: time="2025-08-13T00:21:42.261964876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84dd9bcd4f-x6wsv,Uid:399bf4b8-02c0-4439-a46e-13374f3c4aff,Namespace:calico-system,Attempt:0,}" Aug 13 00:21:42.269700 containerd[2138]: time="2025-08-13T00:21:42.269629048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5768c76bdb-97r48,Uid:24486bb1-f01d-44a5-bd10-5c11bbdaf03f,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:21:42.280377 containerd[2138]: time="2025-08-13T00:21:42.280286884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-zs6cp,Uid:2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514,Namespace:calico-system,Attempt:0,}" Aug 13 00:21:42.284357 containerd[2138]: time="2025-08-13T00:21:42.284294644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5768c76bdb-qb8dv,Uid:3cf1659a-34f6-4f08-a0e0-5a806126f297,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:21:42.418792 containerd[2138]: time="2025-08-13T00:21:42.418247549Z" level=info msg="shim disconnected" id=62f849d734e5e3bc179906fadedd15faadaea7c2ba5a5ac7e2733d6c00b7ab32 namespace=k8s.io Aug 13 00:21:42.418792 containerd[2138]: time="2025-08-13T00:21:42.418324697Z" level=warning msg="cleaning up after shim disconnected" id=62f849d734e5e3bc179906fadedd15faadaea7c2ba5a5ac7e2733d6c00b7ab32 namespace=k8s.io Aug 13 00:21:42.418792 containerd[2138]: 
time="2025-08-13T00:21:42.418346513Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:42.442239 containerd[2138]: time="2025-08-13T00:21:42.442162757Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:21:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 00:21:42.497875 containerd[2138]: time="2025-08-13T00:21:42.497249501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gggpz,Uid:aa0e8597-87b1-46a6-b15d-ea2b84ced854,Namespace:kube-system,Attempt:0,}" Aug 13 00:21:42.537943 containerd[2138]: time="2025-08-13T00:21:42.537768713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:21:42.947293 containerd[2138]: time="2025-08-13T00:21:42.947228719Z" level=error msg="Failed to destroy network for sandbox \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:21:42.955078 containerd[2138]: time="2025-08-13T00:21:42.953414731Z" level=error msg="encountered an error cleaning up failed sandbox \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:21:42.955078 containerd[2138]: time="2025-08-13T00:21:42.953528971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5768c76bdb-97r48,Uid:24486bb1-f01d-44a5-bd10-5c11bbdaf03f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:21:42.956591 kubelet[3661]: E0813 00:21:42.953816 3661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:21:42.956591 kubelet[3661]: E0813 00:21:42.953906 3661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5768c76bdb-97r48" Aug 13 00:21:42.956591 kubelet[3661]: E0813 00:21:42.953939 3661 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5768c76bdb-97r48" Aug 13 00:21:42.957368 kubelet[3661]: E0813 00:21:42.954039 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5768c76bdb-97r48_calico-apiserver(24486bb1-f01d-44a5-bd10-5c11bbdaf03f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5768c76bdb-97r48_calico-apiserver(24486bb1-f01d-44a5-bd10-5c11bbdaf03f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5768c76bdb-97r48" podUID="24486bb1-f01d-44a5-bd10-5c11bbdaf03f" Aug 13 00:21:42.956735 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3-shm.mount: Deactivated successfully. Aug 13 00:21:42.983719 containerd[2138]: time="2025-08-13T00:21:42.983651108Z" level=error msg="Failed to destroy network for sandbox \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:21:42.991648 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa-shm.mount: Deactivated successfully. Aug 13 00:21:42.994137 containerd[2138]: time="2025-08-13T00:21:42.992976872Z" level=error msg="encountered an error cleaning up failed sandbox \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:21:42.994386 containerd[2138]: time="2025-08-13T00:21:42.994340780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84dd9bcd4f-x6wsv,Uid:399bf4b8-02c0-4439-a46e-13374f3c4aff,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:21:42.995400 kubelet[3661]: E0813 00:21:42.994866 3661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:21:42.995400 kubelet[3661]: E0813 00:21:42.994947 3661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-84dd9bcd4f-x6wsv" Aug 13 00:21:42.995400 kubelet[3661]: E0813 00:21:42.994980 3661 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84dd9bcd4f-x6wsv" Aug 13 00:21:42.995683 kubelet[3661]: E0813 00:21:42.995068 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-84dd9bcd4f-x6wsv_calico-system(399bf4b8-02c0-4439-a46e-13374f3c4aff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-84dd9bcd4f-x6wsv_calico-system(399bf4b8-02c0-4439-a46e-13374f3c4aff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84dd9bcd4f-x6wsv" podUID="399bf4b8-02c0-4439-a46e-13374f3c4aff" Aug 13 00:21:42.998868 containerd[2138]: time="2025-08-13T00:21:42.998522300Z" level=error msg="Failed to destroy network for sandbox \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:21:43.007084 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2-shm.mount: Deactivated successfully. 
Aug 13 00:21:43.007528 containerd[2138]: time="2025-08-13T00:21:43.007121068Z" level=error msg="encountered an error cleaning up failed sandbox \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.010160 containerd[2138]: time="2025-08-13T00:21:43.009657532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5t4,Uid:177eec0b-4e35-4df6-b815-1a477ed2acfc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.013539 kubelet[3661]: E0813 00:21:43.013021 3661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.013539 kubelet[3661]: E0813 00:21:43.013099 3661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5t4"
Aug 13 00:21:43.013539 kubelet[3661]: E0813 00:21:43.013132 3661 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5t4"
Aug 13 00:21:43.013841 kubelet[3661]: E0813 00:21:43.013193 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mx5t4_kube-system(177eec0b-4e35-4df6-b815-1a477ed2acfc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mx5t4_kube-system(177eec0b-4e35-4df6-b815-1a477ed2acfc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mx5t4" podUID="177eec0b-4e35-4df6-b815-1a477ed2acfc"
Aug 13 00:21:43.026980 containerd[2138]: time="2025-08-13T00:21:43.026562544Z" level=error msg="Failed to destroy network for sandbox \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.031467 containerd[2138]: time="2025-08-13T00:21:43.031325812Z" level=error msg="Failed to destroy network for sandbox \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.033559 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79-shm.mount: Deactivated successfully.
Aug 13 00:21:43.039178 containerd[2138]: time="2025-08-13T00:21:43.033773116Z" level=error msg="encountered an error cleaning up failed sandbox \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.039475 containerd[2138]: time="2025-08-13T00:21:43.039424588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764ff7f5f7-s9b5s,Uid:0bdee9ba-0652-4e5a-aa31-e915cc90ffb9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.039784 containerd[2138]: time="2025-08-13T00:21:43.036279088Z" level=error msg="encountered an error cleaning up failed sandbox \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.039975 containerd[2138]: time="2025-08-13T00:21:43.039916624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-zs6cp,Uid:2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.040465 kubelet[3661]: E0813 00:21:43.040393 3661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.040632 kubelet[3661]: E0813 00:21:43.040477 3661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-764ff7f5f7-s9b5s"
Aug 13 00:21:43.040632 kubelet[3661]: E0813 00:21:43.040516 3661 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-764ff7f5f7-s9b5s"
Aug 13 00:21:43.040632 kubelet[3661]: E0813 00:21:43.040591 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-764ff7f5f7-s9b5s_calico-system(0bdee9ba-0652-4e5a-aa31-e915cc90ffb9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-764ff7f5f7-s9b5s_calico-system(0bdee9ba-0652-4e5a-aa31-e915cc90ffb9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-764ff7f5f7-s9b5s" podUID="0bdee9ba-0652-4e5a-aa31-e915cc90ffb9"
Aug 13 00:21:43.041829 kubelet[3661]: E0813 00:21:43.041568 3661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.041829 kubelet[3661]: E0813 00:21:43.041650 3661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-zs6cp"
Aug 13 00:21:43.041829 kubelet[3661]: E0813 00:21:43.041687 3661 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-zs6cp"
Aug 13 00:21:43.042217 kubelet[3661]: E0813 00:21:43.041750 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-zs6cp_calico-system(2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-zs6cp_calico-system(2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-zs6cp" podUID="2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514"
Aug 13 00:21:43.057467 containerd[2138]: time="2025-08-13T00:21:43.057393544Z" level=error msg="Failed to destroy network for sandbox \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.058383 containerd[2138]: time="2025-08-13T00:21:43.058325248Z" level=error msg="encountered an error cleaning up failed sandbox \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.059473 containerd[2138]: time="2025-08-13T00:21:43.059420596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gggpz,Uid:aa0e8597-87b1-46a6-b15d-ea2b84ced854,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.060468 kubelet[3661]: E0813 00:21:43.060407 3661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.060638 kubelet[3661]: E0813 00:21:43.060490 3661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gggpz"
Aug 13 00:21:43.060638 kubelet[3661]: E0813 00:21:43.060529 3661 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gggpz"
Aug 13 00:21:43.060638 kubelet[3661]: E0813 00:21:43.060602 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gggpz_kube-system(aa0e8597-87b1-46a6-b15d-ea2b84ced854)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gggpz_kube-system(aa0e8597-87b1-46a6-b15d-ea2b84ced854)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gggpz" podUID="aa0e8597-87b1-46a6-b15d-ea2b84ced854"
Aug 13 00:21:43.065475 containerd[2138]: time="2025-08-13T00:21:43.065386744Z" level=error msg="Failed to destroy network for sandbox \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.066156 containerd[2138]: time="2025-08-13T00:21:43.066084544Z" level=error msg="encountered an error cleaning up failed sandbox \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.066608 containerd[2138]: time="2025-08-13T00:21:43.066189316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5768c76bdb-qb8dv,Uid:3cf1659a-34f6-4f08-a0e0-5a806126f297,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.066783 kubelet[3661]: E0813 00:21:43.066686 3661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.067426 kubelet[3661]: E0813 00:21:43.066850 3661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5768c76bdb-qb8dv"
Aug 13 00:21:43.067426 kubelet[3661]: E0813 00:21:43.066923 3661 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5768c76bdb-qb8dv"
Aug 13 00:21:43.067426 kubelet[3661]: E0813 00:21:43.067142 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5768c76bdb-qb8dv_calico-apiserver(3cf1659a-34f6-4f08-a0e0-5a806126f297)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5768c76bdb-qb8dv_calico-apiserver(3cf1659a-34f6-4f08-a0e0-5a806126f297)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5768c76bdb-qb8dv" podUID="3cf1659a-34f6-4f08-a0e0-5a806126f297"
Aug 13 00:21:43.272033 containerd[2138]: time="2025-08-13T00:21:43.271464821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4xwwn,Uid:5a310c1b-b07f-40c9-96ad-53e0942080e1,Namespace:calico-system,Attempt:0,}"
Aug 13 00:21:43.386152 containerd[2138]: time="2025-08-13T00:21:43.386015490Z" level=error msg="Failed to destroy network for sandbox \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.386748 containerd[2138]: time="2025-08-13T00:21:43.386667666Z" level=error msg="encountered an error cleaning up failed sandbox \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.386839 containerd[2138]: time="2025-08-13T00:21:43.386787126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4xwwn,Uid:5a310c1b-b07f-40c9-96ad-53e0942080e1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.387611 kubelet[3661]: E0813 00:21:43.387092 3661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.387611 kubelet[3661]: E0813 00:21:43.387175 3661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4xwwn"
Aug 13 00:21:43.387611 kubelet[3661]: E0813 00:21:43.387219 3661 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4xwwn"
Aug 13 00:21:43.387860 kubelet[3661]: E0813 00:21:43.387285 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4xwwn_calico-system(5a310c1b-b07f-40c9-96ad-53e0942080e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4xwwn_calico-system(5a310c1b-b07f-40c9-96ad-53e0942080e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4xwwn" podUID="5a310c1b-b07f-40c9-96ad-53e0942080e1"
Aug 13 00:21:43.538522 kubelet[3661]: I0813 00:21:43.537883 3661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa"
Aug 13 00:21:43.540446 containerd[2138]: time="2025-08-13T00:21:43.540369126Z" level=info msg="StopPodSandbox for \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\""
Aug 13 00:21:43.541039 containerd[2138]: time="2025-08-13T00:21:43.540747738Z" level=info msg="Ensure that sandbox 37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa in task-service has been cleanup successfully"
Aug 13 00:21:43.551263 kubelet[3661]: I0813 00:21:43.551144 3661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2"
Aug 13 00:21:43.555190 containerd[2138]: time="2025-08-13T00:21:43.554885538Z" level=info msg="StopPodSandbox for \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\""
Aug 13 00:21:43.555342 containerd[2138]: time="2025-08-13T00:21:43.555251958Z" level=info msg="Ensure that sandbox d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2 in task-service has been cleanup successfully"
Aug 13 00:21:43.559562 kubelet[3661]: I0813 00:21:43.559247 3661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4"
Aug 13 00:21:43.562355 containerd[2138]: time="2025-08-13T00:21:43.560476914Z" level=info msg="StopPodSandbox for \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\""
Aug 13 00:21:43.562355 containerd[2138]: time="2025-08-13T00:21:43.560775294Z" level=info msg="Ensure that sandbox 31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4 in task-service has been cleanup successfully"
Aug 13 00:21:43.569103 kubelet[3661]: I0813 00:21:43.568170 3661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79"
Aug 13 00:21:43.574839 containerd[2138]: time="2025-08-13T00:21:43.574504698Z" level=info msg="StopPodSandbox for \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\""
Aug 13 00:21:43.574839 containerd[2138]: time="2025-08-13T00:21:43.574816146Z" level=info msg="Ensure that sandbox e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79 in task-service has been cleanup successfully"
Aug 13 00:21:43.581817 kubelet[3661]: I0813 00:21:43.581622 3661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3"
Aug 13 00:21:43.588796 containerd[2138]: time="2025-08-13T00:21:43.588727339Z" level=info msg="StopPodSandbox for \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\""
Aug 13 00:21:43.589102 containerd[2138]: time="2025-08-13T00:21:43.589041583Z" level=info msg="Ensure that sandbox 9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3 in task-service has been cleanup successfully"
Aug 13 00:21:43.591450 kubelet[3661]: I0813 00:21:43.591287 3661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8"
Aug 13 00:21:43.596932 containerd[2138]: time="2025-08-13T00:21:43.596791963Z" level=info msg="StopPodSandbox for \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\""
Aug 13 00:21:43.598084 containerd[2138]: time="2025-08-13T00:21:43.597919831Z" level=info msg="Ensure that sandbox 2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8 in task-service has been cleanup successfully"
Aug 13 00:21:43.608504 kubelet[3661]: I0813 00:21:43.606533 3661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c"
Aug 13 00:21:43.610776 containerd[2138]: time="2025-08-13T00:21:43.610699783Z" level=info msg="StopPodSandbox for \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\""
Aug 13 00:21:43.611497 containerd[2138]: time="2025-08-13T00:21:43.611079835Z" level=info msg="Ensure that sandbox 1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c in task-service has been cleanup successfully"
Aug 13 00:21:43.629782 kubelet[3661]: I0813 00:21:43.629733 3661 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283"
Aug 13 00:21:43.632030 containerd[2138]: time="2025-08-13T00:21:43.631682731Z" level=info msg="StopPodSandbox for \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\""
Aug 13 00:21:43.632178 containerd[2138]: time="2025-08-13T00:21:43.632135431Z" level=info msg="Ensure that sandbox eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283 in task-service has been cleanup successfully"
Aug 13 00:21:43.738413 containerd[2138]: time="2025-08-13T00:21:43.738200107Z" level=error msg="StopPodSandbox for \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\" failed" error="failed to destroy network for sandbox \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.740436 kubelet[3661]: E0813 00:21:43.739613 3661 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4"
Aug 13 00:21:43.740436 kubelet[3661]: E0813 00:21:43.739736 3661 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4"}
Aug 13 00:21:43.740436 kubelet[3661]: E0813 00:21:43.739821 3661 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a310c1b-b07f-40c9-96ad-53e0942080e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:21:43.740436 kubelet[3661]: E0813 00:21:43.739857 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a310c1b-b07f-40c9-96ad-53e0942080e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4xwwn" podUID="5a310c1b-b07f-40c9-96ad-53e0942080e1"
Aug 13 00:21:43.770782 containerd[2138]: time="2025-08-13T00:21:43.770696239Z" level=error msg="StopPodSandbox for \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\" failed" error="failed to destroy network for sandbox \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.771739 kubelet[3661]: E0813 00:21:43.771041 3661 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79"
Aug 13 00:21:43.771739 kubelet[3661]: E0813 00:21:43.771108 3661 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79"}
Aug 13 00:21:43.771739 kubelet[3661]: E0813 00:21:43.771165 3661 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0bdee9ba-0652-4e5a-aa31-e915cc90ffb9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:21:43.771739 kubelet[3661]: E0813 00:21:43.771207 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0bdee9ba-0652-4e5a-aa31-e915cc90ffb9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-764ff7f5f7-s9b5s" podUID="0bdee9ba-0652-4e5a-aa31-e915cc90ffb9"
Aug 13 00:21:43.779642 containerd[2138]: time="2025-08-13T00:21:43.779562187Z" level=error msg="StopPodSandbox for \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\" failed" error="failed to destroy network for sandbox \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.780402 kubelet[3661]: E0813 00:21:43.779894 3661 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa"
Aug 13 00:21:43.780402 kubelet[3661]: E0813 00:21:43.779962 3661 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa"}
Aug 13 00:21:43.780402 kubelet[3661]: E0813 00:21:43.780102 3661 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"399bf4b8-02c0-4439-a46e-13374f3c4aff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:21:43.780402 kubelet[3661]: E0813 00:21:43.780166 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"399bf4b8-02c0-4439-a46e-13374f3c4aff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84dd9bcd4f-x6wsv" podUID="399bf4b8-02c0-4439-a46e-13374f3c4aff"
Aug 13 00:21:43.804347 containerd[2138]: time="2025-08-13T00:21:43.803936624Z" level=error msg="StopPodSandbox for \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\" failed" error="failed to destroy network for sandbox \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.804488 kubelet[3661]: E0813 00:21:43.804287 3661 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c"
Aug 13 00:21:43.804488 kubelet[3661]: E0813 00:21:43.804366 3661 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c"}
Aug 13 00:21:43.804488 kubelet[3661]: E0813 00:21:43.804428 3661 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa0e8597-87b1-46a6-b15d-ea2b84ced854\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:21:43.804488 kubelet[3661]: E0813 00:21:43.804470 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa0e8597-87b1-46a6-b15d-ea2b84ced854\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gggpz" podUID="aa0e8597-87b1-46a6-b15d-ea2b84ced854"
Aug 13 00:21:43.818848 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c-shm.mount: Deactivated successfully.
Aug 13 00:21:43.820168 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283-shm.mount: Deactivated successfully.
Aug 13 00:21:43.820798 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8-shm.mount: Deactivated successfully.
Aug 13 00:21:43.831275 containerd[2138]: time="2025-08-13T00:21:43.831038936Z" level=error msg="StopPodSandbox for \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\" failed" error="failed to destroy network for sandbox \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.831463 kubelet[3661]: E0813 00:21:43.831356 3661 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2"
Aug 13 00:21:43.831463 kubelet[3661]: E0813 00:21:43.831423 3661 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2"}
Aug 13 00:21:43.831609 kubelet[3661]: E0813 00:21:43.831474 3661 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"177eec0b-4e35-4df6-b815-1a477ed2acfc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:21:43.831609 kubelet[3661]: E0813 00:21:43.831512 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"177eec0b-4e35-4df6-b815-1a477ed2acfc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mx5t4" podUID="177eec0b-4e35-4df6-b815-1a477ed2acfc"
Aug 13 00:21:43.847314 containerd[2138]: time="2025-08-13T00:21:43.846815696Z" level=error msg="StopPodSandbox for \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\" failed" error="failed to destroy network for sandbox \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.847682 kubelet[3661]: E0813 00:21:43.847192 3661 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8"
Aug 13 00:21:43.847682 kubelet[3661]: E0813 00:21:43.847260 3661 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8"}
Aug 13 00:21:43.847682 kubelet[3661]: E0813 00:21:43.847313 3661 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3cf1659a-34f6-4f08-a0e0-5a806126f297\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:21:43.847682 kubelet[3661]: E0813 00:21:43.847351 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3cf1659a-34f6-4f08-a0e0-5a806126f297\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5768c76bdb-qb8dv" podUID="3cf1659a-34f6-4f08-a0e0-5a806126f297"
Aug 13 00:21:43.859342 containerd[2138]: time="2025-08-13T00:21:43.859221128Z" level=error msg="StopPodSandbox for \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\" failed" error="failed to destroy network for sandbox \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.860257 kubelet[3661]: E0813 00:21:43.859544 3661 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3"
Aug 13 00:21:43.860257 kubelet[3661]: E0813 00:21:43.859612 3661 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3"}
Aug 13 00:21:43.860257 kubelet[3661]: E0813 00:21:43.859674 3661 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24486bb1-f01d-44a5-bd10-5c11bbdaf03f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:21:43.860257 kubelet[3661]: E0813 00:21:43.859715 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24486bb1-f01d-44a5-bd10-5c11bbdaf03f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5768c76bdb-97r48" podUID="24486bb1-f01d-44a5-bd10-5c11bbdaf03f"
Aug 13 00:21:43.865520 containerd[2138]: time="2025-08-13T00:21:43.865417772Z" level=error msg="StopPodSandbox for \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\" failed" error="failed to destroy network for sandbox \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 00:21:43.865848 kubelet[3661]: E0813 00:21:43.865738 3661 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283"
Aug 13 00:21:43.865955 kubelet[3661]: E0813 00:21:43.865850 3661 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283"}
Aug 13 00:21:43.865955 kubelet[3661]: E0813 00:21:43.865903 3661 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 13 00:21:43.866180 kubelet[3661]: E0813 00:21:43.865942 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-zs6cp" podUID="2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514"
Aug 13 00:21:48.749248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702257757.mount: Deactivated successfully.
Aug 13 00:21:48.821670 containerd[2138]: time="2025-08-13T00:21:48.820864285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:21:48.823029 containerd[2138]: time="2025-08-13T00:21:48.822732121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909"
Aug 13 00:21:48.825183 containerd[2138]: time="2025-08-13T00:21:48.825108577Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:21:48.829905 containerd[2138]: time="2025-08-13T00:21:48.829802053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:21:48.832184 containerd[2138]: time="2025-08-13T00:21:48.831202405Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 6.293359832s"
Aug 13 00:21:48.832184 containerd[2138]: time="2025-08-13T00:21:48.831266797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\""
Aug 13 00:21:48.874123 containerd[2138]: time="2025-08-13T00:21:48.874050985Z" level=info msg="CreateContainer within sandbox \"df0338ace1b19c507366292ceed84c594aecd51dd43e25f7081e1a9ba55ecf53\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Aug 13 00:21:48.916714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2531267680.mount: Deactivated successfully.
Aug 13 00:21:48.920808 containerd[2138]: time="2025-08-13T00:21:48.920723101Z" level=info msg="CreateContainer within sandbox \"df0338ace1b19c507366292ceed84c594aecd51dd43e25f7081e1a9ba55ecf53\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d593548298b11159b8bc961e93a1704c3f71f75472a3476e196c163703f5f5be\""
Aug 13 00:21:48.922163 containerd[2138]: time="2025-08-13T00:21:48.922058173Z" level=info msg="StartContainer for \"d593548298b11159b8bc961e93a1704c3f71f75472a3476e196c163703f5f5be\""
Aug 13 00:21:49.055240 containerd[2138]: time="2025-08-13T00:21:49.055066762Z" level=info msg="StartContainer for \"d593548298b11159b8bc961e93a1704c3f71f75472a3476e196c163703f5f5be\" returns successfully"
Aug 13 00:21:49.401702 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Aug 13 00:21:49.401882 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Aug 13 00:21:49.587042 containerd[2138]: time="2025-08-13T00:21:49.583036296Z" level=info msg="StopPodSandbox for \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\""
Aug 13 00:21:49.712177 kubelet[3661]: I0813 00:21:49.710806 3661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kvv5p" podStartSLOduration=1.812070575 podStartE2EDuration="16.710661937s" podCreationTimestamp="2025-08-13 00:21:33 +0000 UTC" firstStartedPulling="2025-08-13 00:21:33.933886331 +0000 UTC m=+30.941769839" lastFinishedPulling="2025-08-13 00:21:48.832477693 +0000 UTC m=+45.840361201" observedRunningTime="2025-08-13 00:21:49.709948153 +0000 UTC m=+46.717831697" watchObservedRunningTime="2025-08-13 00:21:49.710661937 +0000 UTC m=+46.718545481"
Aug 13 00:21:50.121399 containerd[2138]: 2025-08-13 00:21:49.891 [INFO][4865] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa"
Aug 13 00:21:50.121399 containerd[2138]: 2025-08-13 00:21:49.892 [INFO][4865] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" iface="eth0" netns="/var/run/netns/cni-89a21788-689a-2b8b-77b5-bae7c6d0f55b"
Aug 13 00:21:50.121399 containerd[2138]: 2025-08-13 00:21:49.894 [INFO][4865] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" iface="eth0" netns="/var/run/netns/cni-89a21788-689a-2b8b-77b5-bae7c6d0f55b"
Aug 13 00:21:50.121399 containerd[2138]: 2025-08-13 00:21:49.898 [INFO][4865] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" iface="eth0" netns="/var/run/netns/cni-89a21788-689a-2b8b-77b5-bae7c6d0f55b"
Aug 13 00:21:50.121399 containerd[2138]: 2025-08-13 00:21:49.898 [INFO][4865] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa"
Aug 13 00:21:50.121399 containerd[2138]: 2025-08-13 00:21:49.898 [INFO][4865] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa"
Aug 13 00:21:50.121399 containerd[2138]: 2025-08-13 00:21:50.066 [INFO][4898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" HandleID="k8s-pod-network.37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Workload="ip--172--31--31--162-k8s-whisker--84dd9bcd4f--x6wsv-eth0"
Aug 13 00:21:50.121399 containerd[2138]: 2025-08-13 00:21:50.067 [INFO][4898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:21:50.121399 containerd[2138]: 2025-08-13 00:21:50.067 [INFO][4898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:21:50.121399 containerd[2138]: 2025-08-13 00:21:50.093 [WARNING][4898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" HandleID="k8s-pod-network.37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Workload="ip--172--31--31--162-k8s-whisker--84dd9bcd4f--x6wsv-eth0"
Aug 13 00:21:50.121399 containerd[2138]: 2025-08-13 00:21:50.093 [INFO][4898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" HandleID="k8s-pod-network.37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Workload="ip--172--31--31--162-k8s-whisker--84dd9bcd4f--x6wsv-eth0"
Aug 13 00:21:50.121399 containerd[2138]: 2025-08-13 00:21:50.096 [INFO][4898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 13 00:21:50.121399 containerd[2138]: 2025-08-13 00:21:50.115 [INFO][4865] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa"
Aug 13 00:21:50.126296 containerd[2138]: time="2025-08-13T00:21:50.126146387Z" level=info msg="TearDown network for sandbox \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\" successfully"
Aug 13 00:21:50.126296 containerd[2138]: time="2025-08-13T00:21:50.126196859Z" level=info msg="StopPodSandbox for \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\" returns successfully"
Aug 13 00:21:50.140698 systemd[1]: run-netns-cni\x2d89a21788\x2d689a\x2d2b8b\x2d77b5\x2dbae7c6d0f55b.mount: Deactivated successfully.
Aug 13 00:21:50.269683 kubelet[3661]: I0813 00:21:50.268028 3661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwrlb\" (UniqueName: \"kubernetes.io/projected/399bf4b8-02c0-4439-a46e-13374f3c4aff-kube-api-access-kwrlb\") pod \"399bf4b8-02c0-4439-a46e-13374f3c4aff\" (UID: \"399bf4b8-02c0-4439-a46e-13374f3c4aff\") "
Aug 13 00:21:50.269683 kubelet[3661]: I0813 00:21:50.268102 3661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/399bf4b8-02c0-4439-a46e-13374f3c4aff-whisker-backend-key-pair\") pod \"399bf4b8-02c0-4439-a46e-13374f3c4aff\" (UID: \"399bf4b8-02c0-4439-a46e-13374f3c4aff\") "
Aug 13 00:21:50.269683 kubelet[3661]: I0813 00:21:50.268147 3661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/399bf4b8-02c0-4439-a46e-13374f3c4aff-whisker-ca-bundle\") pod \"399bf4b8-02c0-4439-a46e-13374f3c4aff\" (UID: \"399bf4b8-02c0-4439-a46e-13374f3c4aff\") "
Aug 13 00:21:50.283540 kubelet[3661]: I0813 00:21:50.282220 3661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/399bf4b8-02c0-4439-a46e-13374f3c4aff-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "399bf4b8-02c0-4439-a46e-13374f3c4aff" (UID: "399bf4b8-02c0-4439-a46e-13374f3c4aff"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 00:21:50.286604 systemd[1]: var-lib-kubelet-pods-399bf4b8\x2d02c0\x2d4439\x2da46e\x2d13374f3c4aff-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkwrlb.mount: Deactivated successfully.
Aug 13 00:21:50.291925 kubelet[3661]: I0813 00:21:50.290471 3661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/399bf4b8-02c0-4439-a46e-13374f3c4aff-kube-api-access-kwrlb" (OuterVolumeSpecName: "kube-api-access-kwrlb") pod "399bf4b8-02c0-4439-a46e-13374f3c4aff" (UID: "399bf4b8-02c0-4439-a46e-13374f3c4aff"). InnerVolumeSpecName "kube-api-access-kwrlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:21:50.292626 kubelet[3661]: I0813 00:21:50.292527 3661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/399bf4b8-02c0-4439-a46e-13374f3c4aff-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "399bf4b8-02c0-4439-a46e-13374f3c4aff" (UID: "399bf4b8-02c0-4439-a46e-13374f3c4aff"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:21:50.301749 systemd[1]: var-lib-kubelet-pods-399bf4b8\x2d02c0\x2d4439\x2da46e\x2d13374f3c4aff-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:21:50.370254 kubelet[3661]: I0813 00:21:50.370094 3661 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/399bf4b8-02c0-4439-a46e-13374f3c4aff-whisker-backend-key-pair\") on node \"ip-172-31-31-162\" DevicePath \"\"" Aug 13 00:21:50.370254 kubelet[3661]: I0813 00:21:50.370203 3661 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/399bf4b8-02c0-4439-a46e-13374f3c4aff-whisker-ca-bundle\") on node \"ip-172-31-31-162\" DevicePath \"\"" Aug 13 00:21:50.370554 kubelet[3661]: I0813 00:21:50.370227 3661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwrlb\" (UniqueName: \"kubernetes.io/projected/399bf4b8-02c0-4439-a46e-13374f3c4aff-kube-api-access-kwrlb\") on node \"ip-172-31-31-162\" DevicePath \"\"" Aug 13 00:21:50.875566 kubelet[3661]: I0813 00:21:50.875268 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/743c0f4a-7a07-477a-a798-b9d12ac4b333-whisker-backend-key-pair\") pod \"whisker-5f574c55d7-tswpg\" (UID: \"743c0f4a-7a07-477a-a798-b9d12ac4b333\") " pod="calico-system/whisker-5f574c55d7-tswpg" Aug 13 00:21:50.875566 kubelet[3661]: I0813 00:21:50.875370 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/743c0f4a-7a07-477a-a798-b9d12ac4b333-whisker-ca-bundle\") pod \"whisker-5f574c55d7-tswpg\" (UID: \"743c0f4a-7a07-477a-a798-b9d12ac4b333\") " pod="calico-system/whisker-5f574c55d7-tswpg" Aug 13 00:21:50.875566 kubelet[3661]: I0813 00:21:50.875417 3661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvt8h\" (UniqueName: \"kubernetes.io/projected/743c0f4a-7a07-477a-a798-b9d12ac4b333-kube-api-access-rvt8h\") pod \"whisker-5f574c55d7-tswpg\" (UID: \"743c0f4a-7a07-477a-a798-b9d12ac4b333\") " pod="calico-system/whisker-5f574c55d7-tswpg" Aug 13 00:21:51.127088 containerd[2138]: time="2025-08-13T00:21:51.126705048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f574c55d7-tswpg,Uid:743c0f4a-7a07-477a-a798-b9d12ac4b333,Namespace:calico-system,Attempt:0,}" Aug 13 00:21:51.270038 kubelet[3661]: I0813 00:21:51.269371 3661 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="399bf4b8-02c0-4439-a46e-13374f3c4aff" path="/var/lib/kubelet/pods/399bf4b8-02c0-4439-a46e-13374f3c4aff/volumes" Aug 13 00:21:51.365654 (udev-worker)[4848]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:21:51.373562 systemd-networkd[1689]: calid7f7c877ba2: Link UP Aug 13 00:21:51.374060 systemd-networkd[1689]: calid7f7c877ba2: Gained carrier Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.205 [INFO][4941] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.228 [INFO][4941] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--162-k8s-whisker--5f574c55d7--tswpg-eth0 whisker-5f574c55d7- calico-system 743c0f4a-7a07-477a-a798-b9d12ac4b333 948 0 2025-08-13 00:21:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f574c55d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-31-162 whisker-5f574c55d7-tswpg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid7f7c877ba2 [] [] }} ContainerID="08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" Namespace="calico-system" Pod="whisker-5f574c55d7-tswpg" WorkloadEndpoint="ip--172--31--31--162-k8s-whisker--5f574c55d7--tswpg-" Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.228 [INFO][4941] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" Namespace="calico-system" Pod="whisker-5f574c55d7-tswpg" WorkloadEndpoint="ip--172--31--31--162-k8s-whisker--5f574c55d7--tswpg-eth0" Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.285 [INFO][4953] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" HandleID="k8s-pod-network.08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" Workload="ip--172--31--31--162-k8s-whisker--5f574c55d7--tswpg-eth0" Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.286 [INFO][4953] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" HandleID="k8s-pod-network.08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" Workload="ip--172--31--31--162-k8s-whisker--5f574c55d7--tswpg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-162", "pod":"whisker-5f574c55d7-tswpg", "timestamp":"2025-08-13 00:21:51.285791341 +0000 UTC"}, Hostname:"ip-172-31-31-162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.286 [INFO][4953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.286 [INFO][4953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.286 [INFO][4953] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-162' Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.300 [INFO][4953] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" host="ip-172-31-31-162" Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.308 [INFO][4953] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-162" Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.314 [INFO][4953] ipam/ipam.go 511: Trying affinity for 192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.317 [INFO][4953] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.320 [INFO][4953] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.320 [INFO][4953] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" host="ip-172-31-31-162" Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.323 [INFO][4953] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231 Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.332 [INFO][4953] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" host="ip-172-31-31-162" Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.344 [INFO][4953] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.37.193/26] block=192.168.37.192/26 handle="k8s-pod-network.08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" host="ip-172-31-31-162" Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.345 [INFO][4953] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.193/26] handle="k8s-pod-network.08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" host="ip-172-31-31-162" Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.345 [INFO][4953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:21:51.425268 containerd[2138]: 2025-08-13 00:21:51.345 [INFO][4953] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.193/26] IPv6=[] ContainerID="08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" HandleID="k8s-pod-network.08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" Workload="ip--172--31--31--162-k8s-whisker--5f574c55d7--tswpg-eth0"
Aug 13 00:21:51.426548 containerd[2138]: 2025-08-13 00:21:51.348 [INFO][4941] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" Namespace="calico-system" Pod="whisker-5f574c55d7-tswpg" WorkloadEndpoint="ip--172--31--31--162-k8s-whisker--5f574c55d7--tswpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-whisker--5f574c55d7--tswpg-eth0", GenerateName:"whisker-5f574c55d7-", Namespace:"calico-system", SelfLink:"", UID:"743c0f4a-7a07-477a-a798-b9d12ac4b333", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f574c55d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"", Pod:"whisker-5f574c55d7-tswpg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.37.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid7f7c877ba2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:21:51.426548 containerd[2138]: 2025-08-13 00:21:51.348 [INFO][4941] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.193/32] ContainerID="08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" Namespace="calico-system" Pod="whisker-5f574c55d7-tswpg" WorkloadEndpoint="ip--172--31--31--162-k8s-whisker--5f574c55d7--tswpg-eth0"
Aug 13 00:21:51.426548 containerd[2138]: 2025-08-13 00:21:51.349 [INFO][4941] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7f7c877ba2 ContainerID="08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" Namespace="calico-system" Pod="whisker-5f574c55d7-tswpg" WorkloadEndpoint="ip--172--31--31--162-k8s-whisker--5f574c55d7--tswpg-eth0"
Aug 13 00:21:51.426548 containerd[2138]: 2025-08-13 00:21:51.384 [INFO][4941] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" Namespace="calico-system" Pod="whisker-5f574c55d7-tswpg" WorkloadEndpoint="ip--172--31--31--162-k8s-whisker--5f574c55d7--tswpg-eth0"
Aug 13 00:21:51.426548 containerd[2138]: 2025-08-13 00:21:51.387 [INFO][4941] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" Namespace="calico-system" Pod="whisker-5f574c55d7-tswpg" WorkloadEndpoint="ip--172--31--31--162-k8s-whisker--5f574c55d7--tswpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-whisker--5f574c55d7--tswpg-eth0", GenerateName:"whisker-5f574c55d7-", Namespace:"calico-system", SelfLink:"", UID:"743c0f4a-7a07-477a-a798-b9d12ac4b333", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f574c55d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231", Pod:"whisker-5f574c55d7-tswpg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.37.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid7f7c877ba2", MAC:"66:56:db:75:4f:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:21:51.426548 containerd[2138]: 2025-08-13 00:21:51.413 [INFO][4941] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231" Namespace="calico-system" Pod="whisker-5f574c55d7-tswpg" WorkloadEndpoint="ip--172--31--31--162-k8s-whisker--5f574c55d7--tswpg-eth0"
Aug 13 00:21:51.482397 containerd[2138]: time="2025-08-13T00:21:51.481824134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:21:51.482397 containerd[2138]: time="2025-08-13T00:21:51.481952426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:21:51.482397 containerd[2138]: time="2025-08-13T00:21:51.482019698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:21:51.482397 containerd[2138]: time="2025-08-13T00:21:51.482212394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:51.689929 containerd[2138]: time="2025-08-13T00:21:51.689682855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f574c55d7-tswpg,Uid:743c0f4a-7a07-477a-a798-b9d12ac4b333,Namespace:calico-system,Attempt:0,} returns sandbox id \"08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231\"" Aug 13 00:21:51.705377 containerd[2138]: time="2025-08-13T00:21:51.705021291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:21:52.375054 kernel: bpftool[5151]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 00:21:52.735009 systemd-networkd[1689]: vxlan.calico: Link UP Aug 13 00:21:52.735128 systemd-networkd[1689]: vxlan.calico: Gained carrier Aug 13 00:21:52.790486 (udev-worker)[4849]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:21:52.909624 systemd-networkd[1689]: calid7f7c877ba2: Gained IPv6LL Aug 13 00:21:53.206588 containerd[2138]: time="2025-08-13T00:21:53.206513258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:53.211482 containerd[2138]: time="2025-08-13T00:21:53.211410554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 13 00:21:53.211649 containerd[2138]: time="2025-08-13T00:21:53.211566974Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:53.241830 containerd[2138]: time="2025-08-13T00:21:53.239957186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:53.250061 containerd[2138]: time="2025-08-13T00:21:53.248957366Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.543856443s" Aug 13 00:21:53.250340 containerd[2138]: time="2025-08-13T00:21:53.250279239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 13 00:21:53.271718 containerd[2138]: time="2025-08-13T00:21:53.271135143Z" level=info msg="CreateContainer within sandbox \"08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:21:53.308080 containerd[2138]: time="2025-08-13T00:21:53.307491891Z" level=info msg="CreateContainer within sandbox \"08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f3b1cd3431307f86e3ef0657e6d13664fb18aa347ac583e061fd4f0a92c27851\"" Aug 13 00:21:53.308537 containerd[2138]: time="2025-08-13T00:21:53.308480499Z" level=info msg="StartContainer for \"f3b1cd3431307f86e3ef0657e6d13664fb18aa347ac583e061fd4f0a92c27851\"" Aug 13 00:21:53.457831 containerd[2138]: time="2025-08-13T00:21:53.457684480Z" level=info msg="StartContainer for 
\"f3b1cd3431307f86e3ef0657e6d13664fb18aa347ac583e061fd4f0a92c27851\" returns successfully" Aug 13 00:21:53.463566 containerd[2138]: time="2025-08-13T00:21:53.463411108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:21:54.265042 containerd[2138]: time="2025-08-13T00:21:54.264539956Z" level=info msg="StopPodSandbox for \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\"" Aug 13 00:21:54.268646 containerd[2138]: time="2025-08-13T00:21:54.265414360Z" level=info msg="StopPodSandbox for \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\"" Aug 13 00:21:54.318090 systemd-networkd[1689]: vxlan.calico: Gained IPv6LL Aug 13 00:21:54.492397 containerd[2138]: 2025-08-13 00:21:54.394 [INFO][5286] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Aug 13 00:21:54.492397 containerd[2138]: 2025-08-13 00:21:54.394 [INFO][5286] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" iface="eth0" netns="/var/run/netns/cni-d1eb6cd9-03c8-a7df-be65-e64935094dd9" Aug 13 00:21:54.492397 containerd[2138]: 2025-08-13 00:21:54.396 [INFO][5286] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" iface="eth0" netns="/var/run/netns/cni-d1eb6cd9-03c8-a7df-be65-e64935094dd9" Aug 13 00:21:54.492397 containerd[2138]: 2025-08-13 00:21:54.404 [INFO][5286] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" iface="eth0" netns="/var/run/netns/cni-d1eb6cd9-03c8-a7df-be65-e64935094dd9" Aug 13 00:21:54.492397 containerd[2138]: 2025-08-13 00:21:54.405 [INFO][5286] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Aug 13 00:21:54.492397 containerd[2138]: 2025-08-13 00:21:54.405 [INFO][5286] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Aug 13 00:21:54.492397 containerd[2138]: 2025-08-13 00:21:54.463 [INFO][5301] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" HandleID="k8s-pod-network.eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Workload="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0" Aug 13 00:21:54.492397 containerd[2138]: 2025-08-13 00:21:54.464 [INFO][5301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:54.492397 containerd[2138]: 2025-08-13 00:21:54.467 [INFO][5301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:54.492397 containerd[2138]: 2025-08-13 00:21:54.483 [WARNING][5301] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" HandleID="k8s-pod-network.eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Workload="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0" Aug 13 00:21:54.492397 containerd[2138]: 2025-08-13 00:21:54.483 [INFO][5301] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" HandleID="k8s-pod-network.eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Workload="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0" Aug 13 00:21:54.492397 containerd[2138]: 2025-08-13 00:21:54.486 [INFO][5301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:54.492397 containerd[2138]: 2025-08-13 00:21:54.489 [INFO][5286] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Aug 13 00:21:54.493799 containerd[2138]: time="2025-08-13T00:21:54.493599281Z" level=info msg="TearDown network for sandbox \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\" successfully" Aug 13 00:21:54.493799 containerd[2138]: time="2025-08-13T00:21:54.493654805Z" level=info msg="StopPodSandbox for \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\" returns successfully" Aug 13 00:21:54.498028 containerd[2138]: time="2025-08-13T00:21:54.497936405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-zs6cp,Uid:2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514,Namespace:calico-system,Attempt:1,}" Aug 13 00:21:54.508585 systemd[1]: run-netns-cni\x2dd1eb6cd9\x2d03c8\x2da7df\x2dbe65\x2de64935094dd9.mount: Deactivated successfully. Aug 13 00:21:54.519773 containerd[2138]: 2025-08-13 00:21:54.405 [INFO][5287] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Aug 13 00:21:54.519773 containerd[2138]: 2025-08-13 00:21:54.405 [INFO][5287] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" iface="eth0" netns="/var/run/netns/cni-40556412-1ecb-3b79-c558-ce22358e2865" Aug 13 00:21:54.519773 containerd[2138]: 2025-08-13 00:21:54.405 [INFO][5287] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" iface="eth0" netns="/var/run/netns/cni-40556412-1ecb-3b79-c558-ce22358e2865" Aug 13 00:21:54.519773 containerd[2138]: 2025-08-13 00:21:54.406 [INFO][5287] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" iface="eth0" netns="/var/run/netns/cni-40556412-1ecb-3b79-c558-ce22358e2865" Aug 13 00:21:54.519773 containerd[2138]: 2025-08-13 00:21:54.406 [INFO][5287] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Aug 13 00:21:54.519773 containerd[2138]: 2025-08-13 00:21:54.406 [INFO][5287] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Aug 13 00:21:54.519773 containerd[2138]: 2025-08-13 00:21:54.476 [INFO][5302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" HandleID="k8s-pod-network.d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0" Aug 13 00:21:54.519773 containerd[2138]: 2025-08-13 00:21:54.476 [INFO][5302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:54.519773 containerd[2138]: 2025-08-13 00:21:54.486 [INFO][5302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:54.519773 containerd[2138]: 2025-08-13 00:21:54.507 [WARNING][5302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" HandleID="k8s-pod-network.d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0" Aug 13 00:21:54.519773 containerd[2138]: 2025-08-13 00:21:54.507 [INFO][5302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" HandleID="k8s-pod-network.d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0" Aug 13 00:21:54.519773 containerd[2138]: 2025-08-13 00:21:54.513 [INFO][5302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:54.519773 containerd[2138]: 2025-08-13 00:21:54.516 [INFO][5287] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Aug 13 00:21:54.523502 containerd[2138]: time="2025-08-13T00:21:54.521781965Z" level=info msg="TearDown network for sandbox \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\" successfully" Aug 13 00:21:54.523502 containerd[2138]: time="2025-08-13T00:21:54.521826773Z" level=info msg="StopPodSandbox for \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\" returns successfully" Aug 13 00:21:54.524413 containerd[2138]: time="2025-08-13T00:21:54.524154089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5t4,Uid:177eec0b-4e35-4df6-b815-1a477ed2acfc,Namespace:kube-system,Attempt:1,}" Aug 13 00:21:54.530683 systemd[1]: run-netns-cni\x2d40556412\x2d1ecb\x2d3b79\x2dc558\x2dce22358e2865.mount: Deactivated successfully. Aug 13 00:21:54.902520 (udev-worker)[5187]: Network interface NamePolicy= disabled on kernel command line. 
Aug 13 00:21:54.904880 systemd-networkd[1689]: cali6d700e8764b: Link UP Aug 13 00:21:54.908582 systemd-networkd[1689]: cali6d700e8764b: Gained carrier Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.660 [INFO][5315] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0 goldmane-58fd7646b9- calico-system 2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514 968 0 2025-08-13 00:21:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-31-162 goldmane-58fd7646b9-zs6cp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6d700e8764b [] [] }} ContainerID="50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" Namespace="calico-system" Pod="goldmane-58fd7646b9-zs6cp" WorkloadEndpoint="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-" Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.660 [INFO][5315] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" Namespace="calico-system" Pod="goldmane-58fd7646b9-zs6cp" WorkloadEndpoint="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0" Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.782 [INFO][5339] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" HandleID="k8s-pod-network.50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" Workload="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0" Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.783 [INFO][5339] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" HandleID="k8s-pod-network.50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" Workload="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-162", "pod":"goldmane-58fd7646b9-zs6cp", "timestamp":"2025-08-13 00:21:54.782258946 +0000 UTC"}, Hostname:"ip-172-31-31-162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.783 [INFO][5339] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.783 [INFO][5339] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.783 [INFO][5339] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-162' Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.804 [INFO][5339] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" host="ip-172-31-31-162" Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.818 [INFO][5339] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-162" Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.832 [INFO][5339] ipam/ipam.go 511: Trying affinity for 192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.835 [INFO][5339] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.846 [INFO][5339] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.846 [INFO][5339] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" host="ip-172-31-31-162" Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.852 [INFO][5339] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.863 [INFO][5339] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" host="ip-172-31-31-162" Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.883 [INFO][5339] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.37.194/26] block=192.168.37.192/26 handle="k8s-pod-network.50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" host="ip-172-31-31-162" Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.885 [INFO][5339] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.194/26] handle="k8s-pod-network.50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" host="ip-172-31-31-162" Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.885 [INFO][5339] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:21:54.951872 containerd[2138]: 2025-08-13 00:21:54.885 [INFO][5339] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.194/26] IPv6=[] ContainerID="50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" HandleID="k8s-pod-network.50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" Workload="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0"
Aug 13 00:21:54.955268 containerd[2138]: 2025-08-13 00:21:54.896 [INFO][5315] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" Namespace="calico-system" Pod="goldmane-58fd7646b9-zs6cp" WorkloadEndpoint="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"", Pod:"goldmane-58fd7646b9-zs6cp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d700e8764b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:21:54.955268 containerd[2138]: 2025-08-13 00:21:54.896 [INFO][5315] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.194/32] ContainerID="50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" Namespace="calico-system" Pod="goldmane-58fd7646b9-zs6cp" WorkloadEndpoint="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0"
Aug 13 00:21:54.955268 containerd[2138]: 2025-08-13 00:21:54.896 [INFO][5315] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d700e8764b ContainerID="50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" Namespace="calico-system" Pod="goldmane-58fd7646b9-zs6cp" WorkloadEndpoint="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0"
Aug 13 00:21:54.955268 containerd[2138]: 2025-08-13 00:21:54.910 [INFO][5315] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" Namespace="calico-system" Pod="goldmane-58fd7646b9-zs6cp" WorkloadEndpoint="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0"
Aug 13 00:21:54.955268 containerd[2138]: 2025-08-13 00:21:54.913 [INFO][5315] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" Namespace="calico-system" Pod="goldmane-58fd7646b9-zs6cp" WorkloadEndpoint="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce", Pod:"goldmane-58fd7646b9-zs6cp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d700e8764b", MAC:"6a:57:c6:80:ac:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:21:54.955268 containerd[2138]: 2025-08-13 00:21:54.941 [INFO][5315] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce" Namespace="calico-system" Pod="goldmane-58fd7646b9-zs6cp" WorkloadEndpoint="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0"
Aug 13 00:21:55.044650 containerd[2138]: time="2025-08-13T00:21:55.044301159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:21:55.044650 containerd[2138]: time="2025-08-13T00:21:55.044425191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:21:55.044650 containerd[2138]: time="2025-08-13T00:21:55.044482803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:21:55.046798 containerd[2138]: time="2025-08-13T00:21:55.045626055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:55.054800 systemd-networkd[1689]: cali93f6d79960c: Link UP Aug 13 00:21:55.063688 systemd-networkd[1689]: cali93f6d79960c: Gained carrier Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:54.688 [INFO][5325] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0 coredns-7c65d6cfc9- kube-system 177eec0b-4e35-4df6-b815-1a477ed2acfc 969 0 2025-08-13 00:21:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-162 coredns-7c65d6cfc9-mx5t4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali93f6d79960c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mx5t4" WorkloadEndpoint="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-" Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:54.689 [INFO][5325] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mx5t4" WorkloadEndpoint="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0" Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:54.865 [INFO][5344] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" HandleID="k8s-pod-network.cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0" Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:54.867 [INFO][5344] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" HandleID="k8s-pod-network.cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400037a180), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-162", "pod":"coredns-7c65d6cfc9-mx5t4", "timestamp":"2025-08-13 00:21:54.865746331 +0000 UTC"}, Hostname:"ip-172-31-31-162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:54.867 [INFO][5344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:54.885 [INFO][5344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:54.885 [INFO][5344] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-162' Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:54.936 [INFO][5344] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" host="ip-172-31-31-162" Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:54.956 [INFO][5344] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-162" Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:54.972 [INFO][5344] ipam/ipam.go 511: Trying affinity for 192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:54.977 [INFO][5344] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:54.985 [INFO][5344] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:54.985 [INFO][5344] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" host="ip-172-31-31-162" Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:54.989 [INFO][5344] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975 Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:54.998 [INFO][5344] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" host="ip-172-31-31-162" Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:55.029 [INFO][5344] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.37.195/26] block=192.168.37.192/26 handle="k8s-pod-network.cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" host="ip-172-31-31-162" Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:55.030 [INFO][5344] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.195/26] handle="k8s-pod-network.cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" host="ip-172-31-31-162" Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:55.030 [INFO][5344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:21:55.128053 containerd[2138]: 2025-08-13 00:21:55.030 [INFO][5344] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.195/26] IPv6=[] ContainerID="cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" HandleID="k8s-pod-network.cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0"
Aug 13 00:21:55.130091 containerd[2138]: 2025-08-13 00:21:55.042 [INFO][5325] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mx5t4" WorkloadEndpoint="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"177eec0b-4e35-4df6-b815-1a477ed2acfc", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"", Pod:"coredns-7c65d6cfc9-mx5t4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93f6d79960c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:21:55.130091 containerd[2138]: 2025-08-13 00:21:55.042 [INFO][5325] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.195/32] ContainerID="cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mx5t4" WorkloadEndpoint="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0"
Aug 13 00:21:55.130091 containerd[2138]: 2025-08-13 00:21:55.044 [INFO][5325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93f6d79960c ContainerID="cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mx5t4" WorkloadEndpoint="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0"
Aug 13 00:21:55.130091 containerd[2138]: 2025-08-13 00:21:55.069 [INFO][5325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mx5t4" WorkloadEndpoint="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0"
Aug 13 00:21:55.130091 containerd[2138]: 2025-08-13 00:21:55.080 [INFO][5325] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mx5t4" WorkloadEndpoint="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"177eec0b-4e35-4df6-b815-1a477ed2acfc", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975", Pod:"coredns-7c65d6cfc9-mx5t4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93f6d79960c", MAC:"6e:04:51:4a:48:81", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 13 00:21:55.130091 containerd[2138]: 2025-08-13 00:21:55.107 [INFO][5325] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mx5t4" WorkloadEndpoint="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0"
Aug 13 00:21:55.236485 containerd[2138]: time="2025-08-13T00:21:55.235728556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:21:55.236485 containerd[2138]: time="2025-08-13T00:21:55.235845172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:21:55.237785 containerd[2138]: time="2025-08-13T00:21:55.235881856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:21:55.237785 containerd[2138]: time="2025-08-13T00:21:55.237090472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:55.258523 containerd[2138]: time="2025-08-13T00:21:55.258329704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-zs6cp,Uid:2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514,Namespace:calico-system,Attempt:1,} returns sandbox id \"50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce\"" Aug 13 00:21:55.268052 containerd[2138]: time="2025-08-13T00:21:55.265446509Z" level=info msg="StopPodSandbox for \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\"" Aug 13 00:21:55.420221 containerd[2138]: time="2025-08-13T00:21:55.420170393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5t4,Uid:177eec0b-4e35-4df6-b815-1a477ed2acfc,Namespace:kube-system,Attempt:1,} returns sandbox id \"cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975\"" Aug 13 00:21:55.434014 containerd[2138]: time="2025-08-13T00:21:55.433823201Z" level=info msg="CreateContainer within sandbox \"cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:21:55.480778 containerd[2138]: time="2025-08-13T00:21:55.480710994Z" level=info msg="CreateContainer within sandbox \"cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7d103b9310ed65e0d0287282be212b581b07b22b0b33d9a71f9feb331c2f211c\"" Aug 13 00:21:55.482909 containerd[2138]: time="2025-08-13T00:21:55.482554470Z" level=info msg="StartContainer for \"7d103b9310ed65e0d0287282be212b581b07b22b0b33d9a71f9feb331c2f211c\"" Aug 13 00:21:55.615779 systemd[1]: run-containerd-runc-k8s.io-7d103b9310ed65e0d0287282be212b581b07b22b0b33d9a71f9feb331c2f211c-runc.cbj7NA.mount: Deactivated successfully. Aug 13 00:21:55.752022 containerd[2138]: 2025-08-13 00:21:55.468 [INFO][5456] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Aug 13 00:21:55.752022 containerd[2138]: 2025-08-13 00:21:55.468 [INFO][5456] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" iface="eth0" netns="/var/run/netns/cni-e5ac3f26-6674-bdbe-af04-67c2067f7005" Aug 13 00:21:55.752022 containerd[2138]: 2025-08-13 00:21:55.469 [INFO][5456] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" iface="eth0" netns="/var/run/netns/cni-e5ac3f26-6674-bdbe-af04-67c2067f7005" Aug 13 00:21:55.752022 containerd[2138]: 2025-08-13 00:21:55.470 [INFO][5456] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" iface="eth0" netns="/var/run/netns/cni-e5ac3f26-6674-bdbe-af04-67c2067f7005" Aug 13 00:21:55.752022 containerd[2138]: 2025-08-13 00:21:55.470 [INFO][5456] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Aug 13 00:21:55.752022 containerd[2138]: 2025-08-13 00:21:55.470 [INFO][5456] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Aug 13 00:21:55.752022 containerd[2138]: 2025-08-13 00:21:55.677 [INFO][5478] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" HandleID="k8s-pod-network.1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" Aug 13 00:21:55.752022 containerd[2138]: 2025-08-13 00:21:55.680 [INFO][5478] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:55.752022 containerd[2138]: 2025-08-13 00:21:55.681 [INFO][5478] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:55.752022 containerd[2138]: 2025-08-13 00:21:55.714 [WARNING][5478] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" HandleID="k8s-pod-network.1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" Aug 13 00:21:55.752022 containerd[2138]: 2025-08-13 00:21:55.714 [INFO][5478] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" HandleID="k8s-pod-network.1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" Aug 13 00:21:55.752022 containerd[2138]: 2025-08-13 00:21:55.724 [INFO][5478] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:55.752022 containerd[2138]: 2025-08-13 00:21:55.736 [INFO][5456] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Aug 13 00:21:55.758106 containerd[2138]: time="2025-08-13T00:21:55.756238447Z" level=info msg="TearDown network for sandbox \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\" successfully" Aug 13 00:21:55.758106 containerd[2138]: time="2025-08-13T00:21:55.756304111Z" level=info msg="StopPodSandbox for \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\" returns successfully" Aug 13 00:21:55.761249 containerd[2138]: time="2025-08-13T00:21:55.759471463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gggpz,Uid:aa0e8597-87b1-46a6-b15d-ea2b84ced854,Namespace:kube-system,Attempt:1,}" Aug 13 00:21:55.766965 systemd[1]: run-netns-cni\x2de5ac3f26\x2d6674\x2dbdbe\x2daf04\x2d67c2067f7005.mount: Deactivated successfully. 
Aug 13 00:21:55.896077 containerd[2138]: time="2025-08-13T00:21:55.895733396Z" level=info msg="StartContainer for \"7d103b9310ed65e0d0287282be212b581b07b22b0b33d9a71f9feb331c2f211c\" returns successfully" Aug 13 00:21:56.236595 systemd-networkd[1689]: cali6d700e8764b: Gained IPv6LL Aug 13 00:21:56.274375 containerd[2138]: time="2025-08-13T00:21:56.273856590Z" level=info msg="StopPodSandbox for \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\"" Aug 13 00:21:56.278910 containerd[2138]: time="2025-08-13T00:21:56.277728342Z" level=info msg="StopPodSandbox for \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\"" Aug 13 00:21:56.358522 systemd[1]: Started sshd@7-172.31.31.162:22-139.178.89.65:58774.service - OpenSSH per-connection server daemon (139.178.89.65:58774). Aug 13 00:21:56.429150 systemd-networkd[1689]: cali93f6d79960c: Gained IPv6LL Aug 13 00:21:56.617445 sshd[5564]: Accepted publickey for core from 139.178.89.65 port 58774 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:56.628512 sshd[5564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:56.646664 systemd-networkd[1689]: cali44f7845f4c4: Link UP Aug 13 00:21:56.647025 systemd-networkd[1689]: cali44f7845f4c4: Gained carrier Aug 13 00:21:56.657948 systemd-logind[2103]: New session 8 of user core. Aug 13 00:21:56.661609 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.131 [INFO][5515] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0 coredns-7c65d6cfc9- kube-system aa0e8597-87b1-46a6-b15d-ea2b84ced854 983 0 2025-08-13 00:21:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-162 coredns-7c65d6cfc9-gggpz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali44f7845f4c4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gggpz" WorkloadEndpoint="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-" Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.132 [INFO][5515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gggpz" WorkloadEndpoint="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.394 [INFO][5532] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" HandleID="k8s-pod-network.1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.395 [INFO][5532] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" HandleID="k8s-pod-network.1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40000a3700), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-162", "pod":"coredns-7c65d6cfc9-gggpz", "timestamp":"2025-08-13 00:21:56.39469533 +0000 UTC"}, Hostname:"ip-172-31-31-162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.395 [INFO][5532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.396 [INFO][5532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.396 [INFO][5532] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-162' Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.447 [INFO][5532] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" host="ip-172-31-31-162" Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.463 [INFO][5532] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-162" Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.535 [INFO][5532] ipam/ipam.go 511: Trying affinity for 192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.555 [INFO][5532] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.560 [INFO][5532] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.560 [INFO][5532] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" host="ip-172-31-31-162" Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.565 [INFO][5532] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2 Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.582 [INFO][5532] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" host="ip-172-31-31-162" Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.599 [INFO][5532] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.37.196/26] block=192.168.37.192/26 handle="k8s-pod-network.1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" host="ip-172-31-31-162" Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.599 [INFO][5532] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.196/26] handle="k8s-pod-network.1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" host="ip-172-31-31-162" Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.603 [INFO][5532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:21:56.739059 containerd[2138]: 2025-08-13 00:21:56.604 [INFO][5532] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.196/26] IPv6=[] ContainerID="1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" HandleID="k8s-pod-network.1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" Aug 13 00:21:56.744788 containerd[2138]: 2025-08-13 00:21:56.638 [INFO][5515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gggpz" WorkloadEndpoint="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"aa0e8597-87b1-46a6-b15d-ea2b84ced854", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"", Pod:"coredns-7c65d6cfc9-gggpz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44f7845f4c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:56.744788 containerd[2138]: 2025-08-13 00:21:56.638 [INFO][5515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.196/32] ContainerID="1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gggpz" WorkloadEndpoint="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" Aug 13 00:21:56.744788 containerd[2138]: 2025-08-13 00:21:56.638 [INFO][5515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44f7845f4c4 ContainerID="1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gggpz" WorkloadEndpoint="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" Aug 13 00:21:56.744788 containerd[2138]: 2025-08-13 00:21:56.644 [INFO][5515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gggpz" 
WorkloadEndpoint="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" Aug 13 00:21:56.744788 containerd[2138]: 2025-08-13 00:21:56.653 [INFO][5515] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gggpz" WorkloadEndpoint="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"aa0e8597-87b1-46a6-b15d-ea2b84ced854", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2", Pod:"coredns-7c65d6cfc9-gggpz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44f7845f4c4", MAC:"da:38:5d:35:40:5b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:56.744788 containerd[2138]: 2025-08-13 00:21:56.696 [INFO][5515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gggpz" WorkloadEndpoint="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" Aug 13 00:21:56.952039 containerd[2138]: time="2025-08-13T00:21:56.948380013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:21:56.952039 containerd[2138]: time="2025-08-13T00:21:56.948492117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:21:56.952039 containerd[2138]: time="2025-08-13T00:21:56.948520881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:56.952039 containerd[2138]: time="2025-08-13T00:21:56.948708309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:57.017154 containerd[2138]: 2025-08-13 00:21:56.620 [INFO][5563] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Aug 13 00:21:57.017154 containerd[2138]: 2025-08-13 00:21:56.620 [INFO][5563] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" iface="eth0" netns="/var/run/netns/cni-21249dde-eb2e-a434-9716-eac03ed1f153" Aug 13 00:21:57.017154 containerd[2138]: 2025-08-13 00:21:56.627 [INFO][5563] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" iface="eth0" netns="/var/run/netns/cni-21249dde-eb2e-a434-9716-eac03ed1f153" Aug 13 00:21:57.017154 containerd[2138]: 2025-08-13 00:21:56.630 [INFO][5563] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" iface="eth0" netns="/var/run/netns/cni-21249dde-eb2e-a434-9716-eac03ed1f153" Aug 13 00:21:57.017154 containerd[2138]: 2025-08-13 00:21:56.630 [INFO][5563] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Aug 13 00:21:57.017154 containerd[2138]: 2025-08-13 00:21:56.630 [INFO][5563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Aug 13 00:21:57.017154 containerd[2138]: 2025-08-13 00:21:56.889 [INFO][5580] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" HandleID="k8s-pod-network.31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Workload="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" Aug 13 00:21:57.017154 containerd[2138]: 2025-08-13 00:21:56.890 [INFO][5580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:57.017154 containerd[2138]: 2025-08-13 00:21:56.892 [INFO][5580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:57.017154 containerd[2138]: 2025-08-13 00:21:56.983 [WARNING][5580] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" HandleID="k8s-pod-network.31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Workload="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" Aug 13 00:21:57.017154 containerd[2138]: 2025-08-13 00:21:56.983 [INFO][5580] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" HandleID="k8s-pod-network.31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Workload="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" Aug 13 00:21:57.017154 containerd[2138]: 2025-08-13 00:21:56.993 [INFO][5580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:57.017154 containerd[2138]: 2025-08-13 00:21:57.006 [INFO][5563] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Aug 13 00:21:57.021676 containerd[2138]: time="2025-08-13T00:21:57.021534245Z" level=info msg="TearDown network for sandbox \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\" successfully" Aug 13 00:21:57.021676 containerd[2138]: time="2025-08-13T00:21:57.021589121Z" level=info msg="StopPodSandbox for \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\" returns successfully" Aug 13 00:21:57.031874 systemd[1]: run-netns-cni\x2d21249dde\x2deb2e\x2da434\x2d9716\x2deac03ed1f153.mount: Deactivated successfully. Aug 13 00:21:57.036429 containerd[2138]: time="2025-08-13T00:21:57.035797733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4xwwn,Uid:5a310c1b-b07f-40c9-96ad-53e0942080e1,Namespace:calico-system,Attempt:1,}" Aug 13 00:21:57.147442 containerd[2138]: 2025-08-13 00:21:56.623 [INFO][5555] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Aug 13 00:21:57.147442 containerd[2138]: 2025-08-13 00:21:56.623 [INFO][5555] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" iface="eth0" netns="/var/run/netns/cni-cef935a6-12ba-eadf-d334-b949fbdcdfcc" Aug 13 00:21:57.147442 containerd[2138]: 2025-08-13 00:21:56.626 [INFO][5555] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" iface="eth0" netns="/var/run/netns/cni-cef935a6-12ba-eadf-d334-b949fbdcdfcc" Aug 13 00:21:57.147442 containerd[2138]: 2025-08-13 00:21:56.626 [INFO][5555] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" iface="eth0" netns="/var/run/netns/cni-cef935a6-12ba-eadf-d334-b949fbdcdfcc" Aug 13 00:21:57.147442 containerd[2138]: 2025-08-13 00:21:56.626 [INFO][5555] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Aug 13 00:21:57.147442 containerd[2138]: 2025-08-13 00:21:56.626 [INFO][5555] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Aug 13 00:21:57.147442 containerd[2138]: 2025-08-13 00:21:57.039 [INFO][5578] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" HandleID="k8s-pod-network.9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" Aug 13 00:21:57.147442 containerd[2138]: 2025-08-13 00:21:57.041 [INFO][5578] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:57.147442 containerd[2138]: 2025-08-13 00:21:57.043 [INFO][5578] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:57.147442 containerd[2138]: 2025-08-13 00:21:57.073 [WARNING][5578] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" HandleID="k8s-pod-network.9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" Aug 13 00:21:57.147442 containerd[2138]: 2025-08-13 00:21:57.073 [INFO][5578] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" HandleID="k8s-pod-network.9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" Aug 13 00:21:57.147442 containerd[2138]: 2025-08-13 00:21:57.076 [INFO][5578] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:57.147442 containerd[2138]: 2025-08-13 00:21:57.093 [INFO][5555] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Aug 13 00:21:57.157380 containerd[2138]: time="2025-08-13T00:21:57.153637254Z" level=info msg="TearDown network for sandbox \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\" successfully" Aug 13 00:21:57.157380 containerd[2138]: time="2025-08-13T00:21:57.155044782Z" level=info msg="StopPodSandbox for \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\" returns successfully" Aug 13 00:21:57.167189 containerd[2138]: time="2025-08-13T00:21:57.166599246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5768c76bdb-97r48,Uid:24486bb1-f01d-44a5-bd10-5c11bbdaf03f,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:21:57.194506 sshd[5564]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:57.208690 systemd[1]: sshd@7-172.31.31.162:22-139.178.89.65:58774.service: Deactivated successfully. Aug 13 00:21:57.223678 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:21:57.224271 systemd-logind[2103]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:21:57.235497 systemd-logind[2103]: Removed session 8. Aug 13 00:21:57.293316 containerd[2138]: time="2025-08-13T00:21:57.292795111Z" level=info msg="StopPodSandbox for \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\"" Aug 13 00:21:57.381386 containerd[2138]: time="2025-08-13T00:21:57.381318007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gggpz,Uid:aa0e8597-87b1-46a6-b15d-ea2b84ced854,Namespace:kube-system,Attempt:1,} returns sandbox id \"1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2\"" Aug 13 00:21:57.388354 containerd[2138]: time="2025-08-13T00:21:57.388303183Z" level=info msg="CreateContainer within sandbox \"1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:21:57.440023 containerd[2138]: time="2025-08-13T00:21:57.438448735Z" level=info msg="CreateContainer within sandbox \"1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19a8c14779f434a8f00e9deb601672abfeca5ddefaec1f459496be3438f76dab\"" Aug 13 00:21:57.440900 containerd[2138]: time="2025-08-13T00:21:57.440831455Z" level=info msg="StartContainer for \"19a8c14779f434a8f00e9deb601672abfeca5ddefaec1f459496be3438f76dab\"" Aug 13 00:21:57.512694 systemd[1]: run-netns-cni\x2dcef935a6\x2d12ba\x2deadf\x2dd334\x2db949fbdcdfcc.mount: Deactivated successfully. 
Aug 13 00:21:57.655863 kubelet[3661]: I0813 00:21:57.655748 3661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mx5t4" podStartSLOduration=49.655725416 podStartE2EDuration="49.655725416s" podCreationTimestamp="2025-08-13 00:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:21:56.836288444 +0000 UTC m=+53.844171988" watchObservedRunningTime="2025-08-13 00:21:57.655725416 +0000 UTC m=+54.663608924" Aug 13 00:21:57.799529 systemd-networkd[1689]: calie46fb86b5a4: Link UP Aug 13 00:21:57.799936 systemd-networkd[1689]: calie46fb86b5a4: Gained carrier Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.287 [INFO][5647] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0 csi-node-driver- calico-system 5a310c1b-b07f-40c9-96ad-53e0942080e1 1021 0 2025-08-13 00:21:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-31-162 csi-node-driver-4xwwn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie46fb86b5a4 [] [] }} ContainerID="9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" Namespace="calico-system" Pod="csi-node-driver-4xwwn" WorkloadEndpoint="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-" Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.287 [INFO][5647] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" Namespace="calico-system" Pod="csi-node-driver-4xwwn" WorkloadEndpoint="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.574 [INFO][5680] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" HandleID="k8s-pod-network.9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" Workload="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.575 [INFO][5680] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" HandleID="k8s-pod-network.9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" Workload="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001f7430), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-162", "pod":"csi-node-driver-4xwwn", "timestamp":"2025-08-13 00:21:57.574289564 +0000 UTC"}, Hostname:"ip-172-31-31-162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.575 [INFO][5680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.575 [INFO][5680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.575 [INFO][5680] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-162' Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.614 [INFO][5680] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" host="ip-172-31-31-162" Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.630 [INFO][5680] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-162" Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.661 [INFO][5680] ipam/ipam.go 511: Trying affinity for 192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.674 [INFO][5680] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.701 [INFO][5680] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.701 [INFO][5680] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" host="ip-172-31-31-162" Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.704 [INFO][5680] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.717 [INFO][5680] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" host="ip-172-31-31-162" Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.750 [INFO][5680] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.37.197/26] block=192.168.37.192/26 handle="k8s-pod-network.9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" host="ip-172-31-31-162" Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.750 [INFO][5680] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.197/26] handle="k8s-pod-network.9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" host="ip-172-31-31-162" Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.750 [INFO][5680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:21:57.889614 containerd[2138]: 2025-08-13 00:21:57.756 [INFO][5680] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.197/26] IPv6=[] ContainerID="9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" HandleID="k8s-pod-network.9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" Workload="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" Aug 13 00:21:57.928169 containerd[2138]: 2025-08-13 00:21:57.777 [INFO][5647] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" Namespace="calico-system" Pod="csi-node-driver-4xwwn" WorkloadEndpoint="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a310c1b-b07f-40c9-96ad-53e0942080e1", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"", Pod:"csi-node-driver-4xwwn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie46fb86b5a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:57.928169 containerd[2138]: 2025-08-13 00:21:57.779 [INFO][5647] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.197/32] ContainerID="9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" Namespace="calico-system" Pod="csi-node-driver-4xwwn" WorkloadEndpoint="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" Aug 13 00:21:57.928169 containerd[2138]: 2025-08-13 00:21:57.779 [INFO][5647] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie46fb86b5a4 ContainerID="9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" Namespace="calico-system" Pod="csi-node-driver-4xwwn" WorkloadEndpoint="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" Aug 13 00:21:57.928169 containerd[2138]: 2025-08-13 00:21:57.801 [INFO][5647] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" Namespace="calico-system" Pod="csi-node-driver-4xwwn" WorkloadEndpoint="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" Aug 13 00:21:57.928169 containerd[2138]: 2025-08-13 00:21:57.804 [INFO][5647] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" 
Namespace="calico-system" Pod="csi-node-driver-4xwwn" WorkloadEndpoint="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a310c1b-b07f-40c9-96ad-53e0942080e1", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b", Pod:"csi-node-driver-4xwwn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie46fb86b5a4", MAC:"36:a1:f6:9a:17:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:57.928169 containerd[2138]: 2025-08-13 00:21:57.869 [INFO][5647] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b" Namespace="calico-system" Pod="csi-node-driver-4xwwn" WorkloadEndpoint="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" Aug 13 00:21:57.939774 containerd[2138]: time="2025-08-13T00:21:57.938709586Z" level=info msg="StartContainer for \"19a8c14779f434a8f00e9deb601672abfeca5ddefaec1f459496be3438f76dab\" returns successfully" Aug 13 00:21:58.125975 containerd[2138]: time="2025-08-13T00:21:58.125382475Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:21:58.125975 containerd[2138]: time="2025-08-13T00:21:58.125466979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:21:58.125975 containerd[2138]: time="2025-08-13T00:21:58.125491543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:58.125975 containerd[2138]: time="2025-08-13T00:21:58.125654551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:58.220340 systemd-resolved[2024]: Under memory pressure, flushing caches. Aug 13 00:21:58.223305 systemd-journald[1606]: Under memory pressure, flushing caches. Aug 13 00:21:58.220408 systemd-resolved[2024]: Flushed all caches. 
Aug 13 00:21:58.271587 containerd[2138]: time="2025-08-13T00:21:58.269464999Z" level=info msg="StopPodSandbox for \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\"" Aug 13 00:21:58.297248 systemd-networkd[1689]: cali2bb88cd7496: Link UP Aug 13 00:21:58.316413 systemd-networkd[1689]: cali2bb88cd7496: Gained carrier Aug 13 00:21:58.346032 containerd[2138]: 2025-08-13 00:21:57.637 [INFO][5688] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Aug 13 00:21:58.346032 containerd[2138]: 2025-08-13 00:21:57.637 [INFO][5688] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" iface="eth0" netns="/var/run/netns/cni-599a5602-343c-3828-9726-0a9d541fd1ec" Aug 13 00:21:58.346032 containerd[2138]: 2025-08-13 00:21:57.642 [INFO][5688] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" iface="eth0" netns="/var/run/netns/cni-599a5602-343c-3828-9726-0a9d541fd1ec" Aug 13 00:21:58.346032 containerd[2138]: 2025-08-13 00:21:57.677 [INFO][5688] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" iface="eth0" netns="/var/run/netns/cni-599a5602-343c-3828-9726-0a9d541fd1ec" Aug 13 00:21:58.346032 containerd[2138]: 2025-08-13 00:21:57.677 [INFO][5688] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Aug 13 00:21:58.346032 containerd[2138]: 2025-08-13 00:21:57.677 [INFO][5688] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Aug 13 00:21:58.346032 containerd[2138]: 2025-08-13 00:21:58.011 [INFO][5744] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" HandleID="k8s-pod-network.e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Workload="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" Aug 13 00:21:58.346032 containerd[2138]: 2025-08-13 00:21:58.016 [INFO][5744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:58.346032 containerd[2138]: 2025-08-13 00:21:58.213 [INFO][5744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:58.346032 containerd[2138]: 2025-08-13 00:21:58.236 [WARNING][5744] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" HandleID="k8s-pod-network.e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Workload="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" Aug 13 00:21:58.346032 containerd[2138]: 2025-08-13 00:21:58.236 [INFO][5744] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" HandleID="k8s-pod-network.e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Workload="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" Aug 13 00:21:58.346032 containerd[2138]: 2025-08-13 00:21:58.245 [INFO][5744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:21:58.346032 containerd[2138]: 2025-08-13 00:21:58.293 [INFO][5688] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Aug 13 00:21:58.353432 systemd-networkd[1689]: cali44f7845f4c4: Gained IPv6LL Aug 13 00:21:58.359388 containerd[2138]: time="2025-08-13T00:21:58.354243284Z" level=info msg="TearDown network for sandbox \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\" successfully" Aug 13 00:21:58.359388 containerd[2138]: time="2025-08-13T00:21:58.354291044Z" level=info msg="StopPodSandbox for \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\" returns successfully" Aug 13 00:21:58.386393 containerd[2138]: time="2025-08-13T00:21:58.385508276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764ff7f5f7-s9b5s,Uid:0bdee9ba-0652-4e5a-aa31-e915cc90ffb9,Namespace:calico-system,Attempt:1,}" Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:57.519 [INFO][5666] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0 calico-apiserver-5768c76bdb- calico-apiserver 24486bb1-f01d-44a5-bd10-5c11bbdaf03f 1022 0 2025-08-13 00:21:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5768c76bdb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-162 calico-apiserver-5768c76bdb-97r48 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2bb88cd7496 [] [] }} ContainerID="095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" Namespace="calico-apiserver" Pod="calico-apiserver-5768c76bdb-97r48" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-" Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:57.520 [INFO][5666] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" Namespace="calico-apiserver" Pod="calico-apiserver-5768c76bdb-97r48" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:57.898 [INFO][5719] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" HandleID="k8s-pod-network.095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:57.936 [INFO][5719] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" HandleID="k8s-pod-network.095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ca00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-162", "pod":"calico-apiserver-5768c76bdb-97r48", "timestamp":"2025-08-13 00:21:57.898113982 +0000 UTC"}, Hostname:"ip-172-31-31-162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:57.940 [INFO][5719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:57.945 [INFO][5719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:57.946 [INFO][5719] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-162' Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:58.025 [INFO][5719] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" host="ip-172-31-31-162" Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:58.079 [INFO][5719] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-162" Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:58.104 [INFO][5719] ipam/ipam.go 511: Trying affinity for 192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:58.115 [INFO][5719] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:58.131 [INFO][5719] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:58.131 [INFO][5719] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" host="ip-172-31-31-162" Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:58.142 [INFO][5719] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:58.172 [INFO][5719] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" host="ip-172-31-31-162" Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:58.211 [INFO][5719] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.37.198/26] block=192.168.37.192/26 handle="k8s-pod-network.095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" host="ip-172-31-31-162" Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:58.211 [INFO][5719] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.198/26] handle="k8s-pod-network.095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" host="ip-172-31-31-162" Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:58.211 [INFO][5719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:21:58.403237 containerd[2138]: 2025-08-13 00:21:58.211 [INFO][5719] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.198/26] IPv6=[] ContainerID="095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" HandleID="k8s-pod-network.095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" Aug 13 00:21:58.404475 containerd[2138]: 2025-08-13 00:21:58.220 [INFO][5666] cni-plugin/k8s.go 418: Populated endpoint ContainerID="095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" Namespace="calico-apiserver" Pod="calico-apiserver-5768c76bdb-97r48" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0", GenerateName:"calico-apiserver-5768c76bdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"24486bb1-f01d-44a5-bd10-5c11bbdaf03f", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5768c76bdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"", Pod:"calico-apiserver-5768c76bdb-97r48", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2bb88cd7496", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:58.404475 containerd[2138]: 2025-08-13 00:21:58.221 [INFO][5666] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.198/32] ContainerID="095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" Namespace="calico-apiserver" Pod="calico-apiserver-5768c76bdb-97r48" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" Aug 13 00:21:58.404475 containerd[2138]: 2025-08-13 00:21:58.221 [INFO][5666] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2bb88cd7496 ContainerID="095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" Namespace="calico-apiserver" Pod="calico-apiserver-5768c76bdb-97r48" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" Aug 13 00:21:58.404475 containerd[2138]: 2025-08-13 00:21:58.297 [INFO][5666] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" Namespace="calico-apiserver" Pod="calico-apiserver-5768c76bdb-97r48" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" Aug 13 00:21:58.404475 containerd[2138]: 2025-08-13 00:21:58.328 [INFO][5666] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" Namespace="calico-apiserver" Pod="calico-apiserver-5768c76bdb-97r48" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0", GenerateName:"calico-apiserver-5768c76bdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"24486bb1-f01d-44a5-bd10-5c11bbdaf03f", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5768c76bdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b", Pod:"calico-apiserver-5768c76bdb-97r48", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2bb88cd7496", MAC:"0a:6f:dc:0c:01:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:58.404475 containerd[2138]: 2025-08-13 00:21:58.362 [INFO][5666] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b" Namespace="calico-apiserver" Pod="calico-apiserver-5768c76bdb-97r48" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" Aug 13 00:21:58.525039 systemd[1]: run-netns-cni\x2d599a5602\x2d343c\x2d3828\x2d9726\x2d0a9d541fd1ec.mount: Deactivated successfully. Aug 13 00:21:58.583373 containerd[2138]: time="2025-08-13T00:21:58.578938245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:21:58.595285 containerd[2138]: time="2025-08-13T00:21:58.588800541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:21:58.595285 containerd[2138]: time="2025-08-13T00:21:58.588877929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:58.595285 containerd[2138]: time="2025-08-13T00:21:58.589153389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:58.757862 containerd[2138]: time="2025-08-13T00:21:58.757717462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4xwwn,Uid:5a310c1b-b07f-40c9-96ad-53e0942080e1,Namespace:calico-system,Attempt:1,} returns sandbox id \"9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b\"" Aug 13 00:21:59.095901 kubelet[3661]: I0813 00:21:59.095278 3661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gggpz" podStartSLOduration=51.094925288 podStartE2EDuration="51.094925288s" podCreationTimestamp="2025-08-13 00:21:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:21:59.051308227 +0000 UTC m=+56.059191759" watchObservedRunningTime="2025-08-13 00:21:59.094925288 +0000 UTC m=+56.102808892" Aug 13 00:21:59.183879 systemd-networkd[1689]: calie46fb86b5a4: Gained IPv6LL Aug 13 00:21:59.375026 containerd[2138]: time="2025-08-13T00:21:59.372742197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5768c76bdb-97r48,Uid:24486bb1-f01d-44a5-bd10-5c11bbdaf03f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b\"" Aug 13 00:21:59.398540 systemd-networkd[1689]: cali7c8d63b9505: Link UP Aug 13 00:21:59.399664 systemd-networkd[1689]: cali7c8d63b9505: Gained carrier Aug 13 00:21:59.414238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2479558103.mount: Deactivated successfully. Aug 13 00:21:59.427315 containerd[2138]: 2025-08-13 00:21:58.893 [INFO][5831] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Aug 13 00:21:59.427315 containerd[2138]: 2025-08-13 00:21:58.893 [INFO][5831] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" iface="eth0" netns="/var/run/netns/cni-33507c9c-b1af-aa26-d84e-60a56b4e76c9" Aug 13 00:21:59.427315 containerd[2138]: 2025-08-13 00:21:58.896 [INFO][5831] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" iface="eth0" netns="/var/run/netns/cni-33507c9c-b1af-aa26-d84e-60a56b4e76c9" Aug 13 00:21:59.427315 containerd[2138]: 2025-08-13 00:21:58.902 [INFO][5831] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" iface="eth0" netns="/var/run/netns/cni-33507c9c-b1af-aa26-d84e-60a56b4e76c9" Aug 13 00:21:59.427315 containerd[2138]: 2025-08-13 00:21:58.905 [INFO][5831] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Aug 13 00:21:59.427315 containerd[2138]: 2025-08-13 00:21:58.905 [INFO][5831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Aug 13 00:21:59.427315 containerd[2138]: 2025-08-13 00:21:59.258 [INFO][5886] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" HandleID="k8s-pod-network.2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" Aug 13 00:21:59.427315 containerd[2138]: 2025-08-13 00:21:59.259 [INFO][5886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:59.427315 containerd[2138]: 2025-08-13 00:21:59.338 [INFO][5886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:59.427315 containerd[2138]: 2025-08-13 00:21:59.385 [WARNING][5886] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" HandleID="k8s-pod-network.2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" Aug 13 00:21:59.427315 containerd[2138]: 2025-08-13 00:21:59.387 [INFO][5886] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" HandleID="k8s-pod-network.2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" Aug 13 00:21:59.427315 containerd[2138]: 2025-08-13 00:21:59.397 [INFO][5886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:59.427315 containerd[2138]: 2025-08-13 00:21:59.409 [INFO][5831] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Aug 13 00:21:59.429411 containerd[2138]: time="2025-08-13T00:21:59.429179637Z" level=info msg="TearDown network for sandbox \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\" successfully" Aug 13 00:21:59.429411 containerd[2138]: time="2025-08-13T00:21:59.429235101Z" level=info msg="StopPodSandbox for \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\" returns successfully" Aug 13 00:21:59.438883 systemd[1]: run-netns-cni\x2d33507c9c\x2db1af\x2daa26\x2dd84e\x2d60a56b4e76c9.mount: Deactivated successfully. 
Aug 13 00:21:59.444317 containerd[2138]: time="2025-08-13T00:21:59.442864017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5768c76bdb-qb8dv,Uid:3cf1659a-34f6-4f08-a0e0-5a806126f297,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:58.817 [INFO][5835] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0 calico-kube-controllers-764ff7f5f7- calico-system 0bdee9ba-0652-4e5a-aa31-e915cc90ffb9 1036 0 2025-08-13 00:21:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:764ff7f5f7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-31-162 calico-kube-controllers-764ff7f5f7-s9b5s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7c8d63b9505 [] [] }} ContainerID="4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" Namespace="calico-system" Pod="calico-kube-controllers-764ff7f5f7-s9b5s" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-" Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:58.818 [INFO][5835] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" Namespace="calico-system" Pod="calico-kube-controllers-764ff7f5f7-s9b5s" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.215 [INFO][5891] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" HandleID="k8s-pod-network.4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" Workload="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.216 [INFO][5891] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" HandleID="k8s-pod-network.4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" Workload="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dbd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-162", "pod":"calico-kube-controllers-764ff7f5f7-s9b5s", "timestamp":"2025-08-13 00:21:59.2148584 +0000 UTC"}, Hostname:"ip-172-31-31-162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.217 [INFO][5891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.218 [INFO][5891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
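The IPAM request itself is printed as a Go value (ipam.AutoAssignArgs{Num4:1, Num6:0, ...}): every pod here asks for exactly one IPv4 address and no IPv6, with the namespace, node, and pod name attached as attributes. A trimmed-down mirror of only the fields visible in the dump (the real libcalico-go type carries more):

package main

import "fmt"

// autoAssignArgs mirrors only the fields shown in the log's
// ipam.AutoAssignArgs dump; it is not the real Calico type.
type autoAssignArgs struct {
	Num4, Num6 int
	HandleID   string
	Attrs      map[string]string
	Hostname   string
}

func main() {
	args := autoAssignArgs{
		Num4:     1, // "Calico CNI IPAM request count IPv4=1 IPv6=0"
		Num6:     0,
		HandleID: "k8s-pod-network.4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a",
		Attrs: map[string]string{
			"namespace": "calico-system",
			"node":      "ip-172-31-31-162",
			"pod":       "calico-kube-controllers-764ff7f5f7-s9b5s",
		},
		Hostname: "ip-172-31-31-162",
	}
	fmt.Printf("%+v\n", args)
}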
Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.218 [INFO][5891] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-162' Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.249 [INFO][5891] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" host="ip-172-31-31-162" Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.273 [INFO][5891] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-162" Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.286 [INFO][5891] ipam/ipam.go 511: Trying affinity for 192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.292 [INFO][5891] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.300 [INFO][5891] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.303 [INFO][5891] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" host="ip-172-31-31-162" Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.310 [INFO][5891] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.321 [INFO][5891] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" host="ip-172-31-31-162" Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.336 [INFO][5891] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.37.199/26] block=192.168.37.192/26 handle="k8s-pod-network.4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" host="ip-172-31-31-162" Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.336 [INFO][5891] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.199/26] handle="k8s-pod-network.4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" host="ip-172-31-31-162" Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.336 [INFO][5891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
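Between "Acquired host-wide IPAM lock" and "Released host-wide IPAM lock" the allocator walks a fixed ladder: look up this host's block affinities, try the affine block 192.168.37.192/26, load it, assign one address from it, create a handle, and write the block back to make the claim durable. A condensed sketch of that ladder over a hypothetical block type (real Calico blocks also track attributes and sequence numbers):

package main

import "fmt"

// block is a stand-in for a Calico IPAM block: a /26 whose 64 ordinals
// each record the owning handle ("" means free).
type block struct {
	cidr   string
	owners [64]string
}

// assignOne follows the steps the log narrates: scan the affine block for
// a free ordinal, record the handle, and treat the write-back of the
// block as the durable claim ("Writing block in order to claim IPs").
func (b *block) assignOne(handle string) (int, bool) {
	for ord, owner := range b.owners {
		if owner == "" {
			b.owners[ord] = handle
			return ord, true
		}
	}
	return 0, false // block exhausted; real IPAM would claim a new block
}

func main() {
	b := &block{cidr: "192.168.37.192/26"}
	// By this point in the log, .192 (the vxlan.calico address) through
	// .198 are in use, i.e. ordinals 0-6.
	for i := 0; i <= 6; i++ {
		b.owners[i] = "earlier-handle"
	}
	ord, _ := b.assignOne("k8s-pod-network.<sandbox-id>")
	fmt.Printf("claimed ordinal %d => 192.168.37.%d/26\n", ord, 192+ord) // ordinal 7 => .199
}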
Aug 13 00:21:59.456775 containerd[2138]: 2025-08-13 00:21:59.339 [INFO][5891] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.199/26] IPv6=[] ContainerID="4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" HandleID="k8s-pod-network.4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" Workload="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" Aug 13 00:21:59.460264 containerd[2138]: 2025-08-13 00:21:59.374 [INFO][5835] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" Namespace="calico-system" Pod="calico-kube-controllers-764ff7f5f7-s9b5s" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0", GenerateName:"calico-kube-controllers-764ff7f5f7-", Namespace:"calico-system", SelfLink:"", UID:"0bdee9ba-0652-4e5a-aa31-e915cc90ffb9", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764ff7f5f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"", Pod:"calico-kube-controllers-764ff7f5f7-s9b5s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c8d63b9505", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:59.460264 containerd[2138]: 2025-08-13 00:21:59.382 [INFO][5835] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.199/32] ContainerID="4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" Namespace="calico-system" Pod="calico-kube-controllers-764ff7f5f7-s9b5s" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" Aug 13 00:21:59.460264 containerd[2138]: 2025-08-13 00:21:59.384 [INFO][5835] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c8d63b9505 ContainerID="4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" Namespace="calico-system" Pod="calico-kube-controllers-764ff7f5f7-s9b5s" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" Aug 13 00:21:59.460264 containerd[2138]: 2025-08-13 00:21:59.400 [INFO][5835] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" Namespace="calico-system" Pod="calico-kube-controllers-764ff7f5f7-s9b5s" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" Aug 13 00:21:59.460264 containerd[2138]: 
2025-08-13 00:21:59.410 [INFO][5835] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" Namespace="calico-system" Pod="calico-kube-controllers-764ff7f5f7-s9b5s" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0", GenerateName:"calico-kube-controllers-764ff7f5f7-", Namespace:"calico-system", SelfLink:"", UID:"0bdee9ba-0652-4e5a-aa31-e915cc90ffb9", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764ff7f5f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a", Pod:"calico-kube-controllers-764ff7f5f7-s9b5s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c8d63b9505", MAC:"d6:93:e0:1c:3b:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:59.460264 containerd[2138]: 2025-08-13 00:21:59.451 [INFO][5835] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a" Namespace="calico-system" Pod="calico-kube-controllers-764ff7f5f7-s9b5s" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" Aug 13 00:21:59.531192 containerd[2138]: time="2025-08-13T00:21:59.531117202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:59.536514 containerd[2138]: time="2025-08-13T00:21:59.535846930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 13 00:21:59.542433 containerd[2138]: time="2025-08-13T00:21:59.542263462Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:59.553720 containerd[2138]: time="2025-08-13T00:21:59.551155966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:21:59.553720 containerd[2138]: time="2025-08-13T00:21:59.551252266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:21:59.553720 containerd[2138]: time="2025-08-13T00:21:59.551278246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:59.553720 containerd[2138]: time="2025-08-13T00:21:59.551458378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:21:59.565034 containerd[2138]: time="2025-08-13T00:21:59.560375734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:21:59.572163 containerd[2138]: time="2025-08-13T00:21:59.568637746Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 6.105106434s" Aug 13 00:21:59.572163 containerd[2138]: time="2025-08-13T00:21:59.568710418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 13 00:21:59.585953 containerd[2138]: time="2025-08-13T00:21:59.585773770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:21:59.590763 containerd[2138]: time="2025-08-13T00:21:59.590692366Z" level=info msg="CreateContainer within sandbox \"08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:21:59.678700 containerd[2138]: time="2025-08-13T00:21:59.677080606Z" level=info msg="CreateContainer within sandbox \"08d1cbc98d5262be09455dbaaaa340588cbb5dc9ce273a4870404bd9a8e36231\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"11b71aa285a8481e3b672184f4cebb6e53d9898dc2c90c851f9a6a4633e8e135\"" Aug 13 00:21:59.682280 containerd[2138]: time="2025-08-13T00:21:59.682228378Z" level=info msg="StartContainer for \"11b71aa285a8481e3b672184f4cebb6e53d9898dc2c90c851f9a6a4633e8e135\"" Aug 13 00:21:59.917378 containerd[2138]: time="2025-08-13T00:21:59.917323836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764ff7f5f7-s9b5s,Uid:0bdee9ba-0652-4e5a-aa31-e915cc90ffb9,Namespace:calico-system,Attempt:1,} returns sandbox id \"4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a\"" Aug 13 00:22:00.016158 systemd-networkd[1689]: cali2bb88cd7496: Gained IPv6LL Aug 13 00:22:00.044701 systemd-networkd[1689]: cali8b88b6f4780: Link UP Aug 13 00:22:00.046616 containerd[2138]: time="2025-08-13T00:22:00.045974516Z" level=info msg="StartContainer for \"11b71aa285a8481e3b672184f4cebb6e53d9898dc2c90c851f9a6a4633e8e135\" returns successfully" Aug 13 00:22:00.048021 systemd-networkd[1689]: cali8b88b6f4780: Gained carrier Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:21:59.770 [INFO][5937] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0 calico-apiserver-5768c76bdb- calico-apiserver 3cf1659a-34f6-4f08-a0e0-5a806126f297 1057 0 
2025-08-13 00:21:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5768c76bdb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-162 calico-apiserver-5768c76bdb-qb8dv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8b88b6f4780 [] [] }} ContainerID="bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" Namespace="calico-apiserver" Pod="calico-apiserver-5768c76bdb-qb8dv" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-" Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:21:59.772 [INFO][5937] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" Namespace="calico-apiserver" Pod="calico-apiserver-5768c76bdb-qb8dv" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:21:59.930 [INFO][5993] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" HandleID="k8s-pod-network.bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:21:59.930 [INFO][5993] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" HandleID="k8s-pod-network.bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d9b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-162", "pod":"calico-apiserver-5768c76bdb-qb8dv", "timestamp":"2025-08-13 00:21:59.930154992 +0000 UTC"}, Hostname:"ip-172-31-31-162", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:21:59.930 [INFO][5993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:21:59.930 [INFO][5993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
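One regularity worth noting in these endpoint dumps: the Profiles list is derived mechanically from the pod's namespace and service account, one "kns.<namespace>" profile plus one "ksa.<namespace>.<serviceaccount>" profile. A one-function sketch of the convention (the helper name is mine):

package main

import "fmt"

// profilesFor reproduces the profile-naming convention visible in every
// WorkloadEndpoint dump in this log.
func profilesFor(namespace, serviceAccount string) []string {
	return []string{
		"kns." + namespace,
		"ksa." + namespace + "." + serviceAccount,
	}
}

func main() {
	fmt.Println(profilesFor("calico-apiserver", "calico-apiserver"))
	// [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver]
	fmt.Println(profilesFor("calico-system", "calico-kube-controllers"))
	// [kns.calico-system ksa.calico-system.calico-kube-controllers]
}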
Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:21:59.931 [INFO][5993] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-162' Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:21:59.947 [INFO][5993] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" host="ip-172-31-31-162" Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:21:59.957 [INFO][5993] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-31-162" Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:21:59.971 [INFO][5993] ipam/ipam.go 511: Trying affinity for 192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:21:59.977 [INFO][5993] ipam/ipam.go 158: Attempting to load block cidr=192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:21:59.982 [INFO][5993] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="ip-172-31-31-162" Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:21:59.982 [INFO][5993] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" host="ip-172-31-31-162" Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:21:59.988 [INFO][5993] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6 Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:21:59.996 [INFO][5993] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" host="ip-172-31-31-162" Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:22:00.022 [INFO][5993] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.37.200/26] block=192.168.37.192/26 handle="k8s-pod-network.bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" host="ip-172-31-31-162" Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:22:00.022 [INFO][5993] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.37.200/26] handle="k8s-pod-network.bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" host="ip-172-31-31-162" Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:22:00.023 [INFO][5993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
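The same /26 hands out .199 to the kube-controllers pod and then .200 to the second apiserver pod: a /26 spans 64 addresses, and each address's ordinal is simply its offset from the block base. A small check with the Go standard library (nothing Calico-specific assumed):

package main

import (
	"fmt"
	"net/netip"
)

// ordinal returns an IPv4 address's offset from the base of its block;
// for a /26 the whole block fits inside the last octet.
func ordinal(blk netip.Prefix, addr netip.Addr) int {
	base := blk.Masked().Addr().As4()
	a := addr.As4()
	return int(a[3]) - int(base[3])
}

func main() {
	blk := netip.MustParsePrefix("192.168.37.192/26") // 64 addresses: .192-.255
	for _, s := range []string{"192.168.37.198", "192.168.37.199", "192.168.37.200"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s -> ordinal %d of %d\n", s, ordinal(blk, addr), 1<<(32-blk.Bits()))
	}
}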
Aug 13 00:22:00.093856 containerd[2138]: 2025-08-13 00:22:00.023 [INFO][5993] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.200/26] IPv6=[] ContainerID="bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" HandleID="k8s-pod-network.bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" Aug 13 00:22:00.096157 containerd[2138]: 2025-08-13 00:22:00.034 [INFO][5937] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" Namespace="calico-apiserver" Pod="calico-apiserver-5768c76bdb-qb8dv" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0", GenerateName:"calico-apiserver-5768c76bdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cf1659a-34f6-4f08-a0e0-5a806126f297", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5768c76bdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"", Pod:"calico-apiserver-5768c76bdb-qb8dv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b88b6f4780", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:22:00.096157 containerd[2138]: 2025-08-13 00:22:00.035 [INFO][5937] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.200/32] ContainerID="bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" Namespace="calico-apiserver" Pod="calico-apiserver-5768c76bdb-qb8dv" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" Aug 13 00:22:00.096157 containerd[2138]: 2025-08-13 00:22:00.035 [INFO][5937] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b88b6f4780 ContainerID="bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" Namespace="calico-apiserver" Pod="calico-apiserver-5768c76bdb-qb8dv" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" Aug 13 00:22:00.096157 containerd[2138]: 2025-08-13 00:22:00.052 [INFO][5937] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" Namespace="calico-apiserver" Pod="calico-apiserver-5768c76bdb-qb8dv" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" Aug 13 00:22:00.096157 containerd[2138]: 2025-08-13 00:22:00.056 [INFO][5937] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" Namespace="calico-apiserver" Pod="calico-apiserver-5768c76bdb-qb8dv" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0", GenerateName:"calico-apiserver-5768c76bdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cf1659a-34f6-4f08-a0e0-5a806126f297", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5768c76bdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6", Pod:"calico-apiserver-5768c76bdb-qb8dv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b88b6f4780", MAC:"b2:20:e0:bf:c1:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:22:00.096157 containerd[2138]: 2025-08-13 00:22:00.082 [INFO][5937] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6" Namespace="calico-apiserver" Pod="calico-apiserver-5768c76bdb-qb8dv" WorkloadEndpoint="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" Aug 13 00:22:00.166668 containerd[2138]: time="2025-08-13T00:22:00.166090113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:22:00.166668 containerd[2138]: time="2025-08-13T00:22:00.166248129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:22:00.166668 containerd[2138]: time="2025-08-13T00:22:00.166287513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:00.168532 containerd[2138]: time="2025-08-13T00:22:00.168353013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:22:00.291337 containerd[2138]: time="2025-08-13T00:22:00.291159669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5768c76bdb-qb8dv,Uid:3cf1659a-34f6-4f08-a0e0-5a806126f297,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6\"" Aug 13 00:22:00.460472 systemd-networkd[1689]: cali7c8d63b9505: Gained IPv6LL Aug 13 00:22:01.846567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2881479433.mount: Deactivated successfully. Aug 13 00:22:01.932976 systemd-networkd[1689]: cali8b88b6f4780: Gained IPv6LL Aug 13 00:22:02.229692 systemd[1]: Started sshd@8-172.31.31.162:22-139.178.89.65:59490.service - OpenSSH per-connection server daemon (139.178.89.65:59490). Aug 13 00:22:02.442469 sshd[6090]: Accepted publickey for core from 139.178.89.65 port 59490 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:22:02.449316 sshd[6090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:02.464333 systemd-logind[2103]: New session 9 of user core. Aug 13 00:22:02.473850 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:22:02.595078 containerd[2138]: time="2025-08-13T00:22:02.594267313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:02.600574 containerd[2138]: time="2025-08-13T00:22:02.600247705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 13 00:22:02.601297 containerd[2138]: time="2025-08-13T00:22:02.601216057Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:02.608635 containerd[2138]: time="2025-08-13T00:22:02.608545429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:02.611065 containerd[2138]: time="2025-08-13T00:22:02.610438489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 3.024596415s" Aug 13 00:22:02.611065 containerd[2138]: time="2025-08-13T00:22:02.610504609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 13 00:22:02.615657 containerd[2138]: time="2025-08-13T00:22:02.615580321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:22:02.622157 containerd[2138]: time="2025-08-13T00:22:02.622079137Z" level=info msg="CreateContainer within sandbox \"50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:22:02.656727 containerd[2138]: time="2025-08-13T00:22:02.655917061Z" level=info msg="CreateContainer within sandbox \"50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce\" for 
&ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b715044dc26d0b26228f2f878bb9ad29042dfcddc565fa923da7230e8bfd904d\"" Aug 13 00:22:02.660070 containerd[2138]: time="2025-08-13T00:22:02.658253761Z" level=info msg="StartContainer for \"b715044dc26d0b26228f2f878bb9ad29042dfcddc565fa923da7230e8bfd904d\"" Aug 13 00:22:02.842441 sshd[6090]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:02.851415 systemd[1]: sshd@8-172.31.31.162:22-139.178.89.65:59490.service: Deactivated successfully. Aug 13 00:22:02.866222 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:22:02.870962 systemd-logind[2103]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:22:02.875886 systemd-logind[2103]: Removed session 9. Aug 13 00:22:02.894290 containerd[2138]: time="2025-08-13T00:22:02.893669966Z" level=info msg="StartContainer for \"b715044dc26d0b26228f2f878bb9ad29042dfcddc565fa923da7230e8bfd904d\" returns successfully" Aug 13 00:22:03.082553 kubelet[3661]: I0813 00:22:03.082435 3661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5f574c55d7-tswpg" podStartSLOduration=5.197988448 podStartE2EDuration="13.082414055s" podCreationTimestamp="2025-08-13 00:21:50 +0000 UTC" firstStartedPulling="2025-08-13 00:21:51.699698607 +0000 UTC m=+48.707582115" lastFinishedPulling="2025-08-13 00:21:59.584124226 +0000 UTC m=+56.592007722" observedRunningTime="2025-08-13 00:22:01.11110207 +0000 UTC m=+58.118985674" watchObservedRunningTime="2025-08-13 00:22:03.082414055 +0000 UTC m=+60.090297563" Aug 13 00:22:03.284955 containerd[2138]: time="2025-08-13T00:22:03.284645604Z" level=info msg="StopPodSandbox for \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\"" Aug 13 00:22:03.654539 containerd[2138]: 2025-08-13 00:22:03.488 [WARNING][6159] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" WorkloadEndpoint="ip--172--31--31--162-k8s-whisker--84dd9bcd4f--x6wsv-eth0" Aug 13 00:22:03.654539 containerd[2138]: 2025-08-13 00:22:03.503 [INFO][6159] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Aug 13 00:22:03.654539 containerd[2138]: 2025-08-13 00:22:03.503 [INFO][6159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" iface="eth0" netns="" Aug 13 00:22:03.654539 containerd[2138]: 2025-08-13 00:22:03.503 [INFO][6159] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Aug 13 00:22:03.654539 containerd[2138]: 2025-08-13 00:22:03.504 [INFO][6159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Aug 13 00:22:03.654539 containerd[2138]: 2025-08-13 00:22:03.626 [INFO][6181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" HandleID="k8s-pod-network.37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Workload="ip--172--31--31--162-k8s-whisker--84dd9bcd4f--x6wsv-eth0" Aug 13 00:22:03.654539 containerd[2138]: 2025-08-13 00:22:03.627 [INFO][6181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 00:22:03.654539 containerd[2138]: 2025-08-13 00:22:03.627 [INFO][6181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:22:03.654539 containerd[2138]: 2025-08-13 00:22:03.642 [WARNING][6181] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" HandleID="k8s-pod-network.37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Workload="ip--172--31--31--162-k8s-whisker--84dd9bcd4f--x6wsv-eth0" Aug 13 00:22:03.654539 containerd[2138]: 2025-08-13 00:22:03.642 [INFO][6181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" HandleID="k8s-pod-network.37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Workload="ip--172--31--31--162-k8s-whisker--84dd9bcd4f--x6wsv-eth0" Aug 13 00:22:03.654539 containerd[2138]: 2025-08-13 00:22:03.646 [INFO][6181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:22:03.654539 containerd[2138]: 2025-08-13 00:22:03.650 [INFO][6159] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Aug 13 00:22:03.655956 containerd[2138]: time="2025-08-13T00:22:03.654638786Z" level=info msg="TearDown network for sandbox \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\" successfully" Aug 13 00:22:03.655956 containerd[2138]: time="2025-08-13T00:22:03.654677930Z" level=info msg="StopPodSandbox for \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\" returns successfully" Aug 13 00:22:03.659729 containerd[2138]: time="2025-08-13T00:22:03.659648714Z" level=info msg="RemovePodSandbox for \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\"" Aug 13 00:22:03.659901 containerd[2138]: time="2025-08-13T00:22:03.659746970Z" level=info msg="Forcibly stopping sandbox \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\"" Aug 13 00:22:03.736311 kubelet[3661]: I0813 00:22:03.735035 3661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-zs6cp" podStartSLOduration=23.385130095 podStartE2EDuration="30.734938683s" podCreationTimestamp="2025-08-13 00:21:33 +0000 UTC" firstStartedPulling="2025-08-13 00:21:55.264566813 +0000 UTC m=+52.272450309" lastFinishedPulling="2025-08-13 00:22:02.614375305 +0000 UTC m=+59.622258897" observedRunningTime="2025-08-13 00:22:03.085410551 +0000 UTC m=+60.093294083" watchObservedRunningTime="2025-08-13 00:22:03.734938683 +0000 UTC m=+60.742822299" Aug 13 00:22:03.829504 containerd[2138]: 2025-08-13 00:22:03.770 [WARNING][6206] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" WorkloadEndpoint="ip--172--31--31--162-k8s-whisker--84dd9bcd4f--x6wsv-eth0" Aug 13 00:22:03.829504 containerd[2138]: 2025-08-13 00:22:03.770 [INFO][6206] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Aug 13 00:22:03.829504 containerd[2138]: 2025-08-13 00:22:03.770 [INFO][6206] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" iface="eth0" netns="" Aug 13 00:22:03.829504 containerd[2138]: 2025-08-13 00:22:03.770 [INFO][6206] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Aug 13 00:22:03.829504 containerd[2138]: 2025-08-13 00:22:03.770 [INFO][6206] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Aug 13 00:22:03.829504 containerd[2138]: 2025-08-13 00:22:03.808 [INFO][6214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" HandleID="k8s-pod-network.37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Workload="ip--172--31--31--162-k8s-whisker--84dd9bcd4f--x6wsv-eth0" Aug 13 00:22:03.829504 containerd[2138]: 2025-08-13 00:22:03.808 [INFO][6214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:22:03.829504 containerd[2138]: 2025-08-13 00:22:03.808 [INFO][6214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:22:03.829504 containerd[2138]: 2025-08-13 00:22:03.821 [WARNING][6214] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" HandleID="k8s-pod-network.37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Workload="ip--172--31--31--162-k8s-whisker--84dd9bcd4f--x6wsv-eth0" Aug 13 00:22:03.829504 containerd[2138]: 2025-08-13 00:22:03.821 [INFO][6214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" HandleID="k8s-pod-network.37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Workload="ip--172--31--31--162-k8s-whisker--84dd9bcd4f--x6wsv-eth0" Aug 13 00:22:03.829504 containerd[2138]: 2025-08-13 00:22:03.824 [INFO][6214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:22:03.829504 containerd[2138]: 2025-08-13 00:22:03.826 [INFO][6206] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa" Aug 13 00:22:03.830270 containerd[2138]: time="2025-08-13T00:22:03.829674759Z" level=info msg="TearDown network for sandbox \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\" successfully" Aug 13 00:22:03.838234 containerd[2138]: time="2025-08-13T00:22:03.838153371Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:22:03.838412 containerd[2138]: time="2025-08-13T00:22:03.838272135Z" level=info msg="RemovePodSandbox \"37154a737cf0cdeb598e1bf483160a86c31868dce9477faedd3c932a38c416fa\" returns successfully" Aug 13 00:22:03.839200 containerd[2138]: time="2025-08-13T00:22:03.839077767Z" level=info msg="StopPodSandbox for \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\"" Aug 13 00:22:03.984908 containerd[2138]: 2025-08-13 00:22:03.904 [WARNING][6228] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0", GenerateName:"calico-apiserver-5768c76bdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"24486bb1-f01d-44a5-bd10-5c11bbdaf03f", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5768c76bdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b", Pod:"calico-apiserver-5768c76bdb-97r48", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2bb88cd7496", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:22:03.984908 containerd[2138]: 2025-08-13 00:22:03.905 [INFO][6228] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Aug 13 00:22:03.984908 containerd[2138]: 2025-08-13 00:22:03.905 [INFO][6228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" iface="eth0" netns="" Aug 13 00:22:03.984908 containerd[2138]: 2025-08-13 00:22:03.905 [INFO][6228] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Aug 13 00:22:03.984908 containerd[2138]: 2025-08-13 00:22:03.905 [INFO][6228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Aug 13 00:22:03.984908 containerd[2138]: 2025-08-13 00:22:03.954 [INFO][6235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" HandleID="k8s-pod-network.9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" Aug 13 00:22:03.984908 containerd[2138]: 2025-08-13 00:22:03.954 [INFO][6235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:22:03.984908 containerd[2138]: 2025-08-13 00:22:03.954 [INFO][6235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:22:03.984908 containerd[2138]: 2025-08-13 00:22:03.972 [WARNING][6235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" HandleID="k8s-pod-network.9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" Aug 13 00:22:03.984908 containerd[2138]: 2025-08-13 00:22:03.973 [INFO][6235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" HandleID="k8s-pod-network.9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" Aug 13 00:22:03.984908 containerd[2138]: 2025-08-13 00:22:03.975 [INFO][6235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:22:03.984908 containerd[2138]: 2025-08-13 00:22:03.978 [INFO][6228] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Aug 13 00:22:03.984908 containerd[2138]: time="2025-08-13T00:22:03.983096812Z" level=info msg="TearDown network for sandbox \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\" successfully" Aug 13 00:22:03.984908 containerd[2138]: time="2025-08-13T00:22:03.983134816Z" level=info msg="StopPodSandbox for \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\" returns successfully" Aug 13 00:22:03.986239 containerd[2138]: time="2025-08-13T00:22:03.985615792Z" level=info msg="RemovePodSandbox for \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\"" Aug 13 00:22:03.986239 containerd[2138]: time="2025-08-13T00:22:03.986097340Z" level=info msg="Forcibly stopping sandbox \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\"" Aug 13 00:22:04.321231 containerd[2138]: 2025-08-13 00:22:04.072 [WARNING][6253] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0", GenerateName:"calico-apiserver-5768c76bdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"24486bb1-f01d-44a5-bd10-5c11bbdaf03f", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5768c76bdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b", Pod:"calico-apiserver-5768c76bdb-97r48", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2bb88cd7496", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:22:04.321231 containerd[2138]: 2025-08-13 00:22:04.072 [INFO][6253] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Aug 13 00:22:04.321231 containerd[2138]: 2025-08-13 00:22:04.072 [INFO][6253] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" iface="eth0" netns="" Aug 13 00:22:04.321231 containerd[2138]: 2025-08-13 00:22:04.072 [INFO][6253] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Aug 13 00:22:04.321231 containerd[2138]: 2025-08-13 00:22:04.072 [INFO][6253] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Aug 13 00:22:04.321231 containerd[2138]: 2025-08-13 00:22:04.264 [INFO][6260] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" HandleID="k8s-pod-network.9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" Aug 13 00:22:04.321231 containerd[2138]: 2025-08-13 00:22:04.265 [INFO][6260] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:22:04.321231 containerd[2138]: 2025-08-13 00:22:04.266 [INFO][6260] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:22:04.321231 containerd[2138]: 2025-08-13 00:22:04.297 [WARNING][6260] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" HandleID="k8s-pod-network.9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" Aug 13 00:22:04.321231 containerd[2138]: 2025-08-13 00:22:04.299 [INFO][6260] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" HandleID="k8s-pod-network.9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--97r48-eth0" Aug 13 00:22:04.321231 containerd[2138]: 2025-08-13 00:22:04.307 [INFO][6260] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:22:04.321231 containerd[2138]: 2025-08-13 00:22:04.317 [INFO][6253] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3" Aug 13 00:22:04.321231 containerd[2138]: time="2025-08-13T00:22:04.320898733Z" level=info msg="TearDown network for sandbox \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\" successfully" Aug 13 00:22:04.343861 containerd[2138]: time="2025-08-13T00:22:04.343534826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:22:04.343861 containerd[2138]: time="2025-08-13T00:22:04.343682666Z" level=info msg="RemovePodSandbox \"9308a0e43be2c6812247140e1120eade57289177a5c1d0a42485c072f7b1fbe3\" returns successfully" Aug 13 00:22:04.352426 containerd[2138]: time="2025-08-13T00:22:04.349605302Z" level=info msg="StopPodSandbox for \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\"" Aug 13 00:22:04.425111 ntpd[2086]: Listen normally on 6 vxlan.calico 192.168.37.192:123 Aug 13 00:22:04.435402 ntpd[2086]: 13 Aug 00:22:04 ntpd[2086]: Listen normally on 6 vxlan.calico 192.168.37.192:123 Aug 13 00:22:04.435402 ntpd[2086]: 13 Aug 00:22:04 ntpd[2086]: Listen normally on 7 calid7f7c877ba2 [fe80::ecee:eeff:feee:eeee%4]:123 Aug 13 00:22:04.435402 ntpd[2086]: 13 Aug 00:22:04 ntpd[2086]: Listen normally on 8 vxlan.calico [fe80::6464:1ff:fe7c:3f63%5]:123 Aug 13 00:22:04.435402 ntpd[2086]: 13 Aug 00:22:04 ntpd[2086]: Listen normally on 9 cali6d700e8764b [fe80::ecee:eeff:feee:eeee%8]:123 Aug 13 00:22:04.435402 ntpd[2086]: 13 Aug 00:22:04 ntpd[2086]: Listen normally on 10 cali93f6d79960c [fe80::ecee:eeff:feee:eeee%9]:123 Aug 13 00:22:04.435402 ntpd[2086]: 13 Aug 00:22:04 ntpd[2086]: Listen normally on 11 cali44f7845f4c4 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 13 00:22:04.435402 ntpd[2086]: 13 Aug 00:22:04 ntpd[2086]: Listen normally on 12 calie46fb86b5a4 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 13 00:22:04.435402 ntpd[2086]: 13 Aug 00:22:04 ntpd[2086]: Listen normally on 13 cali2bb88cd7496 [fe80::ecee:eeff:feee:eeee%12]:123 Aug 13 00:22:04.435402 ntpd[2086]: 13 Aug 00:22:04 ntpd[2086]: Listen normally on 14 cali7c8d63b9505 [fe80::ecee:eeff:feee:eeee%13]:123 Aug 13 00:22:04.435402 ntpd[2086]: 13 Aug 00:22:04 ntpd[2086]: Listen normally on 15 cali8b88b6f4780 [fe80::ecee:eeff:feee:eeee%14]:123 Aug 13 00:22:04.425246 ntpd[2086]: Listen normally on 7 calid7f7c877ba2 [fe80::ecee:eeff:feee:eeee%4]:123 Aug 13 00:22:04.425328 ntpd[2086]: Listen normally on 8 vxlan.calico [fe80::6464:1ff:fe7c:3f63%5]:123 Aug 13 
00:22:04.425397 ntpd[2086]: Listen normally on 9 cali6d700e8764b [fe80::ecee:eeff:feee:eeee%8]:123 Aug 13 00:22:04.425465 ntpd[2086]: Listen normally on 10 cali93f6d79960c [fe80::ecee:eeff:feee:eeee%9]:123 Aug 13 00:22:04.426091 ntpd[2086]: Listen normally on 11 cali44f7845f4c4 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 13 00:22:04.426196 ntpd[2086]: Listen normally on 12 calie46fb86b5a4 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 13 00:22:04.426275 ntpd[2086]: Listen normally on 13 cali2bb88cd7496 [fe80::ecee:eeff:feee:eeee%12]:123 Aug 13 00:22:04.426344 ntpd[2086]: Listen normally on 14 cali7c8d63b9505 [fe80::ecee:eeff:feee:eeee%13]:123 Aug 13 00:22:04.426417 ntpd[2086]: Listen normally on 15 cali8b88b6f4780 [fe80::ecee:eeff:feee:eeee%14]:123 Aug 13 00:22:04.567642 containerd[2138]: time="2025-08-13T00:22:04.567585915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:04.571536 containerd[2138]: time="2025-08-13T00:22:04.570974655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 13 00:22:04.574066 containerd[2138]: time="2025-08-13T00:22:04.573810963Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:04.584866 containerd[2138]: time="2025-08-13T00:22:04.584802507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:04.586288 containerd[2138]: time="2025-08-13T00:22:04.586232523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.970589238s" Aug 13 00:22:04.586545 containerd[2138]: time="2025-08-13T00:22:04.586513287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 13 00:22:04.591436 containerd[2138]: time="2025-08-13T00:22:04.590870991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:22:04.593335 containerd[2138]: time="2025-08-13T00:22:04.593200551Z" level=info msg="CreateContainer within sandbox \"9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:22:04.636698 containerd[2138]: time="2025-08-13T00:22:04.636344151Z" level=info msg="CreateContainer within sandbox \"9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9984cf277d6bb0c1187313b2bb1effc5f4caa2c2c835ba4fdccc7c2b66189aa8\"" Aug 13 00:22:04.640740 containerd[2138]: time="2025-08-13T00:22:04.640674387Z" level=info msg="StartContainer for \"9984cf277d6bb0c1187313b2bb1effc5f4caa2c2c835ba4fdccc7c2b66189aa8\"" Aug 13 00:22:04.644839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3059014778.mount: Deactivated successfully. 
Aug 13 00:22:04.703566 containerd[2138]: 2025-08-13 00:22:04.581 [WARNING][6295] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0", GenerateName:"calico-kube-controllers-764ff7f5f7-", Namespace:"calico-system", SelfLink:"", UID:"0bdee9ba-0652-4e5a-aa31-e915cc90ffb9", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764ff7f5f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a", Pod:"calico-kube-controllers-764ff7f5f7-s9b5s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c8d63b9505", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:22:04.703566 containerd[2138]: 2025-08-13 00:22:04.582 [INFO][6295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Aug 13 00:22:04.703566 containerd[2138]: 2025-08-13 00:22:04.582 [INFO][6295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" iface="eth0" netns="" Aug 13 00:22:04.703566 containerd[2138]: 2025-08-13 00:22:04.582 [INFO][6295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Aug 13 00:22:04.703566 containerd[2138]: 2025-08-13 00:22:04.582 [INFO][6295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Aug 13 00:22:04.703566 containerd[2138]: 2025-08-13 00:22:04.665 [INFO][6304] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" HandleID="k8s-pod-network.e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Workload="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" Aug 13 00:22:04.703566 containerd[2138]: 2025-08-13 00:22:04.665 [INFO][6304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:22:04.703566 containerd[2138]: 2025-08-13 00:22:04.665 [INFO][6304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:22:04.703566 containerd[2138]: 2025-08-13 00:22:04.688 [WARNING][6304] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" HandleID="k8s-pod-network.e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Workload="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" Aug 13 00:22:04.703566 containerd[2138]: 2025-08-13 00:22:04.690 [INFO][6304] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" HandleID="k8s-pod-network.e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Workload="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" Aug 13 00:22:04.703566 containerd[2138]: 2025-08-13 00:22:04.697 [INFO][6304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:22:04.703566 containerd[2138]: 2025-08-13 00:22:04.700 [INFO][6295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Aug 13 00:22:04.706151 containerd[2138]: time="2025-08-13T00:22:04.703604487Z" level=info msg="TearDown network for sandbox \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\" successfully" Aug 13 00:22:04.706151 containerd[2138]: time="2025-08-13T00:22:04.703642335Z" level=info msg="StopPodSandbox for \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\" returns successfully" Aug 13 00:22:04.706151 containerd[2138]: time="2025-08-13T00:22:04.705500991Z" level=info msg="RemovePodSandbox for \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\"" Aug 13 00:22:04.706151 containerd[2138]: time="2025-08-13T00:22:04.705557559Z" level=info msg="Forcibly stopping sandbox \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\"" Aug 13 00:22:04.804373 containerd[2138]: time="2025-08-13T00:22:04.804159124Z" level=info msg="StartContainer for \"9984cf277d6bb0c1187313b2bb1effc5f4caa2c2c835ba4fdccc7c2b66189aa8\" returns successfully" Aug 13 00:22:04.884441 containerd[2138]: 2025-08-13 00:22:04.815 [WARNING][6337] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0", GenerateName:"calico-kube-controllers-764ff7f5f7-", Namespace:"calico-system", SelfLink:"", UID:"0bdee9ba-0652-4e5a-aa31-e915cc90ffb9", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764ff7f5f7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a", Pod:"calico-kube-controllers-764ff7f5f7-s9b5s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c8d63b9505", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:22:04.884441 containerd[2138]: 2025-08-13 00:22:04.816 [INFO][6337] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Aug 13 00:22:04.884441 containerd[2138]: 2025-08-13 00:22:04.816 [INFO][6337] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" iface="eth0" netns="" Aug 13 00:22:04.884441 containerd[2138]: 2025-08-13 00:22:04.816 [INFO][6337] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Aug 13 00:22:04.884441 containerd[2138]: 2025-08-13 00:22:04.816 [INFO][6337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Aug 13 00:22:04.884441 containerd[2138]: 2025-08-13 00:22:04.860 [INFO][6360] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" HandleID="k8s-pod-network.e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Workload="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" Aug 13 00:22:04.884441 containerd[2138]: 2025-08-13 00:22:04.860 [INFO][6360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:22:04.884441 containerd[2138]: 2025-08-13 00:22:04.860 [INFO][6360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:22:04.884441 containerd[2138]: 2025-08-13 00:22:04.872 [WARNING][6360] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" HandleID="k8s-pod-network.e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Workload="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" Aug 13 00:22:04.884441 containerd[2138]: 2025-08-13 00:22:04.872 [INFO][6360] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" HandleID="k8s-pod-network.e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Workload="ip--172--31--31--162-k8s-calico--kube--controllers--764ff7f5f7--s9b5s-eth0" Aug 13 00:22:04.884441 containerd[2138]: 2025-08-13 00:22:04.875 [INFO][6360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:22:04.884441 containerd[2138]: 2025-08-13 00:22:04.879 [INFO][6337] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79" Aug 13 00:22:04.884441 containerd[2138]: time="2025-08-13T00:22:04.884187724Z" level=info msg="TearDown network for sandbox \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\" successfully" Aug 13 00:22:04.900179 containerd[2138]: time="2025-08-13T00:22:04.900079876Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:22:04.900339 containerd[2138]: time="2025-08-13T00:22:04.900195340Z" level=info msg="RemovePodSandbox \"e274bc539942e945e2cf5c23d49cb3062275c90b535833f4d8c75b41e4028a79\" returns successfully" Aug 13 00:22:04.901205 containerd[2138]: time="2025-08-13T00:22:04.901146964Z" level=info msg="StopPodSandbox for \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\"" Aug 13 00:22:05.036575 containerd[2138]: 2025-08-13 00:22:04.961 [WARNING][6374] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"aa0e8597-87b1-46a6-b15d-ea2b84ced854", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2", Pod:"coredns-7c65d6cfc9-gggpz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44f7845f4c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:22:05.036575 containerd[2138]: 2025-08-13 00:22:04.962 [INFO][6374] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Aug 13 00:22:05.036575 containerd[2138]: 2025-08-13 00:22:04.962 [INFO][6374] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" iface="eth0" netns="" Aug 13 00:22:05.036575 containerd[2138]: 2025-08-13 00:22:04.962 [INFO][6374] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Aug 13 00:22:05.036575 containerd[2138]: 2025-08-13 00:22:04.962 [INFO][6374] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Aug 13 00:22:05.036575 containerd[2138]: 2025-08-13 00:22:05.011 [INFO][6381] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" HandleID="k8s-pod-network.1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" Aug 13 00:22:05.036575 containerd[2138]: 2025-08-13 00:22:05.011 [INFO][6381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:22:05.036575 containerd[2138]: 2025-08-13 00:22:05.011 [INFO][6381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:22:05.036575 containerd[2138]: 2025-08-13 00:22:05.025 [WARNING][6381] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" HandleID="k8s-pod-network.1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" Aug 13 00:22:05.036575 containerd[2138]: 2025-08-13 00:22:05.025 [INFO][6381] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" HandleID="k8s-pod-network.1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" Aug 13 00:22:05.036575 containerd[2138]: 2025-08-13 00:22:05.031 [INFO][6381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:22:05.036575 containerd[2138]: 2025-08-13 00:22:05.033 [INFO][6374] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Aug 13 00:22:05.037597 containerd[2138]: time="2025-08-13T00:22:05.036679777Z" level=info msg="TearDown network for sandbox \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\" successfully" Aug 13 00:22:05.037597 containerd[2138]: time="2025-08-13T00:22:05.036719029Z" level=info msg="StopPodSandbox for \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\" returns successfully" Aug 13 00:22:05.037597 containerd[2138]: time="2025-08-13T00:22:05.037422385Z" level=info msg="RemovePodSandbox for \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\"" Aug 13 00:22:05.037597 containerd[2138]: time="2025-08-13T00:22:05.037467265Z" level=info msg="Forcibly stopping sandbox \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\"" Aug 13 00:22:05.211107 containerd[2138]: 2025-08-13 00:22:05.111 [WARNING][6395] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"aa0e8597-87b1-46a6-b15d-ea2b84ced854", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"1ec66feba418ba7a8b6a062ed070487e19f710bf4f56c645a384bd5119f84bd2", Pod:"coredns-7c65d6cfc9-gggpz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali44f7845f4c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:22:05.211107 containerd[2138]: 2025-08-13 00:22:05.111 [INFO][6395] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Aug 13 00:22:05.211107 containerd[2138]: 2025-08-13 00:22:05.111 [INFO][6395] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" iface="eth0" netns="" Aug 13 00:22:05.211107 containerd[2138]: 2025-08-13 00:22:05.111 [INFO][6395] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Aug 13 00:22:05.211107 containerd[2138]: 2025-08-13 00:22:05.111 [INFO][6395] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Aug 13 00:22:05.211107 containerd[2138]: 2025-08-13 00:22:05.183 [INFO][6408] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" HandleID="k8s-pod-network.1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" Aug 13 00:22:05.211107 containerd[2138]: 2025-08-13 00:22:05.183 [INFO][6408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:22:05.211107 containerd[2138]: 2025-08-13 00:22:05.183 [INFO][6408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:22:05.211107 containerd[2138]: 2025-08-13 00:22:05.199 [WARNING][6408] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" HandleID="k8s-pod-network.1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" Aug 13 00:22:05.211107 containerd[2138]: 2025-08-13 00:22:05.199 [INFO][6408] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" HandleID="k8s-pod-network.1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--gggpz-eth0" Aug 13 00:22:05.211107 containerd[2138]: 2025-08-13 00:22:05.202 [INFO][6408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:22:05.211107 containerd[2138]: 2025-08-13 00:22:05.205 [INFO][6395] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c" Aug 13 00:22:05.211107 containerd[2138]: time="2025-08-13T00:22:05.210538418Z" level=info msg="TearDown network for sandbox \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\" successfully" Aug 13 00:22:05.231563 containerd[2138]: time="2025-08-13T00:22:05.229121150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:22:05.231563 containerd[2138]: time="2025-08-13T00:22:05.229234418Z" level=info msg="RemovePodSandbox \"1a896147271d20aff95b141233a633d55ca7edef8e4fcb49f5af983d58827f1c\" returns successfully" Aug 13 00:22:05.231563 containerd[2138]: time="2025-08-13T00:22:05.230704022Z" level=info msg="StopPodSandbox for \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\"" Aug 13 00:22:05.391185 containerd[2138]: 2025-08-13 00:22:05.316 [WARNING][6436] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0", GenerateName:"calico-apiserver-5768c76bdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cf1659a-34f6-4f08-a0e0-5a806126f297", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5768c76bdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6", Pod:"calico-apiserver-5768c76bdb-qb8dv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b88b6f4780", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:22:05.391185 containerd[2138]: 2025-08-13 00:22:05.317 [INFO][6436] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Aug 13 00:22:05.391185 containerd[2138]: 2025-08-13 00:22:05.317 [INFO][6436] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" iface="eth0" netns="" Aug 13 00:22:05.391185 containerd[2138]: 2025-08-13 00:22:05.317 [INFO][6436] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Aug 13 00:22:05.391185 containerd[2138]: 2025-08-13 00:22:05.317 [INFO][6436] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Aug 13 00:22:05.391185 containerd[2138]: 2025-08-13 00:22:05.368 [INFO][6445] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" HandleID="k8s-pod-network.2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" Aug 13 00:22:05.391185 containerd[2138]: 2025-08-13 00:22:05.369 [INFO][6445] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:22:05.391185 containerd[2138]: 2025-08-13 00:22:05.369 [INFO][6445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:22:05.391185 containerd[2138]: 2025-08-13 00:22:05.382 [WARNING][6445] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" HandleID="k8s-pod-network.2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" Aug 13 00:22:05.391185 containerd[2138]: 2025-08-13 00:22:05.382 [INFO][6445] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" HandleID="k8s-pod-network.2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" Aug 13 00:22:05.391185 containerd[2138]: 2025-08-13 00:22:05.385 [INFO][6445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:22:05.391185 containerd[2138]: 2025-08-13 00:22:05.388 [INFO][6436] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Aug 13 00:22:05.393067 containerd[2138]: time="2025-08-13T00:22:05.391246539Z" level=info msg="TearDown network for sandbox \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\" successfully" Aug 13 00:22:05.393067 containerd[2138]: time="2025-08-13T00:22:05.391310619Z" level=info msg="StopPodSandbox for \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\" returns successfully" Aug 13 00:22:05.393067 containerd[2138]: time="2025-08-13T00:22:05.392118111Z" level=info msg="RemovePodSandbox for \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\"" Aug 13 00:22:05.393067 containerd[2138]: time="2025-08-13T00:22:05.392167815Z" level=info msg="Forcibly stopping sandbox \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\"" Aug 13 00:22:05.535071 containerd[2138]: 2025-08-13 00:22:05.476 [WARNING][6460] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0", GenerateName:"calico-apiserver-5768c76bdb-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cf1659a-34f6-4f08-a0e0-5a806126f297", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5768c76bdb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6", Pod:"calico-apiserver-5768c76bdb-qb8dv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8b88b6f4780", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:22:05.535071 containerd[2138]: 2025-08-13 00:22:05.477 [INFO][6460] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Aug 13 00:22:05.535071 containerd[2138]: 2025-08-13 00:22:05.477 [INFO][6460] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" iface="eth0" netns="" Aug 13 00:22:05.535071 containerd[2138]: 2025-08-13 00:22:05.477 [INFO][6460] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Aug 13 00:22:05.535071 containerd[2138]: 2025-08-13 00:22:05.477 [INFO][6460] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Aug 13 00:22:05.535071 containerd[2138]: 2025-08-13 00:22:05.515 [INFO][6467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" HandleID="k8s-pod-network.2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" Aug 13 00:22:05.535071 containerd[2138]: 2025-08-13 00:22:05.515 [INFO][6467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:22:05.535071 containerd[2138]: 2025-08-13 00:22:05.515 [INFO][6467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:22:05.535071 containerd[2138]: 2025-08-13 00:22:05.527 [WARNING][6467] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" HandleID="k8s-pod-network.2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" Aug 13 00:22:05.535071 containerd[2138]: 2025-08-13 00:22:05.527 [INFO][6467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" HandleID="k8s-pod-network.2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Workload="ip--172--31--31--162-k8s-calico--apiserver--5768c76bdb--qb8dv-eth0" Aug 13 00:22:05.535071 containerd[2138]: 2025-08-13 00:22:05.529 [INFO][6467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:22:05.535071 containerd[2138]: 2025-08-13 00:22:05.532 [INFO][6460] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8" Aug 13 00:22:05.535071 containerd[2138]: time="2025-08-13T00:22:05.534573472Z" level=info msg="TearDown network for sandbox \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\" successfully" Aug 13 00:22:05.541505 containerd[2138]: time="2025-08-13T00:22:05.541422112Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:22:05.541712 containerd[2138]: time="2025-08-13T00:22:05.541525156Z" level=info msg="RemovePodSandbox \"2a3261c150a36a25953ea81ff846ef3d9b6083cd953f6a8dd39f2e5c303fefa8\" returns successfully" Aug 13 00:22:05.542567 containerd[2138]: time="2025-08-13T00:22:05.542129884Z" level=info msg="StopPodSandbox for \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\"" Aug 13 00:22:05.695339 containerd[2138]: 2025-08-13 00:22:05.606 [WARNING][6481] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce", Pod:"goldmane-58fd7646b9-zs6cp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d700e8764b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:22:05.695339 containerd[2138]: 2025-08-13 00:22:05.607 [INFO][6481] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Aug 13 00:22:05.695339 containerd[2138]: 2025-08-13 00:22:05.607 [INFO][6481] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" iface="eth0" netns="" Aug 13 00:22:05.695339 containerd[2138]: 2025-08-13 00:22:05.607 [INFO][6481] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Aug 13 00:22:05.695339 containerd[2138]: 2025-08-13 00:22:05.607 [INFO][6481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Aug 13 00:22:05.695339 containerd[2138]: 2025-08-13 00:22:05.651 [INFO][6488] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" HandleID="k8s-pod-network.eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Workload="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0" Aug 13 00:22:05.695339 containerd[2138]: 2025-08-13 00:22:05.651 [INFO][6488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:22:05.695339 containerd[2138]: 2025-08-13 00:22:05.651 [INFO][6488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:22:05.695339 containerd[2138]: 2025-08-13 00:22:05.675 [WARNING][6488] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" HandleID="k8s-pod-network.eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Workload="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0" Aug 13 00:22:05.695339 containerd[2138]: 2025-08-13 00:22:05.675 [INFO][6488] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" HandleID="k8s-pod-network.eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Workload="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0" Aug 13 00:22:05.695339 containerd[2138]: 2025-08-13 00:22:05.679 [INFO][6488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:22:05.695339 containerd[2138]: 2025-08-13 00:22:05.685 [INFO][6481] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Aug 13 00:22:05.696177 containerd[2138]: time="2025-08-13T00:22:05.695389108Z" level=info msg="TearDown network for sandbox \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\" successfully" Aug 13 00:22:05.696177 containerd[2138]: time="2025-08-13T00:22:05.695425432Z" level=info msg="StopPodSandbox for \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\" returns successfully" Aug 13 00:22:05.696177 containerd[2138]: time="2025-08-13T00:22:05.696087400Z" level=info msg="RemovePodSandbox for \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\"" Aug 13 00:22:05.696322 containerd[2138]: time="2025-08-13T00:22:05.696196624Z" level=info msg="Forcibly stopping sandbox \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\"" Aug 13 00:22:05.923704 containerd[2138]: 2025-08-13 00:22:05.816 [WARNING][6502] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"2b8efc0e-ddd2-44d0-b0dc-3fcdf622b514", ResourceVersion:"1110", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"50f844f72d74062e69aaaf98942e6afac9a217e084e0abf75dae6e1acc6c7fce", Pod:"goldmane-58fd7646b9-zs6cp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d700e8764b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:22:05.923704 containerd[2138]: 2025-08-13 00:22:05.817 [INFO][6502] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Aug 13 00:22:05.923704 containerd[2138]: 2025-08-13 00:22:05.817 [INFO][6502] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" iface="eth0" netns="" Aug 13 00:22:05.923704 containerd[2138]: 2025-08-13 00:22:05.817 [INFO][6502] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Aug 13 00:22:05.923704 containerd[2138]: 2025-08-13 00:22:05.817 [INFO][6502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Aug 13 00:22:05.923704 containerd[2138]: 2025-08-13 00:22:05.890 [INFO][6509] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" HandleID="k8s-pod-network.eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Workload="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0" Aug 13 00:22:05.923704 containerd[2138]: 2025-08-13 00:22:05.891 [INFO][6509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:22:05.923704 containerd[2138]: 2025-08-13 00:22:05.891 [INFO][6509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:22:05.923704 containerd[2138]: 2025-08-13 00:22:05.910 [WARNING][6509] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" HandleID="k8s-pod-network.eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Workload="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0" Aug 13 00:22:05.923704 containerd[2138]: 2025-08-13 00:22:05.913 [INFO][6509] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" HandleID="k8s-pod-network.eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Workload="ip--172--31--31--162-k8s-goldmane--58fd7646b9--zs6cp-eth0" Aug 13 00:22:05.923704 containerd[2138]: 2025-08-13 00:22:05.915 [INFO][6509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:22:05.923704 containerd[2138]: 2025-08-13 00:22:05.919 [INFO][6502] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283" Aug 13 00:22:05.925573 containerd[2138]: time="2025-08-13T00:22:05.924162725Z" level=info msg="TearDown network for sandbox \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\" successfully" Aug 13 00:22:05.954119 containerd[2138]: time="2025-08-13T00:22:05.953100846Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:22:05.954119 containerd[2138]: time="2025-08-13T00:22:05.953205522Z" level=info msg="RemovePodSandbox \"eeca69710011c2bb803dc5302513e32227b8d17d206361e0283a75cb88d05283\" returns successfully" Aug 13 00:22:05.955866 containerd[2138]: time="2025-08-13T00:22:05.955507674Z" level=info msg="StopPodSandbox for \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\"" Aug 13 00:22:06.170845 containerd[2138]: 2025-08-13 00:22:06.057 [WARNING][6528] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a310c1b-b07f-40c9-96ad-53e0942080e1", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b", Pod:"csi-node-driver-4xwwn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie46fb86b5a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:22:06.170845 containerd[2138]: 2025-08-13 00:22:06.057 [INFO][6528] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Aug 13 00:22:06.170845 containerd[2138]: 2025-08-13 00:22:06.057 [INFO][6528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" iface="eth0" netns="" Aug 13 00:22:06.170845 containerd[2138]: 2025-08-13 00:22:06.057 [INFO][6528] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Aug 13 00:22:06.170845 containerd[2138]: 2025-08-13 00:22:06.057 [INFO][6528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Aug 13 00:22:06.170845 containerd[2138]: 2025-08-13 00:22:06.126 [INFO][6536] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" HandleID="k8s-pod-network.31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Workload="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" Aug 13 00:22:06.170845 containerd[2138]: 2025-08-13 00:22:06.127 [INFO][6536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:22:06.170845 containerd[2138]: 2025-08-13 00:22:06.127 [INFO][6536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:22:06.170845 containerd[2138]: 2025-08-13 00:22:06.158 [WARNING][6536] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" HandleID="k8s-pod-network.31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Workload="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" Aug 13 00:22:06.170845 containerd[2138]: 2025-08-13 00:22:06.159 [INFO][6536] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" HandleID="k8s-pod-network.31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Workload="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" Aug 13 00:22:06.170845 containerd[2138]: 2025-08-13 00:22:06.162 [INFO][6536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:22:06.170845 containerd[2138]: 2025-08-13 00:22:06.165 [INFO][6528] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Aug 13 00:22:06.170845 containerd[2138]: time="2025-08-13T00:22:06.170558523Z" level=info msg="TearDown network for sandbox \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\" successfully" Aug 13 00:22:06.170845 containerd[2138]: time="2025-08-13T00:22:06.170598435Z" level=info msg="StopPodSandbox for \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\" returns successfully" Aug 13 00:22:06.172798 containerd[2138]: time="2025-08-13T00:22:06.171466803Z" level=info msg="RemovePodSandbox for \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\"" Aug 13 00:22:06.172798 containerd[2138]: time="2025-08-13T00:22:06.171543963Z" level=info msg="Forcibly stopping sandbox \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\"" Aug 13 00:22:06.400122 containerd[2138]: 2025-08-13 00:22:06.267 [WARNING][6552] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a310c1b-b07f-40c9-96ad-53e0942080e1", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b", Pod:"csi-node-driver-4xwwn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie46fb86b5a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:22:06.400122 containerd[2138]: 2025-08-13 00:22:06.268 [INFO][6552] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Aug 13 00:22:06.400122 containerd[2138]: 2025-08-13 00:22:06.268 [INFO][6552] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" iface="eth0" netns="" Aug 13 00:22:06.400122 containerd[2138]: 2025-08-13 00:22:06.268 [INFO][6552] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Aug 13 00:22:06.400122 containerd[2138]: 2025-08-13 00:22:06.268 [INFO][6552] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Aug 13 00:22:06.400122 containerd[2138]: 2025-08-13 00:22:06.343 [INFO][6561] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" HandleID="k8s-pod-network.31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Workload="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" Aug 13 00:22:06.400122 containerd[2138]: 2025-08-13 00:22:06.344 [INFO][6561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:22:06.400122 containerd[2138]: 2025-08-13 00:22:06.344 [INFO][6561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:22:06.400122 containerd[2138]: 2025-08-13 00:22:06.378 [WARNING][6561] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" HandleID="k8s-pod-network.31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Workload="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" Aug 13 00:22:06.400122 containerd[2138]: 2025-08-13 00:22:06.380 [INFO][6561] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" HandleID="k8s-pod-network.31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Workload="ip--172--31--31--162-k8s-csi--node--driver--4xwwn-eth0" Aug 13 00:22:06.400122 containerd[2138]: 2025-08-13 00:22:06.383 [INFO][6561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:22:06.400122 containerd[2138]: 2025-08-13 00:22:06.396 [INFO][6552] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4" Aug 13 00:22:06.401700 containerd[2138]: time="2025-08-13T00:22:06.400935820Z" level=info msg="TearDown network for sandbox \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\" successfully" Aug 13 00:22:06.411092 containerd[2138]: time="2025-08-13T00:22:06.411016840Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:22:06.411397 containerd[2138]: time="2025-08-13T00:22:06.411126976Z" level=info msg="RemovePodSandbox \"31d9ef14ffd54066aecb21b1f58ca0ffd9fcbc11b14078287f01d4528d9d61d4\" returns successfully" Aug 13 00:22:06.412438 containerd[2138]: time="2025-08-13T00:22:06.412094680Z" level=info msg="StopPodSandbox for \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\"" Aug 13 00:22:06.603585 containerd[2138]: 2025-08-13 00:22:06.518 [WARNING][6576] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"177eec0b-4e35-4df6-b815-1a477ed2acfc", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975", Pod:"coredns-7c65d6cfc9-mx5t4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93f6d79960c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:22:06.603585 containerd[2138]: 2025-08-13 00:22:06.518 [INFO][6576] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Aug 13 00:22:06.603585 containerd[2138]: 2025-08-13 00:22:06.518 [INFO][6576] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" iface="eth0" netns="" Aug 13 00:22:06.603585 containerd[2138]: 2025-08-13 00:22:06.518 [INFO][6576] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Aug 13 00:22:06.603585 containerd[2138]: 2025-08-13 00:22:06.518 [INFO][6576] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Aug 13 00:22:06.603585 containerd[2138]: 2025-08-13 00:22:06.571 [INFO][6583] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" HandleID="k8s-pod-network.d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0" Aug 13 00:22:06.603585 containerd[2138]: 2025-08-13 00:22:06.571 [INFO][6583] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:22:06.603585 containerd[2138]: 2025-08-13 00:22:06.571 [INFO][6583] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:22:06.603585 containerd[2138]: 2025-08-13 00:22:06.592 [WARNING][6583] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" HandleID="k8s-pod-network.d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0" Aug 13 00:22:06.603585 containerd[2138]: 2025-08-13 00:22:06.592 [INFO][6583] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" HandleID="k8s-pod-network.d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0" Aug 13 00:22:06.603585 containerd[2138]: 2025-08-13 00:22:06.596 [INFO][6583] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:22:06.603585 containerd[2138]: 2025-08-13 00:22:06.599 [INFO][6576] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Aug 13 00:22:06.604675 containerd[2138]: time="2025-08-13T00:22:06.604433657Z" level=info msg="TearDown network for sandbox \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\" successfully" Aug 13 00:22:06.604675 containerd[2138]: time="2025-08-13T00:22:06.604500005Z" level=info msg="StopPodSandbox for \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\" returns successfully" Aug 13 00:22:06.605900 containerd[2138]: time="2025-08-13T00:22:06.605728481Z" level=info msg="RemovePodSandbox for \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\"" Aug 13 00:22:06.605900 containerd[2138]: time="2025-08-13T00:22:06.605796893Z" level=info msg="Forcibly stopping sandbox \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\"" Aug 13 00:22:06.774916 containerd[2138]: 2025-08-13 00:22:06.682 [WARNING][6598] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"177eec0b-4e35-4df6-b815-1a477ed2acfc", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-162", ContainerID:"cbb88c05e91242a0ccf204d15f73c893e9d0b3d159f990fa5fd0d4313a96b975", Pod:"coredns-7c65d6cfc9-mx5t4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93f6d79960c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:22:06.774916 containerd[2138]: 2025-08-13 00:22:06.684 [INFO][6598] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Aug 13 00:22:06.774916 containerd[2138]: 2025-08-13 00:22:06.684 [INFO][6598] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" iface="eth0" netns="" Aug 13 00:22:06.774916 containerd[2138]: 2025-08-13 00:22:06.684 [INFO][6598] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Aug 13 00:22:06.774916 containerd[2138]: 2025-08-13 00:22:06.684 [INFO][6598] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Aug 13 00:22:06.774916 containerd[2138]: 2025-08-13 00:22:06.743 [INFO][6606] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" HandleID="k8s-pod-network.d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0" Aug 13 00:22:06.774916 containerd[2138]: 2025-08-13 00:22:06.744 [INFO][6606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:22:06.774916 containerd[2138]: 2025-08-13 00:22:06.745 [INFO][6606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:22:06.774916 containerd[2138]: 2025-08-13 00:22:06.763 [WARNING][6606] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" HandleID="k8s-pod-network.d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0" Aug 13 00:22:06.774916 containerd[2138]: 2025-08-13 00:22:06.763 [INFO][6606] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" HandleID="k8s-pod-network.d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Workload="ip--172--31--31--162-k8s-coredns--7c65d6cfc9--mx5t4-eth0" Aug 13 00:22:06.774916 containerd[2138]: 2025-08-13 00:22:06.766 [INFO][6606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:22:06.774916 containerd[2138]: 2025-08-13 00:22:06.770 [INFO][6598] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2" Aug 13 00:22:06.777435 containerd[2138]: time="2025-08-13T00:22:06.775115070Z" level=info msg="TearDown network for sandbox \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\" successfully" Aug 13 00:22:06.784686 containerd[2138]: time="2025-08-13T00:22:06.784535550Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:22:06.785647 containerd[2138]: time="2025-08-13T00:22:06.785436210Z" level=info msg="RemovePodSandbox \"d06b4250c0ac8407f7d1f8dc1185b34cd9528fcd27e83beb49fb2b57c3e974e2\" returns successfully" Aug 13 00:22:07.437750 containerd[2138]: time="2025-08-13T00:22:07.436786577Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:07.439029 containerd[2138]: time="2025-08-13T00:22:07.438808277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 13 00:22:07.441393 containerd[2138]: time="2025-08-13T00:22:07.441310445Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:07.446477 containerd[2138]: time="2025-08-13T00:22:07.446383397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:07.448280 containerd[2138]: time="2025-08-13T00:22:07.447880169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.856181982s" Aug 13 00:22:07.448280 containerd[2138]: time="2025-08-13T00:22:07.447941225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 
00:22:07.450485 containerd[2138]: time="2025-08-13T00:22:07.450202925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:22:07.454020 containerd[2138]: time="2025-08-13T00:22:07.453818729Z" level=info msg="CreateContainer within sandbox \"095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:22:07.484033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount983223126.mount: Deactivated successfully. Aug 13 00:22:07.495061 containerd[2138]: time="2025-08-13T00:22:07.494156609Z" level=info msg="CreateContainer within sandbox \"095bc929cadbd6d01743f2820f40d4364304184146da7e3495c5326e8dd1ea5b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"184acaccc54727ad0704f553b4a711333096310e3cf51930ba1b592fd542a286\"" Aug 13 00:22:07.500847 containerd[2138]: time="2025-08-13T00:22:07.498015629Z" level=info msg="StartContainer for \"184acaccc54727ad0704f553b4a711333096310e3cf51930ba1b592fd542a286\"" Aug 13 00:22:07.632874 containerd[2138]: time="2025-08-13T00:22:07.632793606Z" level=info msg="StartContainer for \"184acaccc54727ad0704f553b4a711333096310e3cf51930ba1b592fd542a286\" returns successfully" Aug 13 00:22:07.877539 systemd[1]: Started sshd@9-172.31.31.162:22-139.178.89.65:59494.service - OpenSSH per-connection server daemon (139.178.89.65:59494). Aug 13 00:22:08.120504 sshd[6655]: Accepted publickey for core from 139.178.89.65 port 59494 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:22:08.131224 sshd[6655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:08.141237 systemd-logind[2103]: New session 10 of user core. Aug 13 00:22:08.150503 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:22:08.449839 sshd[6655]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:08.456880 systemd-logind[2103]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:22:08.457435 systemd[1]: sshd@9-172.31.31.162:22-139.178.89.65:59494.service: Deactivated successfully. Aug 13 00:22:08.471151 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:22:08.480643 systemd-logind[2103]: Removed session 10. Aug 13 00:22:08.484512 systemd[1]: Started sshd@10-172.31.31.162:22-139.178.89.65:59496.service - OpenSSH per-connection server daemon (139.178.89.65:59496). Aug 13 00:22:08.754130 sshd[6672]: Accepted publickey for core from 139.178.89.65 port 59496 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:22:08.767447 sshd[6672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:08.791107 systemd-logind[2103]: New session 11 of user core. Aug 13 00:22:08.799805 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:22:09.320574 sshd[6672]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:09.344365 systemd[1]: sshd@10-172.31.31.162:22-139.178.89.65:59496.service: Deactivated successfully. Aug 13 00:22:09.359456 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:22:09.369029 systemd-logind[2103]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:22:09.385904 systemd[1]: Started sshd@11-172.31.31.162:22-139.178.89.65:51736.service - OpenSSH per-connection server daemon (139.178.89.65:51736). Aug 13 00:22:09.389889 systemd-logind[2103]: Removed session 11. 
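containerd's records above mix two time formats: wall-clock stamps in RFC 3339 with nanoseconds (the time="..." fields) and elapsed times in Go duration syntax ("2.856181982s" for the apiserver pull). Both parse directly with Go's standard library; a small sketch using two values copied from the pull record:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamp format used in containerd's msg="..." records above.
	finished, err := time.Parse(time.RFC3339Nano, "2025-08-13T00:22:07.447880169Z")
	if err != nil {
		panic(err)
	}

	// Elapsed time as reported for the calico/apiserver pull.
	pull, err := time.ParseDuration("2.856181982s")
	if err != nil {
		panic(err)
	}

	// Back out roughly when the pull must have started (~00:22:04.59Z).
	fmt.Println("pull started around:", finished.Add(-pull))
}
```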
Aug 13 00:22:09.624245 sshd[6690]: Accepted publickey for core from 139.178.89.65 port 51736 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:22:09.628670 sshd[6690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:09.645092 systemd-logind[2103]: New session 12 of user core. Aug 13 00:22:09.653231 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:22:10.048529 sshd[6690]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:10.063700 systemd-logind[2103]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:22:10.064879 systemd[1]: sshd@11-172.31.31.162:22-139.178.89.65:51736.service: Deactivated successfully. Aug 13 00:22:10.078967 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:22:10.082202 systemd-logind[2103]: Removed session 12. Aug 13 00:22:11.110698 containerd[2138]: time="2025-08-13T00:22:11.110504851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:11.112969 containerd[2138]: time="2025-08-13T00:22:11.112912111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 13 00:22:11.116616 containerd[2138]: time="2025-08-13T00:22:11.116532295Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:11.121303 containerd[2138]: time="2025-08-13T00:22:11.121235959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:11.124280 containerd[2138]: time="2025-08-13T00:22:11.124210951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.673948602s" Aug 13 00:22:11.124444 containerd[2138]: time="2025-08-13T00:22:11.124277851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 13 00:22:11.128405 containerd[2138]: time="2025-08-13T00:22:11.128328019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:22:11.159048 containerd[2138]: time="2025-08-13T00:22:11.157307575Z" level=info msg="CreateContainer within sandbox \"4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:22:11.214959 containerd[2138]: time="2025-08-13T00:22:11.214173260Z" level=info msg="CreateContainer within sandbox \"4305770d2b7ca762c98a746c9ccf55316a42445bdb86836d6faa67aa90858d7a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b043fecd98e208c18257e62cf0a81ec25783966a68dc282b682c4f7321b81393\"" Aug 13 00:22:11.217110 containerd[2138]: time="2025-08-13T00:22:11.216981716Z" level=info msg="StartContainer for \"b043fecd98e208c18257e62cf0a81ec25783966a68dc282b682c4f7321b81393\"" Aug 13 
00:22:11.422874 containerd[2138]: time="2025-08-13T00:22:11.422702433Z" level=info msg="StartContainer for \"b043fecd98e208c18257e62cf0a81ec25783966a68dc282b682c4f7321b81393\" returns successfully" Aug 13 00:22:11.480026 containerd[2138]: time="2025-08-13T00:22:11.477687297Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:11.481517 containerd[2138]: time="2025-08-13T00:22:11.481468809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 00:22:11.494193 containerd[2138]: time="2025-08-13T00:22:11.493505277Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 365.113178ms" Aug 13 00:22:11.494193 containerd[2138]: time="2025-08-13T00:22:11.494046657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:22:11.504383 containerd[2138]: time="2025-08-13T00:22:11.502759017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:22:11.504383 containerd[2138]: time="2025-08-13T00:22:11.504254565Z" level=info msg="CreateContainer within sandbox \"bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:22:11.546634 containerd[2138]: time="2025-08-13T00:22:11.546547233Z" level=info msg="CreateContainer within sandbox \"bb1b8f160cb28c25f4efcd53e7344264d070511711a1aaafb243aef6e354c4e6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3aafd7182c309c689ea9408f8d334eb4a7222d6f50fc7ed3cc5df8904e0621d0\"" Aug 13 00:22:11.553017 containerd[2138]: time="2025-08-13T00:22:11.551603589Z" level=info msg="StartContainer for \"3aafd7182c309c689ea9408f8d334eb4a7222d6f50fc7ed3cc5df8904e0621d0\"" Aug 13 00:22:11.777694 containerd[2138]: time="2025-08-13T00:22:11.777566195Z" level=info msg="StartContainer for \"3aafd7182c309c689ea9408f8d334eb4a7222d6f50fc7ed3cc5df8904e0621d0\" returns successfully" Aug 13 00:22:12.227967 kubelet[3661]: I0813 00:22:12.227736 3661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5768c76bdb-97r48" podStartSLOduration=42.160916569 podStartE2EDuration="50.227712669s" podCreationTimestamp="2025-08-13 00:21:22 +0000 UTC" firstStartedPulling="2025-08-13 00:21:59.383182041 +0000 UTC m=+56.391065549" lastFinishedPulling="2025-08-13 00:22:07.449978153 +0000 UTC m=+64.457861649" observedRunningTime="2025-08-13 00:22:08.200832413 +0000 UTC m=+65.208715945" watchObservedRunningTime="2025-08-13 00:22:12.227712669 +0000 UTC m=+69.235596213" Aug 13 00:22:12.250025 kubelet[3661]: I0813 00:22:12.249090 3661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-764ff7f5f7-s9b5s" podStartSLOduration=28.043843762 podStartE2EDuration="39.249063405s" podCreationTimestamp="2025-08-13 00:21:33 +0000 UTC" firstStartedPulling="2025-08-13 00:21:59.920708784 +0000 UTC m=+56.928592292" lastFinishedPulling="2025-08-13 00:22:11.125928415 
+0000 UTC m=+68.133811935" observedRunningTime="2025-08-13 00:22:12.247401789 +0000 UTC m=+69.255285321" watchObservedRunningTime="2025-08-13 00:22:12.249063405 +0000 UTC m=+69.256946937" Aug 13 00:22:12.332537 kubelet[3661]: I0813 00:22:12.330724 3661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5768c76bdb-qb8dv" podStartSLOduration=39.128053521 podStartE2EDuration="50.330699021s" podCreationTimestamp="2025-08-13 00:21:22 +0000 UTC" firstStartedPulling="2025-08-13 00:22:00.294807885 +0000 UTC m=+57.302691405" lastFinishedPulling="2025-08-13 00:22:11.497453397 +0000 UTC m=+68.505336905" observedRunningTime="2025-08-13 00:22:12.328170873 +0000 UTC m=+69.336054405" watchObservedRunningTime="2025-08-13 00:22:12.330699021 +0000 UTC m=+69.338582517" Aug 13 00:22:13.213034 kubelet[3661]: I0813 00:22:13.212386 3661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:22:13.572420 containerd[2138]: time="2025-08-13T00:22:13.571765079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:13.579386 containerd[2138]: time="2025-08-13T00:22:13.577036907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Aug 13 00:22:13.581024 containerd[2138]: time="2025-08-13T00:22:13.579652763Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:13.615743 containerd[2138]: time="2025-08-13T00:22:13.615629508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:22:13.615743 containerd[2138]: time="2025-08-13T00:22:13.620226624Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 2.117344523s" Aug 13 00:22:13.615743 containerd[2138]: time="2025-08-13T00:22:13.620285964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Aug 13 00:22:13.630978 containerd[2138]: time="2025-08-13T00:22:13.629703540Z" level=info msg="CreateContainer within sandbox \"9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 00:22:13.685335 containerd[2138]: time="2025-08-13T00:22:13.684980664Z" level=info msg="CreateContainer within sandbox \"9780c5c5997e1b4e12ef350559f21e12ac0c8c9c347931bf3599c698f468a49b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"40f92565accb1357376755436016c63c6d8eb6fda77d3fb1e12d3a6862777636\"" Aug 13 00:22:13.695364 containerd[2138]: time="2025-08-13T00:22:13.694239000Z" level=info msg="StartContainer for \"40f92565accb1357376755436016c63c6d8eb6fda77d3fb1e12d3a6862777636\"" Aug 13 00:22:14.021644 
containerd[2138]: time="2025-08-13T00:22:14.021517042Z" level=info msg="StartContainer for \"40f92565accb1357376755436016c63c6d8eb6fda77d3fb1e12d3a6862777636\" returns successfully" Aug 13 00:22:14.504844 kubelet[3661]: I0813 00:22:14.504786 3661 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 00:22:14.506519 kubelet[3661]: I0813 00:22:14.504858 3661 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 00:22:15.087750 systemd[1]: Started sshd@12-172.31.31.162:22-139.178.89.65:51746.service - OpenSSH per-connection server daemon (139.178.89.65:51746). Aug 13 00:22:15.310800 sshd[6903]: Accepted publickey for core from 139.178.89.65 port 51746 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:22:15.314295 sshd[6903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:15.327101 systemd-logind[2103]: New session 13 of user core. Aug 13 00:22:15.336471 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:22:15.616759 sshd[6903]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:15.622706 systemd-logind[2103]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:22:15.624634 systemd[1]: sshd@12-172.31.31.162:22-139.178.89.65:51746.service: Deactivated successfully. Aug 13 00:22:15.631687 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:22:15.634510 systemd-logind[2103]: Removed session 13. Aug 13 00:22:20.650480 systemd[1]: Started sshd@13-172.31.31.162:22-139.178.89.65:49834.service - OpenSSH per-connection server daemon (139.178.89.65:49834). Aug 13 00:22:20.821198 sshd[6921]: Accepted publickey for core from 139.178.89.65 port 49834 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:22:20.823948 sshd[6921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:20.833769 systemd-logind[2103]: New session 14 of user core. Aug 13 00:22:20.843555 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:22:21.099802 sshd[6921]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:21.108365 systemd[1]: sshd@13-172.31.31.162:22-139.178.89.65:49834.service: Deactivated successfully. Aug 13 00:22:21.115844 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:22:21.118820 systemd-logind[2103]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:22:21.121331 systemd-logind[2103]: Removed session 14. Aug 13 00:22:26.141522 systemd[1]: Started sshd@14-172.31.31.162:22-139.178.89.65:49838.service - OpenSSH per-connection server daemon (139.178.89.65:49838). Aug 13 00:22:26.369317 sshd[6936]: Accepted publickey for core from 139.178.89.65 port 49838 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:22:26.374480 sshd[6936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:26.384861 systemd-logind[2103]: New session 15 of user core. Aug 13 00:22:26.393942 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:22:26.821473 sshd[6936]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:26.830924 systemd[1]: sshd@14-172.31.31.162:22-139.178.89.65:49838.service: Deactivated successfully. Aug 13 00:22:26.832109 systemd-logind[2103]: Session 15 logged out. 
Waiting for processes to exit. Aug 13 00:22:26.845846 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:22:26.867240 systemd-logind[2103]: Removed session 15. Aug 13 00:22:31.863495 systemd[1]: Started sshd@15-172.31.31.162:22-139.178.89.65:58564.service - OpenSSH per-connection server daemon (139.178.89.65:58564). Aug 13 00:22:32.058372 sshd[6971]: Accepted publickey for core from 139.178.89.65 port 58564 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:22:32.059492 sshd[6971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:32.070319 systemd-logind[2103]: New session 16 of user core. Aug 13 00:22:32.079247 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:22:32.388112 sshd[6971]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:32.395198 systemd-logind[2103]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:22:32.396254 systemd[1]: sshd@15-172.31.31.162:22-139.178.89.65:58564.service: Deactivated successfully. Aug 13 00:22:32.417924 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:22:32.421711 systemd-logind[2103]: Removed session 16. Aug 13 00:22:32.431475 systemd[1]: Started sshd@16-172.31.31.162:22-139.178.89.65:58580.service - OpenSSH per-connection server daemon (139.178.89.65:58580). Aug 13 00:22:32.627405 sshd[6985]: Accepted publickey for core from 139.178.89.65 port 58580 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:22:32.631117 sshd[6985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:32.642177 systemd-logind[2103]: New session 17 of user core. Aug 13 00:22:32.655110 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 00:22:33.413740 sshd[6985]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:33.427641 systemd[1]: sshd@16-172.31.31.162:22-139.178.89.65:58580.service: Deactivated successfully. Aug 13 00:22:33.438072 systemd-logind[2103]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:22:33.448451 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:22:33.469492 systemd[1]: Started sshd@17-172.31.31.162:22-139.178.89.65:58586.service - OpenSSH per-connection server daemon (139.178.89.65:58586). Aug 13 00:22:33.474684 systemd-logind[2103]: Removed session 17. Aug 13 00:22:33.726979 sshd[7004]: Accepted publickey for core from 139.178.89.65 port 58586 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:22:33.730091 sshd[7004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:33.749704 systemd-logind[2103]: New session 18 of user core. Aug 13 00:22:33.763582 systemd[1]: Started session-18.scope - Session 18 of User core. 
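From here the journal is mostly sshd churn: a per-connection systemd service accepts the key, a session-N.scope starts, and seconds later it is deactivated. When auditing a dump like this one, a throwaway pairing script is handy; a sketch whose regexps match the exact phrasing above:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	opened = regexp.MustCompile(`Started session-(\d+)\.scope`)
	closed = regexp.MustCompile(`Session (\d+) logged out`)
)

// Reads a journal dump on stdin and reports sessions that opened but never
// logged out. Matching is purely textual, keyed on the session number.
func main() {
	open := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := opened.FindStringSubmatch(sc.Text()); m != nil {
			open[m[1]] = true
		}
		if m := closed.FindStringSubmatch(sc.Text()); m != nil {
			delete(open, m[1])
		}
	}
	for id := range open {
		fmt.Println("session never logged out:", id)
	}
}
```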
Aug 13 00:22:36.553217 kubelet[3661]: I0813 00:22:36.553158 3661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:22:36.621487 kubelet[3661]: I0813 00:22:36.621239 3661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4xwwn" podStartSLOduration=48.764274884 podStartE2EDuration="1m3.621213934s" podCreationTimestamp="2025-08-13 00:21:33 +0000 UTC" firstStartedPulling="2025-08-13 00:21:58.767589418 +0000 UTC m=+55.775472926" lastFinishedPulling="2025-08-13 00:22:13.624528468 +0000 UTC m=+70.632411976" observedRunningTime="2025-08-13 00:22:14.259180691 +0000 UTC m=+71.267064319" watchObservedRunningTime="2025-08-13 00:22:36.621213934 +0000 UTC m=+93.629097442" Aug 13 00:22:38.371310 sshd[7004]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:38.387211 systemd-logind[2103]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:22:38.397387 systemd[1]: sshd@17-172.31.31.162:22-139.178.89.65:58586.service: Deactivated successfully. Aug 13 00:22:38.411715 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:22:38.423266 systemd-logind[2103]: Removed session 18. Aug 13 00:22:38.435514 systemd[1]: Started sshd@18-172.31.31.162:22-139.178.89.65:58600.service - OpenSSH per-connection server daemon (139.178.89.65:58600). Aug 13 00:22:38.666139 sshd[7047]: Accepted publickey for core from 139.178.89.65 port 58600 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:22:38.668564 sshd[7047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:38.698235 systemd-logind[2103]: New session 19 of user core. Aug 13 00:22:38.703622 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:22:39.625294 sshd[7047]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:39.640506 systemd[1]: sshd@18-172.31.31.162:22-139.178.89.65:58600.service: Deactivated successfully. Aug 13 00:22:39.643587 systemd-logind[2103]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:22:39.664411 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:22:39.676967 systemd[1]: Started sshd@19-172.31.31.162:22-139.178.89.65:52742.service - OpenSSH per-connection server daemon (139.178.89.65:52742). Aug 13 00:22:39.681444 systemd-logind[2103]: Removed session 19. Aug 13 00:22:39.897877 sshd[7064]: Accepted publickey for core from 139.178.89.65 port 52742 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:22:39.903153 sshd[7064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:39.925312 systemd-logind[2103]: New session 20 of user core. Aug 13 00:22:39.930584 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 00:22:40.293280 sshd[7064]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:40.311256 systemd[1]: sshd@19-172.31.31.162:22-139.178.89.65:52742.service: Deactivated successfully. Aug 13 00:22:40.323554 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:22:40.324452 systemd-logind[2103]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:22:40.329204 systemd-logind[2103]: Removed session 20. Aug 13 00:22:45.337140 systemd[1]: Started sshd@20-172.31.31.162:22-139.178.89.65:52756.service - OpenSSH per-connection server daemon (139.178.89.65:52756). 
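The podStartSLOduration arithmetic is recoverable from the record itself: kubelet reports end-to-end startup (observed running time minus podCreationTimestamp) and, as the SLO figure, that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Checking the csi-node-driver-4xwwn numbers above in Go reproduces both figures to the nanosecond:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the pod_startup_latency_tracker record above.
	created := mustParse("2025-08-13 00:21:33 +0000 UTC")
	pullStart := mustParse("2025-08-13 00:21:58.767589418 +0000 UTC")
	pullEnd := mustParse("2025-08-13 00:22:13.624528468 +0000 UTC")
	running := mustParse("2025-08-13 00:22:36.621213934 +0000 UTC")

	e2e := running.Sub(created)         // podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // E2E minus time spent pulling images

	fmt.Println(e2e) // 1m3.621213934s, matching the log
	fmt.Println(slo) // 48.764274884s, matching podStartSLOduration
}
```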
Aug 13 00:22:45.544943 sshd[7140]: Accepted publickey for core from 139.178.89.65 port 52756 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:22:45.550134 sshd[7140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:45.564223 systemd-logind[2103]: New session 21 of user core. Aug 13 00:22:45.575324 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 00:22:45.886518 sshd[7140]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:45.895506 systemd[1]: sshd@20-172.31.31.162:22-139.178.89.65:52756.service: Deactivated successfully. Aug 13 00:22:45.902644 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:22:45.909670 systemd-logind[2103]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:22:45.913721 systemd-logind[2103]: Removed session 21. Aug 13 00:22:50.919847 systemd[1]: Started sshd@21-172.31.31.162:22-139.178.89.65:40924.service - OpenSSH per-connection server daemon (139.178.89.65:40924). Aug 13 00:22:51.104024 sshd[7157]: Accepted publickey for core from 139.178.89.65 port 40924 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:22:51.105375 sshd[7157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:51.115444 systemd-logind[2103]: New session 22 of user core. Aug 13 00:22:51.125566 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 00:22:51.436969 sshd[7157]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:51.445306 systemd-logind[2103]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:22:51.448854 systemd[1]: sshd@21-172.31.31.162:22-139.178.89.65:40924.service: Deactivated successfully. Aug 13 00:22:51.462116 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:22:51.465902 systemd-logind[2103]: Removed session 22. Aug 13 00:22:56.475377 systemd[1]: Started sshd@22-172.31.31.162:22-139.178.89.65:40928.service - OpenSSH per-connection server daemon (139.178.89.65:40928). Aug 13 00:22:56.679922 sshd[7171]: Accepted publickey for core from 139.178.89.65 port 40928 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:22:56.682701 sshd[7171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:22:56.691837 systemd-logind[2103]: New session 23 of user core. Aug 13 00:22:56.701621 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 00:22:57.062042 sshd[7171]: pam_unix(sshd:session): session closed for user core Aug 13 00:22:57.076770 systemd[1]: sshd@22-172.31.31.162:22-139.178.89.65:40928.service: Deactivated successfully. Aug 13 00:22:57.088323 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:22:57.092019 systemd-logind[2103]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:22:57.098438 systemd-logind[2103]: Removed session 23. Aug 13 00:23:02.102154 systemd[1]: Started sshd@23-172.31.31.162:22-139.178.89.65:39924.service - OpenSSH per-connection server daemon (139.178.89.65:39924). Aug 13 00:23:02.323041 sshd[7186]: Accepted publickey for core from 139.178.89.65 port 39924 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:23:02.329466 sshd[7186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:23:02.354268 systemd-logind[2103]: New session 24 of user core. Aug 13 00:23:02.360808 systemd[1]: Started session-24.scope - Session 24 of User core. 
Aug 13 00:23:02.642458 sshd[7186]: pam_unix(sshd:session): session closed for user core Aug 13 00:23:02.651111 systemd-logind[2103]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:23:02.658230 systemd[1]: sshd@23-172.31.31.162:22-139.178.89.65:39924.service: Deactivated successfully. Aug 13 00:23:02.667814 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:23:02.670902 systemd-logind[2103]: Removed session 24. Aug 13 00:23:07.676584 systemd[1]: Started sshd@24-172.31.31.162:22-139.178.89.65:39934.service - OpenSSH per-connection server daemon (139.178.89.65:39934). Aug 13 00:23:07.867461 sshd[7226]: Accepted publickey for core from 139.178.89.65 port 39934 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:23:07.871069 sshd[7226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:23:07.886059 systemd-logind[2103]: New session 25 of user core. Aug 13 00:23:07.893408 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 00:23:08.215514 sshd[7226]: pam_unix(sshd:session): session closed for user core Aug 13 00:23:08.228459 systemd[1]: sshd@24-172.31.31.162:22-139.178.89.65:39934.service: Deactivated successfully. Aug 13 00:23:08.239649 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:23:08.246614 systemd-logind[2103]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:23:08.252911 systemd-logind[2103]: Removed session 25. Aug 13 00:23:13.254458 systemd[1]: Started sshd@25-172.31.31.162:22-139.178.89.65:36984.service - OpenSSH per-connection server daemon (139.178.89.65:36984). Aug 13 00:23:13.506901 sshd[7282]: Accepted publickey for core from 139.178.89.65 port 36984 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:23:13.509228 sshd[7282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:23:13.524072 systemd-logind[2103]: New session 26 of user core. Aug 13 00:23:13.563793 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 00:23:13.848130 sshd[7282]: pam_unix(sshd:session): session closed for user core Aug 13 00:23:13.855058 systemd-logind[2103]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:23:13.857734 systemd[1]: sshd@25-172.31.31.162:22-139.178.89.65:36984.service: Deactivated successfully. Aug 13 00:23:13.875564 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:23:13.880157 systemd-logind[2103]: Removed session 26. Aug 13 00:23:29.524923 containerd[2138]: time="2025-08-13T00:23:29.524824609Z" level=info msg="shim disconnected" id=71963f28167d23718ba3cb6e204bc6407da0589a99f769aec4fb8559cb9a2ec1 namespace=k8s.io Aug 13 00:23:29.528387 containerd[2138]: time="2025-08-13T00:23:29.526139497Z" level=warning msg="cleaning up after shim disconnected" id=71963f28167d23718ba3cb6e204bc6407da0589a99f769aec4fb8559cb9a2ec1 namespace=k8s.io Aug 13 00:23:29.528387 containerd[2138]: time="2025-08-13T00:23:29.526202089Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:23:29.532095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71963f28167d23718ba3cb6e204bc6407da0589a99f769aec4fb8559cb9a2ec1-rootfs.mount: Deactivated successfully. 
Aug 13 00:23:29.555426 containerd[2138]: time="2025-08-13T00:23:29.555291889Z" level=warning msg="cleanup warnings time=\"2025-08-13T00:23:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 00:23:29.633493 kubelet[3661]: I0813 00:23:29.633443 3661 scope.go:117] "RemoveContainer" containerID="71963f28167d23718ba3cb6e204bc6407da0589a99f769aec4fb8559cb9a2ec1" Aug 13 00:23:29.638359 containerd[2138]: time="2025-08-13T00:23:29.638288365Z" level=info msg="CreateContainer within sandbox \"2a300e294c0ef11346fa160e9936bbb2570c5bf0df1f8347302bfbb043cb60ca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Aug 13 00:23:29.672191 containerd[2138]: time="2025-08-13T00:23:29.672117037Z" level=info msg="CreateContainer within sandbox \"2a300e294c0ef11346fa160e9936bbb2570c5bf0df1f8347302bfbb043cb60ca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"351da07ba100924bc9ba2383970d76dcfc12a38c70cc80df7b6428a80c013c5c\"" Aug 13 00:23:29.673301 containerd[2138]: time="2025-08-13T00:23:29.673168933Z" level=info msg="StartContainer for \"351da07ba100924bc9ba2383970d76dcfc12a38c70cc80df7b6428a80c013c5c\"" Aug 13 00:23:29.791049 containerd[2138]: time="2025-08-13T00:23:29.790846370Z" level=info msg="StartContainer for \"351da07ba100924bc9ba2383970d76dcfc12a38c70cc80df7b6428a80c013c5c\" returns successfully" Aug 13 00:23:30.148106 containerd[2138]: time="2025-08-13T00:23:30.147237684Z" level=info msg="shim disconnected" id=adb9a74a9b86ad16fa168fa25991e182c4c97209ec19d72227fb0057c4d01ebc namespace=k8s.io Aug 13 00:23:30.151303 containerd[2138]: time="2025-08-13T00:23:30.151071600Z" level=warning msg="cleaning up after shim disconnected" id=adb9a74a9b86ad16fa168fa25991e182c4c97209ec19d72227fb0057c4d01ebc namespace=k8s.io Aug 13 00:23:30.151303 containerd[2138]: time="2025-08-13T00:23:30.151177512Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:23:30.160353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adb9a74a9b86ad16fa168fa25991e182c4c97209ec19d72227fb0057c4d01ebc-rootfs.mount: Deactivated successfully. Aug 13 00:23:30.642890 kubelet[3661]: I0813 00:23:30.642801 3661 scope.go:117] "RemoveContainer" containerID="adb9a74a9b86ad16fa168fa25991e182c4c97209ec19d72227fb0057c4d01ebc" Aug 13 00:23:30.651548 containerd[2138]: time="2025-08-13T00:23:30.649941518Z" level=info msg="CreateContainer within sandbox \"ef642a9ea866d0cf8babb9631b5d7c21d8c6343fbf783319886161d4b1a40c65\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Aug 13 00:23:30.696449 containerd[2138]: time="2025-08-13T00:23:30.690242918Z" level=info msg="CreateContainer within sandbox \"ef642a9ea866d0cf8babb9631b5d7c21d8c6343fbf783319886161d4b1a40c65\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"462c2b1b39b692a42d2bea7069d456d9c407cdae398b4bb30c954617d083a089\"" Aug 13 00:23:30.696449 containerd[2138]: time="2025-08-13T00:23:30.694316427Z" level=info msg="StartContainer for \"462c2b1b39b692a42d2bea7069d456d9c407cdae398b4bb30c954617d083a089\"" Aug 13 00:23:30.788585 systemd[1]: run-containerd-runc-k8s.io-462c2b1b39b692a42d2bea7069d456d9c407cdae398b4bb30c954617d083a089-runc.9Za2f3.mount: Deactivated successfully. 
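The 00:23:29-00:23:30 burst is kubelet's in-place restart path: a runc shim disconnects, the task's rootfs mount is cleaned up, the dead container is removed, and a replacement is created in the same sandbox with the Attempt counter bumped (Attempt:1 above). A toy sketch of that bookkeeping with hypothetical types; the real logic lives in kubelet's runtime manager:

```go
package main

import "fmt"

// Minimal stand-in for the fields visible in the log's &ContainerMetadata{...}.
type containerMetadata struct {
	Name    string
	Attempt uint32
}

// restartInSandbox drops the dead container and returns metadata for its
// replacement in the same sandbox, with the attempt counter incremented.
func restartInSandbox(sandboxID string, dead containerMetadata) containerMetadata {
	fmt.Printf("RemoveContainer %q (attempt %d) from sandbox %s\n",
		dead.Name, dead.Attempt, sandboxID)
	return containerMetadata{Name: dead.Name, Attempt: dead.Attempt + 1}
}

func main() {
	next := restartInSandbox(
		"2a300e294c0ef11346fa160e9936bbb2570c5bf0df1f8347302bfbb043cb60ca",
		containerMetadata{Name: "kube-controller-manager", Attempt: 0},
	)
	fmt.Printf("CreateContainer &ContainerMetadata{Name:%s,Attempt:%d,}\n",
		next.Name, next.Attempt)
}
```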
Aug 13 00:23:30.853651 containerd[2138]: time="2025-08-13T00:23:30.853437495Z" level=info msg="StartContainer for \"462c2b1b39b692a42d2bea7069d456d9c407cdae398b4bb30c954617d083a089\" returns successfully" Aug 13 00:23:34.837379 containerd[2138]: time="2025-08-13T00:23:34.837270943Z" level=info msg="shim disconnected" id=36feca051c09a0ae38386ce1d8699e6fe99286d5f0e875ea9a80fd5bd8800c15 namespace=k8s.io Aug 13 00:23:34.837379 containerd[2138]: time="2025-08-13T00:23:34.837366907Z" level=warning msg="cleaning up after shim disconnected" id=36feca051c09a0ae38386ce1d8699e6fe99286d5f0e875ea9a80fd5bd8800c15 namespace=k8s.io Aug 13 00:23:34.839031 containerd[2138]: time="2025-08-13T00:23:34.837393979Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:23:34.841463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36feca051c09a0ae38386ce1d8699e6fe99286d5f0e875ea9a80fd5bd8800c15-rootfs.mount: Deactivated successfully. Aug 13 00:23:35.669904 kubelet[3661]: I0813 00:23:35.669837 3661 scope.go:117] "RemoveContainer" containerID="36feca051c09a0ae38386ce1d8699e6fe99286d5f0e875ea9a80fd5bd8800c15" Aug 13 00:23:35.673323 containerd[2138]: time="2025-08-13T00:23:35.673266823Z" level=info msg="CreateContainer within sandbox \"6a86fb1f75482f5ac443d1a17f00f9d60419d8ffcbaec76cf7f2e4cd3cfe25a6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Aug 13 00:23:35.699548 containerd[2138]: time="2025-08-13T00:23:35.699471691Z" level=info msg="CreateContainer within sandbox \"6a86fb1f75482f5ac443d1a17f00f9d60419d8ffcbaec76cf7f2e4cd3cfe25a6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6513640a4f4fb959090b28311e8191192e0f8c582435d1f5ac578ed0adf2a255\"" Aug 13 00:23:35.701448 containerd[2138]: time="2025-08-13T00:23:35.701392987Z" level=info msg="StartContainer for \"6513640a4f4fb959090b28311e8191192e0f8c582435d1f5ac578ed0adf2a255\"" Aug 13 00:23:35.824407 containerd[2138]: time="2025-08-13T00:23:35.824168072Z" level=info msg="StartContainer for \"6513640a4f4fb959090b28311e8191192e0f8c582435d1f5ac578ed0adf2a255\" returns successfully" Aug 13 00:23:36.666966 kubelet[3661]: E0813 00:23:36.666123 3661 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-162?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Aug 13 00:23:42.272307 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-462c2b1b39b692a42d2bea7069d456d9c407cdae398b4bb30c954617d083a089-rootfs.mount: Deactivated successfully. 
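The "Failed to update lease" error at 00:23:36 is a client-side timeout: kubelet PUTs its node Lease with the 10s budget visible in the URL and gives up before the apiserver answers, consistent with the control-plane containers above having just crashed and restarted. A minimal standard-library reproduction of that failure mode (kubelet itself goes through client-go, not raw net/http):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
	"time"
)

func main() {
	// Matches the ?timeout=10s budget visible in the kubelet error above.
	client := &http.Client{Timeout: 10 * time.Second}

	url := "https://172.31.31.162:6443/apis/coordination.k8s.io/v1" +
		"/namespaces/kube-node-lease/leases/ip-172-31-31-162?timeout=10s"
	req, err := http.NewRequest(http.MethodPut, url, strings.NewReader(`{}`))
	if err != nil {
		panic(err)
	}

	// With no reachable apiserver this fails fast; a hung one produces the
	// same "Client.Timeout exceeded while awaiting headers" seen in the log.
	if _, err := client.Do(req); err != nil {
		fmt.Println("Failed to update lease:", err)
	}
}
```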
Aug 13 00:23:42.283865 containerd[2138]: time="2025-08-13T00:23:42.283426008Z" level=info msg="shim disconnected" id=462c2b1b39b692a42d2bea7069d456d9c407cdae398b4bb30c954617d083a089 namespace=k8s.io Aug 13 00:23:42.283865 containerd[2138]: time="2025-08-13T00:23:42.283707408Z" level=warning msg="cleaning up after shim disconnected" id=462c2b1b39b692a42d2bea7069d456d9c407cdae398b4bb30c954617d083a089 namespace=k8s.io Aug 13 00:23:42.285163 containerd[2138]: time="2025-08-13T00:23:42.283730280Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:23:42.698154 kubelet[3661]: I0813 00:23:42.697363 3661 scope.go:117] "RemoveContainer" containerID="adb9a74a9b86ad16fa168fa25991e182c4c97209ec19d72227fb0057c4d01ebc" Aug 13 00:23:42.698154 kubelet[3661]: I0813 00:23:42.697725 3661 scope.go:117] "RemoveContainer" containerID="462c2b1b39b692a42d2bea7069d456d9c407cdae398b4bb30c954617d083a089" Aug 13 00:23:42.699720 containerd[2138]: time="2025-08-13T00:23:42.699656102Z" level=info msg="RemoveContainer for \"adb9a74a9b86ad16fa168fa25991e182c4c97209ec19d72227fb0057c4d01ebc\"" Aug 13 00:23:42.700640 kubelet[3661]: E0813 00:23:42.700560 3661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-w4m7k_tigera-operator(50a85f18-7700-47ba-92db-84f83fef182f)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-w4m7k" podUID="50a85f18-7700-47ba-92db-84f83fef182f" Aug 13 00:23:42.709048 containerd[2138]: time="2025-08-13T00:23:42.708956846Z" level=info msg="RemoveContainer for \"adb9a74a9b86ad16fa168fa25991e182c4c97209ec19d72227fb0057c4d01ebc\" returns successfully" Aug 13 00:23:46.667309 kubelet[3661]: E0813 00:23:46.666974 3661 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-162?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
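The closing CrashLoopBackOff message spells out the restart policy: tigera-operator's next start is delayed 10s, and kubelet doubles the delay on each further crash up to a five-minute cap (the doubling and the cap are kubelet's documented defaults, not something this log alone shows). The resulting delay ladder:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 10 * time.Second // "back-off 10s" in the log
		maxDelay = 5 * time.Minute  // kubelet's documented cap
	)
	delay := initial
	for crash := 1; crash <= 7; crash++ {
		fmt.Printf("crash %d: wait %v before restarting\n", crash, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s.
}
```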