Sep 12 17:09:43.228775 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 12 17:09:43.228823 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 15:59:19 -00 2025
Sep 12 17:09:43.228849 kernel: KASLR disabled due to lack of seed
Sep 12 17:09:43.228866 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:09:43.228882 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18
Sep 12 17:09:43.228898 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:09:43.228917 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 12 17:09:43.228933 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 12 17:09:43.228960 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 12 17:09:43.228982 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 12 17:09:43.229006 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 12 17:09:43.229022 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 12 17:09:43.229038 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 12 17:09:43.229055 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 12 17:09:43.229073 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 12 17:09:43.229094 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 12 17:09:43.229112 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 12 17:09:43.229129 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 12 17:09:43.229146 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 12 17:09:43.229162 kernel: printk: bootconsole [uart0] enabled
Sep 12 17:09:43.229179 kernel: NUMA: Failed to initialise from firmware
Sep 12 17:09:43.229196 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 12 17:09:43.229213 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 12 17:09:43.229229 kernel: Zone ranges:
Sep 12 17:09:43.229246 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 12 17:09:43.229263 kernel: DMA32 empty
Sep 12 17:09:43.229284 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 12 17:09:43.229314 kernel: Movable zone start for each node
Sep 12 17:09:43.229333 kernel: Early memory node ranges
Sep 12 17:09:43.229349 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 12 17:09:43.229366 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 12 17:09:43.229383 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 12 17:09:43.229399 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 12 17:09:43.229416 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 12 17:09:43.229432 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 12 17:09:43.229448 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 12 17:09:43.229465 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 12 17:09:43.229481 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 12 17:09:43.229503 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 12 17:09:43.229521 kernel: psci: probing for conduit method from ACPI.
Sep 12 17:09:43.229545 kernel: psci: PSCIv1.0 detected in firmware.
Sep 12 17:09:43.229598 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 17:09:43.229621 kernel: psci: Trusted OS migration not required
Sep 12 17:09:43.229645 kernel: psci: SMC Calling Convention v1.1
Sep 12 17:09:43.229663 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 12 17:09:43.229681 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 12 17:09:43.229699 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 12 17:09:43.229716 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 12 17:09:43.229734 kernel: Detected PIPT I-cache on CPU0
Sep 12 17:09:43.229752 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 17:09:43.229769 kernel: CPU features: detected: Spectre-v2
Sep 12 17:09:43.229786 kernel: CPU features: detected: Spectre-v3a
Sep 12 17:09:43.229804 kernel: CPU features: detected: Spectre-BHB
Sep 12 17:09:43.229821 kernel: CPU features: detected: ARM erratum 1742098
Sep 12 17:09:43.229843 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 12 17:09:43.229861 kernel: alternatives: applying boot alternatives
Sep 12 17:09:43.229881 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:09:43.229900 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:09:43.229918 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:09:43.229936 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:09:43.229953 kernel: Fallback order for Node 0: 0
Sep 12 17:09:43.229971 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 12 17:09:43.229989 kernel: Policy zone: Normal
Sep 12 17:09:43.230006 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:09:43.230023 kernel: software IO TLB: area num 2.
Sep 12 17:09:43.230058 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 12 17:09:43.230079 kernel: Memory: 3820024K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 210440K reserved, 0K cma-reserved)
Sep 12 17:09:43.230097 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:09:43.230115 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:09:43.230134 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:09:43.230153 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:09:43.230171 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:09:43.230189 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:09:43.230206 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:09:43.230224 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:09:43.230242 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 17:09:43.230265 kernel: GICv3: 96 SPIs implemented
Sep 12 17:09:43.230283 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 17:09:43.230301 kernel: Root IRQ handler: gic_handle_irq
Sep 12 17:09:43.230319 kernel: GICv3: GICv3 features: 16 PPIs
Sep 12 17:09:43.230336 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 12 17:09:43.230354 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 12 17:09:43.230372 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 17:09:43.230390 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Sep 12 17:09:43.230408 kernel: GICv3: using LPI property table @0x00000004000d0000
Sep 12 17:09:43.230426 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 12 17:09:43.230444 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Sep 12 17:09:43.230462 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:09:43.230500 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 12 17:09:43.230519 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 12 17:09:43.230537 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 12 17:09:43.230555 kernel: Console: colour dummy device 80x25
Sep 12 17:09:43.231705 kernel: printk: console [tty1] enabled
Sep 12 17:09:43.231728 kernel: ACPI: Core revision 20230628
Sep 12 17:09:43.231748 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 12 17:09:43.231766 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:09:43.231785 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:09:43.231814 kernel: landlock: Up and running.
Sep 12 17:09:43.231833 kernel: SELinux: Initializing.
Sep 12 17:09:43.231852 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:09:43.231870 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:09:43.231889 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:09:43.231908 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:09:43.231927 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:09:43.231946 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:09:43.231964 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 12 17:09:43.231986 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 12 17:09:43.232005 kernel: Remapping and enabling EFI services.
Sep 12 17:09:43.232023 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:09:43.232041 kernel: Detected PIPT I-cache on CPU1
Sep 12 17:09:43.232072 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 12 17:09:43.232095 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Sep 12 17:09:43.232113 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 12 17:09:43.232131 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:09:43.232151 kernel: SMP: Total of 2 processors activated.
Sep 12 17:09:43.232169 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 17:09:43.232193 kernel: CPU features: detected: 32-bit EL1 Support
Sep 12 17:09:43.232212 kernel: CPU features: detected: CRC32 instructions
Sep 12 17:09:43.232248 kernel: CPU: All CPU(s) started at EL1
Sep 12 17:09:43.232272 kernel: alternatives: applying system-wide alternatives
Sep 12 17:09:43.232291 kernel: devtmpfs: initialized
Sep 12 17:09:43.232310 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:09:43.232329 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:09:43.232348 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:09:43.232368 kernel: SMBIOS 3.0.0 present.
Sep 12 17:09:43.232391 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 12 17:09:43.232410 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:09:43.232429 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 17:09:43.232448 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 17:09:43.232467 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 17:09:43.232486 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:09:43.232504 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Sep 12 17:09:43.232528 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:09:43.232547 kernel: cpuidle: using governor menu
Sep 12 17:09:43.232604 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 17:09:43.232626 kernel: ASID allocator initialised with 65536 entries
Sep 12 17:09:43.232645 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:09:43.232664 kernel: Serial: AMBA PL011 UART driver
Sep 12 17:09:43.232692 kernel: Modules: 17472 pages in range for non-PLT usage
Sep 12 17:09:43.232716 kernel: Modules: 508992 pages in range for PLT usage
Sep 12 17:09:43.232736 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:09:43.232761 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:09:43.232780 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 17:09:43.232799 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 17:09:43.232818 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:09:43.232836 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:09:43.232855 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 17:09:43.232874 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 17:09:43.232894 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:09:43.232914 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:09:43.232938 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:09:43.232957 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:09:43.232976 kernel: ACPI: Interpreter enabled
Sep 12 17:09:43.232994 kernel: ACPI: Using GIC for interrupt routing
Sep 12 17:09:43.233013 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 17:09:43.233032 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 12 17:09:43.233336 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:09:43.233548 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 17:09:43.233830 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 17:09:43.234180 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 12 17:09:43.234393 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 12 17:09:43.234419 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 12 17:09:43.234439 kernel: acpiphp: Slot [1] registered
Sep 12 17:09:43.234457 kernel: acpiphp: Slot [2] registered
Sep 12 17:09:43.234477 kernel: acpiphp: Slot [3] registered
Sep 12 17:09:43.234496 kernel: acpiphp: Slot [4] registered
Sep 12 17:09:43.234521 kernel: acpiphp: Slot [5] registered
Sep 12 17:09:43.234540 kernel: acpiphp: Slot [6] registered
Sep 12 17:09:43.234559 kernel: acpiphp: Slot [7] registered
Sep 12 17:09:43.234600 kernel: acpiphp: Slot [8] registered
Sep 12 17:09:43.234620 kernel: acpiphp: Slot [9] registered
Sep 12 17:09:43.234685 kernel: acpiphp: Slot [10] registered
Sep 12 17:09:43.234707 kernel: acpiphp: Slot [11] registered
Sep 12 17:09:43.234726 kernel: acpiphp: Slot [12] registered
Sep 12 17:09:43.234746 kernel: acpiphp: Slot [13] registered
Sep 12 17:09:43.234765 kernel: acpiphp: Slot [14] registered
Sep 12 17:09:43.234790 kernel: acpiphp: Slot [15] registered
Sep 12 17:09:43.234810 kernel: acpiphp: Slot [16] registered
Sep 12 17:09:43.234828 kernel: acpiphp: Slot [17] registered
Sep 12 17:09:43.234847 kernel: acpiphp: Slot [18] registered
Sep 12 17:09:43.234866 kernel: acpiphp: Slot [19] registered
Sep 12 17:09:43.234885 kernel: acpiphp: Slot [20] registered
Sep 12 17:09:43.234904 kernel: acpiphp: Slot [21] registered
Sep 12 17:09:43.234922 kernel: acpiphp: Slot [22] registered
Sep 12 17:09:43.234941 kernel: acpiphp: Slot [23] registered
Sep 12 17:09:43.234965 kernel: acpiphp: Slot [24] registered
Sep 12 17:09:43.234984 kernel: acpiphp: Slot [25] registered
Sep 12 17:09:43.235003 kernel: acpiphp: Slot [26] registered
Sep 12 17:09:43.235022 kernel: acpiphp: Slot [27] registered
Sep 12 17:09:43.235041 kernel: acpiphp: Slot [28] registered
Sep 12 17:09:43.235060 kernel: acpiphp: Slot [29] registered
Sep 12 17:09:43.235079 kernel: acpiphp: Slot [30] registered
Sep 12 17:09:43.235098 kernel: acpiphp: Slot [31] registered
Sep 12 17:09:43.235117 kernel: PCI host bridge to bus 0000:00
Sep 12 17:09:43.235741 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 12 17:09:43.235947 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 17:09:43.236151 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 12 17:09:43.236356 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 12 17:09:43.236632 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 12 17:09:43.236882 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 12 17:09:43.237132 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 12 17:09:43.237372 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 12 17:09:43.239709 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 12 17:09:43.242402 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 17:09:43.242688 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 12 17:09:43.242912 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 12 17:09:43.243142 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 12 17:09:43.243397 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 12 17:09:43.245794 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 17:09:43.246056 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 12 17:09:43.246271 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 12 17:09:43.246486 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 12 17:09:43.246739 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 12 17:09:43.246953 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 12 17:09:43.247148 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 12 17:09:43.247345 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 17:09:43.249604 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 12 17:09:43.249660 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 17:09:43.249681 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 17:09:43.249702 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 17:09:43.249721 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 17:09:43.249741 kernel: iommu: Default domain type: Translated
Sep 12 17:09:43.249761 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 17:09:43.249791 kernel: efivars: Registered efivars operations
Sep 12 17:09:43.249810 kernel: vgaarb: loaded
Sep 12 17:09:43.249830 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 17:09:43.249850 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:09:43.249870 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:09:43.249890 kernel: pnp: PnP ACPI init
Sep 12 17:09:43.250147 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 12 17:09:43.250176 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 17:09:43.250202 kernel: NET: Registered PF_INET protocol family
Sep 12 17:09:43.250222 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:09:43.250241 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:09:43.250260 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:09:43.250279 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:09:43.250298 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:09:43.250317 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:09:43.250336 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:09:43.250354 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:09:43.250378 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:09:43.250397 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:09:43.250416 kernel: kvm [1]: HYP mode not available
Sep 12 17:09:43.250434 kernel: Initialise system trusted keyrings
Sep 12 17:09:43.250470 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:09:43.250491 kernel: Key type asymmetric registered
Sep 12 17:09:43.250510 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:09:43.250528 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 17:09:43.250547 kernel: io scheduler mq-deadline registered
Sep 12 17:09:43.250607 kernel: io scheduler kyber registered
Sep 12 17:09:43.250628 kernel: io scheduler bfq registered
Sep 12 17:09:43.250869 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 12 17:09:43.250898 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 17:09:43.250917 kernel: ACPI: button: Power Button [PWRB]
Sep 12 17:09:43.250937 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 12 17:09:43.250955 kernel: ACPI: button: Sleep Button [SLPB]
Sep 12 17:09:43.250974 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:09:43.251000 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 12 17:09:43.251260 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 12 17:09:43.251288 kernel: printk: console [ttyS0] disabled
Sep 12 17:09:43.251308 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 12 17:09:43.251327 kernel: printk: console [ttyS0] enabled
Sep 12 17:09:43.251346 kernel: printk: bootconsole [uart0] disabled
Sep 12 17:09:43.251365 kernel: thunder_xcv, ver 1.0
Sep 12 17:09:43.251383 kernel: thunder_bgx, ver 1.0
Sep 12 17:09:43.251402 kernel: nicpf, ver 1.0
Sep 12 17:09:43.251427 kernel: nicvf, ver 1.0
Sep 12 17:09:43.253715 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 17:09:43.253937 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:09:42 UTC (1757696982)
Sep 12 17:09:43.253964 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:09:43.253984 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 12 17:09:43.254004 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 12 17:09:43.254023 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 17:09:43.254042 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:09:43.254069 kernel: Segment Routing with IPv6
Sep 12 17:09:43.254088 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:09:43.254108 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:09:43.254127 kernel: Key type dns_resolver registered
Sep 12 17:09:43.254146 kernel: registered taskstats version 1
Sep 12 17:09:43.254165 kernel: Loading compiled-in X.509 certificates
Sep 12 17:09:43.254184 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 2d576b5e69e6c5de2f731966fe8b55173c144d02'
Sep 12 17:09:43.254202 kernel: Key type .fscrypt registered
Sep 12 17:09:43.254221 kernel: Key type fscrypt-provisioning registered
Sep 12 17:09:43.254244 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:09:43.254263 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:09:43.254282 kernel: ima: No architecture policies found
Sep 12 17:09:43.254301 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 17:09:43.254320 kernel: clk: Disabling unused clocks
Sep 12 17:09:43.254338 kernel: Freeing unused kernel memory: 39488K
Sep 12 17:09:43.254357 kernel: Run /init as init process
Sep 12 17:09:43.254376 kernel: with arguments:
Sep 12 17:09:43.254394 kernel: /init
Sep 12 17:09:43.254413 kernel: with environment:
Sep 12 17:09:43.254436 kernel: HOME=/
Sep 12 17:09:43.254455 kernel: TERM=linux
Sep 12 17:09:43.254473 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:09:43.254496 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:09:43.254520 systemd[1]: Detected virtualization amazon.
Sep 12 17:09:43.254541 systemd[1]: Detected architecture arm64.
Sep 12 17:09:43.254577 systemd[1]: Running in initrd.
Sep 12 17:09:43.254610 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:09:43.254633 systemd[1]: Hostname set to <localhost>.
Sep 12 17:09:43.254655 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:09:43.254676 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:09:43.254697 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:09:43.254718 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:09:43.254739 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:09:43.254761 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:09:43.254788 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:09:43.254811 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:09:43.254835 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:09:43.254857 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:09:43.254878 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:09:43.254898 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:09:43.254919 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:09:43.254945 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:09:43.254966 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:09:43.254986 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:09:43.255007 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:09:43.255028 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:09:43.255049 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:09:43.255070 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:09:43.255093 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:09:43.255115 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:09:43.255143 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:09:43.255164 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:09:43.255186 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:09:43.255208 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:09:43.255230 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:09:43.255252 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:09:43.255273 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:09:43.255296 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:09:43.255323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:09:43.255345 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:09:43.255367 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:09:43.255389 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:09:43.255470 systemd-journald[251]: Collecting audit messages is disabled.
Sep 12 17:09:43.255527 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:09:43.255552 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:09:43.256343 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:09:43.256372 systemd-journald[251]: Journal started
Sep 12 17:09:43.256419 systemd-journald[251]: Runtime Journal (/run/log/journal/ec27e2d5e70df89055b93bb56933c96d) is 8.0M, max 75.3M, 67.3M free.
Sep 12 17:09:43.232198 systemd-modules-load[252]: Inserted module 'overlay'
Sep 12 17:09:43.272636 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:09:43.273457 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:09:43.295593 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:09:43.297347 systemd-modules-load[252]: Inserted module 'br_netfilter'
Sep 12 17:09:43.301934 kernel: Bridge firewalling registered
Sep 12 17:09:43.304092 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:09:43.318953 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:09:43.319547 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:09:43.333675 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:09:43.346628 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:09:43.373451 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:09:43.377039 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:09:43.384011 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:09:43.395894 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:09:43.407864 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:09:43.426268 dracut-cmdline[286]: dracut-dracut-053
Sep 12 17:09:43.436383 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1e63d3057914877efa0eb5f75703bd3a3d4c120bdf4a7ab97f41083e29183e56
Sep 12 17:09:43.494647 systemd-resolved[288]: Positive Trust Anchors:
Sep 12 17:09:43.494684 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:09:43.494748 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:09:43.584654 kernel: SCSI subsystem initialized
Sep 12 17:09:43.592680 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:09:43.605945 kernel: iscsi: registered transport (tcp)
Sep 12 17:09:43.627901 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:09:43.627974 kernel: QLogic iSCSI HBA Driver
Sep 12 17:09:43.717598 kernel: random: crng init done
Sep 12 17:09:43.718117 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 12 17:09:43.722084 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:09:43.730989 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:09:43.750281 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:09:43.761853 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:09:43.796950 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:09:43.797035 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:09:43.798913 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:09:43.864610 kernel: raid6: neonx8 gen() 6656 MB/s
Sep 12 17:09:43.881599 kernel: raid6: neonx4 gen() 6476 MB/s
Sep 12 17:09:43.898599 kernel: raid6: neonx2 gen() 5405 MB/s
Sep 12 17:09:43.915601 kernel: raid6: neonx1 gen() 3943 MB/s
Sep 12 17:09:43.932599 kernel: raid6: int64x8 gen() 3795 MB/s
Sep 12 17:09:43.949598 kernel: raid6: int64x4 gen() 3691 MB/s
Sep 12 17:09:43.966598 kernel: raid6: int64x2 gen() 3577 MB/s
Sep 12 17:09:43.984531 kernel: raid6: int64x1 gen() 2755 MB/s
Sep 12 17:09:43.984596 kernel: raid6: using algorithm neonx8 gen() 6656 MB/s
Sep 12 17:09:44.002782 kernel: raid6: .... xor() 4885 MB/s, rmw enabled
Sep 12 17:09:44.002823 kernel: raid6: using neon recovery algorithm
Sep 12 17:09:44.010602 kernel: xor: measuring software checksum speed
Sep 12 17:09:44.012909 kernel: 8regs : 10221 MB/sec
Sep 12 17:09:44.012942 kernel: 32regs : 11911 MB/sec
Sep 12 17:09:44.014156 kernel: arm64_neon : 9509 MB/sec
Sep 12 17:09:44.014188 kernel: xor: using function: 32regs (11911 MB/sec)
Sep 12 17:09:44.100614 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:09:44.119833 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:09:44.135882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:09:44.182415 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Sep 12 17:09:44.190560 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:09:44.212906 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:09:44.238754 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation
Sep 12 17:09:44.293713 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:09:44.306957 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:09:44.427651 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:09:44.440984 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:09:44.478358 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:09:44.485417 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:09:44.491859 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:09:44.494442 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:09:44.506082 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:09:44.545973 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:09:44.627399 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 17:09:44.627467 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 12 17:09:44.638391 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 12 17:09:44.638800 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 12 17:09:44.638133 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:09:44.638361 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:09:44.644936 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:09:44.654744 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:09:44.682290 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:bb:b1:05:2e:c9
Sep 12 17:09:44.655030 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:09:44.666371 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:09:44.686256 (udev-worker)[514]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 17:09:44.690994 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:09:44.716599 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 12 17:09:44.718628 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 12 17:09:44.727622 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 12 17:09:44.736629 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:09:44.745071 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:09:44.745109 kernel: GPT:9289727 != 16777215
Sep 12 17:09:44.745135 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:09:44.745160 kernel: GPT:9289727 != 16777215
Sep 12 17:09:44.745184 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:09:44.745208 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:09:44.754941 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:09:44.798085 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:09:44.830615 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (523)
Sep 12 17:09:44.878156 kernel: BTRFS: device fsid 5a23a06a-00d4-4606-89bf-13e31a563129 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (529)
Sep 12 17:09:44.949647 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 12 17:09:44.980291 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 12 17:09:44.998824 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 17:09:45.015452 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 12 17:09:45.018065 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 12 17:09:45.033813 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:09:45.049970 disk-uuid[661]: Primary Header is updated.
Sep 12 17:09:45.049970 disk-uuid[661]: Secondary Entries is updated.
Sep 12 17:09:45.049970 disk-uuid[661]: Secondary Header is updated.
Sep 12 17:09:45.062610 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:09:45.070599 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:09:45.079606 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:09:46.080206 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 17:09:46.083114 disk-uuid[662]: The operation has completed successfully.
Sep 12 17:09:46.266504 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:09:46.266759 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:09:46.304887 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:09:46.325361 sh[1005]: Success
Sep 12 17:09:46.350810 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 12 17:09:46.457106 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:09:46.469781 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:09:46.476695 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:09:46.502604 kernel: BTRFS info (device dm-0): first mount of filesystem 5a23a06a-00d4-4606-89bf-13e31a563129
Sep 12 17:09:46.502669 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:09:46.502696 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:09:46.502722 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:09:46.503948 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:09:46.603604 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 17:09:46.640867 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:09:46.645618 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:09:46.657955 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:09:46.667083 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:09:46.693029 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:09:46.693113 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:09:46.693145 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:09:46.701624 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:09:46.720967 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:09:46.723766 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:09:46.736640 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:09:46.751032 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:09:46.852553 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:09:46.867962 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:09:46.918353 systemd-networkd[1199]: lo: Link UP
Sep 12 17:09:46.918860 systemd-networkd[1199]: lo: Gained carrier
Sep 12 17:09:46.921632 systemd-networkd[1199]: Enumeration completed
Sep 12 17:09:46.921776 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:09:46.923587 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:09:46.923594 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:09:46.927809 systemd[1]: Reached target network.target - Network.
Sep 12 17:09:46.943701 systemd-networkd[1199]: eth0: Link UP
Sep 12 17:09:46.943713 systemd-networkd[1199]: eth0: Gained carrier
Sep 12 17:09:46.943731 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:09:46.963656 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.22.180/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 17:09:47.152595 ignition[1114]: Ignition 2.19.0
Sep 12 17:09:47.154155 ignition[1114]: Stage: fetch-offline
Sep 12 17:09:47.157494 ignition[1114]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:09:47.157545 ignition[1114]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:09:47.162191 ignition[1114]: Ignition finished successfully
Sep 12 17:09:47.166366 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:09:47.175023 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:09:47.202931 ignition[1208]: Ignition 2.19.0
Sep 12 17:09:47.202958 ignition[1208]: Stage: fetch
Sep 12 17:09:47.203685 ignition[1208]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:09:47.203711 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:09:47.203868 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:09:47.232890 ignition[1208]: PUT result: OK
Sep 12 17:09:47.237471 ignition[1208]: parsed url from cmdline: ""
Sep 12 17:09:47.237495 ignition[1208]: no config URL provided
Sep 12 17:09:47.237513 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:09:47.237540 ignition[1208]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:09:47.237595 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:09:47.241993 ignition[1208]: PUT result: OK
Sep 12 17:09:47.243927 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 12 17:09:47.252356 ignition[1208]: GET result: OK
Sep 12 17:09:47.252514 ignition[1208]: parsing config with SHA512: c6f1c63568e2b83a4e8137b9723ef42a39205cedc6b00b4ceb0e2fe724bf07827adbe05a11f46aca6f7babfa89279660274f8cd43364bf1c428bbe11ba2b2ab4
Sep 12 17:09:47.265759 unknown[1208]: fetched base config from "system"
Sep 12 17:09:47.266388 unknown[1208]: fetched base config from "system"
Sep 12 17:09:47.266413 unknown[1208]: fetched user config from "aws"
Sep 12 17:09:47.273017 ignition[1208]: fetch: fetch complete
Sep 12 17:09:47.273043 ignition[1208]: fetch: fetch passed
Sep 12 17:09:47.273143 ignition[1208]: Ignition finished successfully
Sep 12 17:09:47.281076 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:09:47.292912 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:09:47.319678 ignition[1215]: Ignition 2.19.0
Sep 12 17:09:47.319706 ignition[1215]: Stage: kargs
Sep 12 17:09:47.320343 ignition[1215]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:09:47.320368 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:09:47.320520 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:09:47.322525 ignition[1215]: PUT result: OK
Sep 12 17:09:47.333093 ignition[1215]: kargs: kargs passed
Sep 12 17:09:47.333192 ignition[1215]: Ignition finished successfully
Sep 12 17:09:47.337699 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:09:47.354961 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:09:47.379932 ignition[1221]: Ignition 2.19.0
Sep 12 17:09:47.379959 ignition[1221]: Stage: disks
Sep 12 17:09:47.381849 ignition[1221]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:09:47.381876 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:09:47.382192 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:09:47.385932 ignition[1221]: PUT result: OK
Sep 12 17:09:47.395460 ignition[1221]: disks: disks passed
Sep 12 17:09:47.395691 ignition[1221]: Ignition finished successfully
Sep 12 17:09:47.399044 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:09:47.406166 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:09:47.408667 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:09:47.411656 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:09:47.414970 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:09:47.425391 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:09:47.443510 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:09:47.488230 systemd-fsck[1229]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:09:47.495606 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:09:47.507846 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:09:47.610613 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fc6c61a7-153d-4e7f-95c0-bffdb4824d71 r/w with ordered data mode. Quota mode: none.
Sep 12 17:09:47.612397 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:09:47.615930 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:09:47.629918 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:09:47.645158 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:09:47.653101 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:09:47.653197 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:09:47.653250 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:09:47.676952 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:09:47.684643 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1248)
Sep 12 17:09:47.690588 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:09:47.690684 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:09:47.690711 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:09:47.691893 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:09:47.706601 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:09:47.707686 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:09:48.153369 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:09:48.185372 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:09:48.194880 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:09:48.203980 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:09:48.396777 systemd-networkd[1199]: eth0: Gained IPv6LL
Sep 12 17:09:48.571097 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:09:48.583963 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:09:48.591065 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:09:48.610629 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:09:48.611544 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:09:48.664031 ignition[1362]: INFO : Ignition 2.19.0
Sep 12 17:09:48.664031 ignition[1362]: INFO : Stage: mount
Sep 12 17:09:48.664031 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:09:48.664031 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:09:48.664031 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:09:48.676775 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:09:48.681925 ignition[1362]: INFO : PUT result: OK
Sep 12 17:09:48.686789 ignition[1362]: INFO : mount: mount passed
Sep 12 17:09:48.688694 ignition[1362]: INFO : Ignition finished successfully
Sep 12 17:09:48.693184 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:09:48.704932 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:09:48.724972 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:09:48.750607 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1375)
Sep 12 17:09:48.755032 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem daec7f45-8bde-44bd-bec0-4b8eac931d0c
Sep 12 17:09:48.755085 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:09:48.755112 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 17:09:48.761609 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 17:09:48.764850 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:09:48.799767 ignition[1392]: INFO : Ignition 2.19.0
Sep 12 17:09:48.799767 ignition[1392]: INFO : Stage: files
Sep 12 17:09:48.803946 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:09:48.803946 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 17:09:48.803946 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 17:09:48.811219 ignition[1392]: INFO : PUT result: OK
Sep 12 17:09:48.816117 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:09:48.820998 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:09:48.820998 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:09:48.873852 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:09:48.877056 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:09:48.880458 unknown[1392]: wrote ssh authorized keys file for user: core
Sep 12 17:09:48.883511 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:09:48.888754 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 17:09:48.888754 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 17:09:48.888754 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 17:09:48.888754 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 12 17:09:48.986381 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 17:09:49.366834 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 17:09:49.372006 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:09:49.372006 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:09:49.372006 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:09:49.372006 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:09:49.372006 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:09:49.372006 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:09:49.372006 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:09:49.372006 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:09:49.372006 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:09:49.372006 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:09:49.372006 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:09:49.372006 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:09:49.372006 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:09:49.372006 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 12 17:09:49.915865 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 17:09:50.300096 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 17:09:50.300096 ignition[1392]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 12 17:09:50.308425 ignition[1392]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 12 17:09:50.308425 ignition[1392]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 12 17:09:50.308425 ignition[1392]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 12 17:09:50.308425 ignition[1392]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 12 17:09:50.308425 ignition[1392]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:09:50.308425 ignition[1392]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:09:50.308425 ignition[1392]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 12 17:09:50.308425 ignition[1392]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:09:50.308425 ignition[1392]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:09:50.308425 ignition[1392]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:09:50.308425 ignition[1392]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:09:50.308425 ignition[1392]: INFO : files: files passed
Sep 12 17:09:50.308425 ignition[1392]: INFO : Ignition finished successfully
Sep 12 17:09:50.317726 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:09:50.342643 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:09:50.378028 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:09:50.389135 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:09:50.392923 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:09:50.415051 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:09:50.415051 initrd-setup-root-after-ignition[1421]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:09:50.425151 initrd-setup-root-after-ignition[1425]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:09:50.430699 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:09:50.436724 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:09:50.449292 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:09:50.498000 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:09:50.498417 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:09:50.507071 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:09:50.509386 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:09:50.511722 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:09:50.525816 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:09:50.556658 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:09:50.573919 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:09:50.600176 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:09:50.600583 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:09:50.601413 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:09:50.602206 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:09:50.602434 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:09:50.603821 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:09:50.604687 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:09:50.605466 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:09:50.606306 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:09:50.607108 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:09:50.607933 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:09:50.608749 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:09:50.609552 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:09:50.610374 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:09:50.611191 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:09:50.611943 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:09:50.612144 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:09:50.613632 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:09:50.614465 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:09:50.615204 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Sep 12 17:09:50.645225 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:09:50.679183 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:09:50.679413 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:09:50.702817 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:09:50.703250 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:09:50.710899 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:09:50.711110 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:09:50.728898 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:09:50.733444 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:09:50.742769 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:09:50.744153 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:09:50.755974 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:09:50.756832 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:09:50.781506 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:09:50.784833 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:09:50.787261 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:09:50.798543 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:09:50.800838 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:09:50.808741 ignition[1445]: INFO : Ignition 2.19.0 Sep 12 17:09:50.810747 ignition[1445]: INFO : Stage: umount Sep 12 17:09:50.810747 ignition[1445]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:09:50.810747 ignition[1445]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 12 17:09:50.810747 ignition[1445]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 12 17:09:50.820589 ignition[1445]: INFO : PUT result: OK Sep 12 17:09:50.826005 ignition[1445]: INFO : umount: umount passed Sep 12 17:09:50.828179 ignition[1445]: INFO : Ignition finished successfully Sep 12 17:09:50.833065 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:09:50.835270 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:09:50.838703 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:09:50.838848 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:09:50.844240 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:09:50.844709 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:09:50.848611 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:09:50.848697 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:09:50.850774 systemd[1]: Stopped target network.target - Network. Sep 12 17:09:50.854779 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:09:50.854864 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:09:50.858675 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:09:50.864915 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 12 17:09:50.871207 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:09:50.871325 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:09:50.875649 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:09:50.882006 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:09:50.882087 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:09:50.884873 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:09:50.884948 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:09:50.890955 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:09:50.891128 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:09:50.897714 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:09:50.898262 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:09:50.901149 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:09:50.901233 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:09:50.902881 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:09:50.911211 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:09:50.918703 systemd-networkd[1199]: eth0: DHCPv6 lease lost Sep 12 17:09:50.921705 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:09:50.921920 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:09:50.933964 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:09:50.934222 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:09:50.952182 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:09:50.952273 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:09:50.964886 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:09:50.971808 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:09:50.971928 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:09:50.972125 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:09:50.972200 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:09:50.972763 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:09:50.972840 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:09:50.973220 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:09:50.973291 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:09:50.974042 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:09:51.020171 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:09:51.020543 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:09:51.032891 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:09:51.033023 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:09:51.040727 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:09:51.040812 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 12 17:09:51.043106 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:09:51.043199 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:09:51.045870 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:09:51.045955 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:09:51.060982 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:09:51.061095 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:09:51.077909 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:09:51.083166 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:09:51.083288 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:09:51.094528 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 12 17:09:51.094676 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:09:51.098765 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:09:51.098858 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:09:51.102154 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:09:51.104099 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:09:51.113371 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:09:51.114609 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:09:51.117653 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:09:51.117846 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:09:51.123312 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:09:51.144980 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:09:51.198557 systemd[1]: Switching root. Sep 12 17:09:51.237358 systemd-journald[251]: Journal stopped Sep 12 17:09:53.919943 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Sep 12 17:09:53.920079 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:09:53.920124 kernel: SELinux: policy capability open_perms=1 Sep 12 17:09:53.920156 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:09:53.920186 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:09:53.920216 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:09:53.920256 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:09:53.920292 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:09:53.920321 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:09:53.920348 kernel: audit: type=1403 audit(1757696991.954:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:09:53.920382 systemd[1]: Successfully loaded SELinux policy in 80.301ms. Sep 12 17:09:53.920438 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.976ms. 
Sep 12 17:09:53.920497 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:09:53.920532 systemd[1]: Detected virtualization amazon. Sep 12 17:09:53.920601 systemd[1]: Detected architecture arm64. Sep 12 17:09:53.920641 systemd[1]: Detected first boot. Sep 12 17:09:53.920674 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:09:53.920706 zram_generator::config[1505]: No configuration found. Sep 12 17:09:53.920752 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:09:53.920784 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:09:53.920817 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 12 17:09:53.920849 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:09:53.920883 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:09:53.920924 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:09:53.920959 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:09:53.920991 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:09:53.921021 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:09:53.921054 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:09:53.921088 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:09:53.921119 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:09:53.921151 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:09:53.921185 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:09:53.921222 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:09:53.921253 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:09:53.921285 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:09:53.921314 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:09:53.921342 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:09:53.921374 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:09:53.921405 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:09:53.921437 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:09:53.921469 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:09:53.921505 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:09:53.921534 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:09:53.923893 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:09:53.923973 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Sep 12 17:09:53.924004 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 17:09:53.924034 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:09:53.924064 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:09:53.924096 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:09:53.924126 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:09:53.924165 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:09:53.924197 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:09:53.924227 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:09:53.924259 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:09:53.924291 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:09:53.924322 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:09:53.924357 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:09:53.924851 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:09:53.927617 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:09:53.927679 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:09:53.927710 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:09:53.927740 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:09:53.927770 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:09:53.927803 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:09:53.927833 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:09:53.927863 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:09:53.927895 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 12 17:09:53.927935 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 12 17:09:53.927967 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:09:53.927996 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:09:53.928026 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:09:53.928054 kernel: fuse: init (API version 7.39) Sep 12 17:09:53.928083 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:09:53.928114 kernel: loop: module loaded Sep 12 17:09:53.928142 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:09:53.928174 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:09:53.928211 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:09:53.928241 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:09:53.928270 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Sep 12 17:09:53.928300 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:09:53.928329 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:09:53.928358 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:09:53.928388 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:09:53.928417 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:09:53.928472 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:09:53.928509 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:09:53.928618 systemd-journald[1602]: Collecting audit messages is disabled. Sep 12 17:09:53.928685 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:09:53.928722 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:09:53.928757 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:09:53.928788 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:09:53.928820 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:09:53.928851 systemd-journald[1602]: Journal started Sep 12 17:09:53.928898 systemd-journald[1602]: Runtime Journal (/run/log/journal/ec27e2d5e70df89055b93bb56933c96d) is 8.0M, max 75.3M, 67.3M free. Sep 12 17:09:53.931608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:09:53.949604 kernel: ACPI: bus type drm_connector registered Sep 12 17:09:53.949702 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:09:53.948527 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:09:53.950918 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:09:53.955338 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:09:53.960558 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:09:53.970659 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:09:53.999486 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:09:54.009812 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:09:54.022512 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:09:54.028723 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:09:54.054974 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:09:54.070955 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:09:54.073722 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:09:54.088015 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:09:54.093718 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:09:54.102554 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:09:54.115904 systemd-journald[1602]: Time spent on flushing to /var/log/journal/ec27e2d5e70df89055b93bb56933c96d is 68.534ms for 892 entries. 
Sep 12 17:09:54.115904 systemd-journald[1602]: System Journal (/var/log/journal/ec27e2d5e70df89055b93bb56933c96d) is 8.0M, max 195.6M, 187.6M free. Sep 12 17:09:54.206752 systemd-journald[1602]: Received client request to flush runtime journal. Sep 12 17:09:54.118946 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:09:54.130379 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:09:54.135191 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:09:54.141959 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:09:54.181708 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:09:54.189251 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:09:54.216376 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:09:54.253678 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:09:54.270431 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:09:54.284885 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:09:54.313203 systemd-tmpfiles[1656]: ACLs are not supported, ignoring. Sep 12 17:09:54.313238 systemd-tmpfiles[1656]: ACLs are not supported, ignoring. Sep 12 17:09:54.325005 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:09:54.343177 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:09:54.346355 udevadm[1672]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 12 17:09:54.420049 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:09:54.435999 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:09:54.475743 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Sep 12 17:09:54.475783 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Sep 12 17:09:54.486042 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:09:55.255066 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:09:55.273882 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:09:55.320377 systemd-udevd[1685]: Using default interface naming scheme 'v255'. Sep 12 17:09:55.416233 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:09:55.427989 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:09:55.475804 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:09:55.563376 (udev-worker)[1692]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:09:55.568468 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Sep 12 17:09:55.610279 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:09:55.774024 systemd-networkd[1689]: lo: Link UP Sep 12 17:09:55.775073 systemd-networkd[1689]: lo: Gained carrier Sep 12 17:09:55.778188 systemd-networkd[1689]: Enumeration completed Sep 12 17:09:55.778514 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 12 17:09:55.782683 systemd-networkd[1689]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:09:55.782698 systemd-networkd[1689]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:09:55.789399 systemd-networkd[1689]: eth0: Link UP Sep 12 17:09:55.790023 systemd-networkd[1689]: eth0: Gained carrier Sep 12 17:09:55.790173 systemd-networkd[1689]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:09:55.791895 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:09:55.803704 systemd-networkd[1689]: eth0: DHCPv4 address 172.31.22.180/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 12 17:09:55.875774 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1707) Sep 12 17:09:55.889495 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:09:56.076170 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:09:56.122398 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 12 17:09:56.126021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:09:56.134842 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:09:56.182587 lvm[1814]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:09:56.218107 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:09:56.221118 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:09:56.231098 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:09:56.243972 lvm[1817]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:09:56.282439 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:09:56.289149 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:09:56.292082 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:09:56.292136 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:09:56.294728 systemd[1]: Reached target machines.target - Containers. Sep 12 17:09:56.298888 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 17:09:56.309888 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:09:56.315884 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:09:56.318587 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:09:56.326848 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:09:56.336905 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:09:56.344876 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Sep 12 17:09:56.352983 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:09:56.383462 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:09:56.388007 kernel: loop0: detected capacity change from 0 to 52536 Sep 12 17:09:56.390540 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 17:09:56.405924 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:09:56.440012 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:09:56.474717 kernel: loop1: detected capacity change from 0 to 114328 Sep 12 17:09:56.579668 kernel: loop2: detected capacity change from 0 to 114432 Sep 12 17:09:56.711607 kernel: loop3: detected capacity change from 0 to 203944 Sep 12 17:09:56.844633 kernel: loop4: detected capacity change from 0 to 52536 Sep 12 17:09:56.865809 kernel: loop5: detected capacity change from 0 to 114328 Sep 12 17:09:56.885604 kernel: loop6: detected capacity change from 0 to 114432 Sep 12 17:09:56.901607 kernel: loop7: detected capacity change from 0 to 203944 Sep 12 17:09:56.931878 (sd-merge)[1839]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 12 17:09:56.932889 (sd-merge)[1839]: Merged extensions into '/usr'. Sep 12 17:09:56.941855 systemd[1]: Reloading requested from client PID 1825 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:09:56.941887 systemd[1]: Reloading... Sep 12 17:09:57.054605 zram_generator::config[1870]: No configuration found. Sep 12 17:09:57.339606 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:09:57.492802 systemd[1]: Reloading finished in 550 ms. Sep 12 17:09:57.516438 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:09:57.529888 systemd[1]: Starting ensure-sysext.service... Sep 12 17:09:57.540847 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:09:57.557705 systemd[1]: Reloading requested from client PID 1924 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:09:57.557908 systemd[1]: Reloading... Sep 12 17:09:57.610929 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:09:57.612598 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:09:57.617330 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:09:57.617939 systemd-tmpfiles[1925]: ACLs are not supported, ignoring. Sep 12 17:09:57.618073 systemd-tmpfiles[1925]: ACLs are not supported, ignoring. Sep 12 17:09:57.630310 systemd-tmpfiles[1925]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:09:57.630341 systemd-tmpfiles[1925]: Skipping /boot Sep 12 17:09:57.647151 ldconfig[1821]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:09:57.666194 systemd-tmpfiles[1925]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:09:57.666228 systemd-tmpfiles[1925]: Skipping /boot Sep 12 17:09:57.754630 zram_generator::config[1959]: No configuration found. 
Sep 12 17:09:57.804752 systemd-networkd[1689]: eth0: Gained IPv6LL Sep 12 17:09:57.987902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:09:58.140013 systemd[1]: Reloading finished in 581 ms. Sep 12 17:09:58.167986 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:09:58.171787 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:09:58.182177 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:09:58.201924 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:09:58.210127 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:09:58.223858 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:09:58.234325 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:09:58.246432 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:09:58.272329 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:09:58.281304 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:09:58.301050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:09:58.323333 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:09:58.327968 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:09:58.351031 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:09:58.351384 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:09:58.362942 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:09:58.363331 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:09:58.375274 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:09:58.375689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:09:58.392895 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:09:58.417902 augenrules[2051]: No rules Sep 12 17:09:58.419010 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:09:58.427523 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:09:58.444013 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:09:58.460799 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:09:58.481061 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:09:58.484817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:09:58.485184 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:09:58.500340 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:09:58.509602 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Sep 12 17:09:58.513228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:09:58.513557 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:09:58.516973 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:09:58.517309 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:09:58.535811 systemd[1]: Finished ensure-sysext.service. Sep 12 17:09:58.541551 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:09:58.541997 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:09:58.547769 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:09:58.550015 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:09:58.569511 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:09:58.583307 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:09:58.583499 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:09:58.594006 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:09:58.596300 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:09:58.613553 systemd-resolved[2028]: Positive Trust Anchors: Sep 12 17:09:58.613607 systemd-resolved[2028]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:09:58.613674 systemd-resolved[2028]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:09:58.627343 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:09:58.631631 systemd-resolved[2028]: Defaulting to hostname 'linux'. Sep 12 17:09:58.635818 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:09:58.638467 systemd[1]: Reached target network.target - Network. Sep 12 17:09:58.640642 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:09:58.643152 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:09:58.646008 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:09:58.648536 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:09:58.651262 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:09:58.654402 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:09:58.656955 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Sep 12 17:09:58.659705 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:09:58.662456 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:09:58.662683 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:09:58.664864 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:09:58.668531 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:09:58.674185 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:09:58.679953 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:09:58.686583 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:09:58.689081 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:09:58.691232 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:09:58.693583 systemd[1]: System is tainted: cgroupsv1 Sep 12 17:09:58.693785 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:09:58.693839 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:09:58.697791 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:09:58.712929 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:09:58.718846 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:09:58.724750 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:09:58.737885 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:09:58.740471 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:09:58.753760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:09:58.766390 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:09:58.782155 jq[2086]: false Sep 12 17:09:58.786233 systemd[1]: Started ntpd.service - Network Time Service. Sep 12 17:09:58.805548 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:09:58.814811 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:09:58.830801 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 12 17:09:58.846202 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:09:58.867882 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:09:58.888528 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:09:58.897430 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:09:58.915205 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:09:58.935737 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Sep 12 17:09:58.937803 extend-filesystems[2087]: Found loop4 Sep 12 17:09:58.937803 extend-filesystems[2087]: Found loop5 Sep 12 17:09:58.937803 extend-filesystems[2087]: Found loop6 Sep 12 17:09:58.937803 extend-filesystems[2087]: Found loop7 Sep 12 17:09:58.937803 extend-filesystems[2087]: Found nvme0n1 Sep 12 17:09:58.937803 extend-filesystems[2087]: Found nvme0n1p1 Sep 12 17:09:58.937803 extend-filesystems[2087]: Found nvme0n1p2 Sep 12 17:09:58.937803 extend-filesystems[2087]: Found nvme0n1p3 Sep 12 17:09:58.937803 extend-filesystems[2087]: Found usr Sep 12 17:09:58.937803 extend-filesystems[2087]: Found nvme0n1p4 Sep 12 17:09:58.937803 extend-filesystems[2087]: Found nvme0n1p6 Sep 12 17:09:58.937803 extend-filesystems[2087]: Found nvme0n1p7 Sep 12 17:09:58.937803 extend-filesystems[2087]: Found nvme0n1p9 Sep 12 17:09:58.937803 extend-filesystems[2087]: Checking size of /dev/nvme0n1p9 Sep 12 17:09:58.964084 dbus-daemon[2085]: [system] SELinux support is enabled Sep 12 17:09:58.979271 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:09:59.026850 dbus-daemon[2085]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1689 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 17:09:59.004288 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:09:59.007857 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:09:59.036061 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:09:59.059012 jq[2113]: true Sep 12 17:09:59.039222 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:09:59.108765 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:09:59.109337 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:09:59.123979 extend-filesystems[2087]: Resized partition /dev/nvme0n1p9 Sep 12 17:09:59.150419 extend-filesystems[2138]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:09:59.154264 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:09:59.179006 coreos-metadata[2083]: Sep 12 17:09:59.175 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:09:59.176558 (ntainerd)[2128]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:09:59.197004 jq[2126]: true Sep 12 17:09:59.197359 coreos-metadata[2083]: Sep 12 17:09:59.188 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 12 17:09:59.194802 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:09:59.213662 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 12 17:09:59.194884 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 12 17:09:59.217715 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:26:25 UTC 2025 (1): Starting Sep 12 17:09:59.217715 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:09:59.217715 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: ---------------------------------------------------- Sep 12 17:09:59.217715 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:09:59.217715 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:09:59.217715 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: corporation. Support and training for ntp-4 are Sep 12 17:09:59.217715 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: available at https://www.nwtime.org/support Sep 12 17:09:59.217715 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: ---------------------------------------------------- Sep 12 17:09:59.206016 ntpd[2090]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 15:26:25 UTC 2025 (1): Starting Sep 12 17:09:59.200855 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:09:59.235181 coreos-metadata[2083]: Sep 12 17:09:59.221 INFO Fetch successful Sep 12 17:09:59.235181 coreos-metadata[2083]: Sep 12 17:09:59.221 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 12 17:09:59.206068 ntpd[2090]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 12 17:09:59.235385 update_engine[2107]: I20250912 17:09:59.220907 2107 main.cc:92] Flatcar Update Engine starting Sep 12 17:09:59.200907 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:09:59.206090 ntpd[2090]: ---------------------------------------------------- Sep 12 17:09:59.268074 tar[2122]: linux-arm64/helm Sep 12 17:09:59.268524 coreos-metadata[2083]: Sep 12 17:09:59.241 INFO Fetch successful Sep 12 17:09:59.268524 coreos-metadata[2083]: Sep 12 17:09:59.241 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 12 17:09:59.268524 coreos-metadata[2083]: Sep 12 17:09:59.243 INFO Fetch successful Sep 12 17:09:59.268524 coreos-metadata[2083]: Sep 12 17:09:59.243 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 12 17:09:59.268524 coreos-metadata[2083]: Sep 12 17:09:59.252 INFO Fetch successful Sep 12 17:09:59.268524 coreos-metadata[2083]: Sep 12 17:09:59.254 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 12 17:09:59.268963 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: proto: precision = 0.096 usec (-23) Sep 12 17:09:59.268963 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: basedate set to 2025-08-31 Sep 12 17:09:59.268963 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: gps base set to 2025-08-31 (week 2382) Sep 12 17:09:59.257203 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Sep 12 17:09:59.206111 ntpd[2090]: ntp-4 is maintained by Network Time Foundation, Sep 12 17:09:59.290626 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:09:59.290626 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:09:59.290782 coreos-metadata[2083]: Sep 12 17:09:59.288 INFO Fetch failed with 404: resource not found Sep 12 17:09:59.290782 coreos-metadata[2083]: Sep 12 17:09:59.288 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 12 17:09:59.290782 coreos-metadata[2083]: Sep 12 17:09:59.290 INFO Fetch successful Sep 12 17:09:59.290782 coreos-metadata[2083]: Sep 12 17:09:59.290 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 12 17:09:59.206131 ntpd[2090]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 12 17:09:59.206155 ntpd[2090]: corporation. Support and training for ntp-4 are Sep 12 17:09:59.206174 ntpd[2090]: available at https://www.nwtime.org/support Sep 12 17:09:59.206193 ntpd[2090]: ---------------------------------------------------- Sep 12 17:09:59.219832 dbus-daemon[2085]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 17:09:59.250015 ntpd[2090]: proto: precision = 0.096 usec (-23) Sep 12 17:09:59.250511 ntpd[2090]: basedate set to 2025-08-31 Sep 12 17:09:59.323027 update_engine[2107]: I20250912 17:09:59.307062 2107 update_check_scheduler.cc:74] Next update check in 9m51s Sep 12 17:09:59.294555 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:09:59.323220 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:09:59.323220 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: Listen normally on 3 eth0 172.31.22.180:123 Sep 12 17:09:59.323220 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: Listen normally on 4 lo [::1]:123 Sep 12 17:09:59.323220 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: Listen normally on 5 eth0 [fe80::4bb:b1ff:fe05:2ec9%2]:123 Sep 12 17:09:59.323220 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: Listening on routing socket on fd #22 for interface updates Sep 12 17:09:59.323444 coreos-metadata[2083]: Sep 12 17:09:59.293 INFO Fetch successful Sep 12 17:09:59.323444 coreos-metadata[2083]: Sep 12 17:09:59.294 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 12 17:09:59.323444 coreos-metadata[2083]: Sep 12 17:09:59.306 INFO Fetch successful Sep 12 17:09:59.323444 coreos-metadata[2083]: Sep 12 17:09:59.306 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 12 17:09:59.323444 coreos-metadata[2083]: Sep 12 17:09:59.308 INFO Fetch successful Sep 12 17:09:59.323444 coreos-metadata[2083]: Sep 12 17:09:59.308 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 12 17:09:59.323444 coreos-metadata[2083]: Sep 12 17:09:59.314 INFO Fetch successful Sep 12 17:09:59.250545 ntpd[2090]: gps base set to 2025-08-31 (week 2382) Sep 12 17:09:59.298040 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:09:59.284148 ntpd[2090]: Listen and drop on 0 v6wildcard [::]:123 Sep 12 17:09:59.299850 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 12 17:09:59.284233 ntpd[2090]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 12 17:09:59.284551 ntpd[2090]: Listen normally on 2 lo 127.0.0.1:123 Sep 12 17:09:59.297755 ntpd[2090]: Listen normally on 3 eth0 172.31.22.180:123 Sep 12 17:09:59.297833 ntpd[2090]: Listen normally on 4 lo [::1]:123 Sep 12 17:09:59.297914 ntpd[2090]: Listen normally on 5 eth0 [fe80::4bb:b1ff:fe05:2ec9%2]:123 Sep 12 17:09:59.310699 ntpd[2090]: Listening on routing socket on fd #22 for interface updates Sep 12 17:09:59.375674 ntpd[2090]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:09:59.377287 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:09:59.377287 ntpd[2090]: 12 Sep 17:09:59 ntpd[2090]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:09:59.375747 ntpd[2090]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 12 17:09:59.408743 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 12 17:09:59.419223 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 12 17:09:59.455131 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 12 17:09:59.468991 extend-filesystems[2138]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 12 17:09:59.468991 extend-filesystems[2138]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:09:59.468991 extend-filesystems[2138]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 12 17:09:59.461605 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:09:59.494854 extend-filesystems[2087]: Resized filesystem in /dev/nvme0n1p9 Sep 12 17:09:59.462244 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:09:59.573548 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 17:09:59.578955 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:09:59.680782 bash[2196]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:09:59.683304 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:09:59.706590 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (2198) Sep 12 17:09:59.742137 locksmithd[2159]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:09:59.745166 systemd[1]: Starting sshkeys.service... Sep 12 17:09:59.764152 systemd-logind[2103]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 17:09:59.764195 systemd-logind[2103]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 12 17:09:59.765607 systemd-logind[2103]: New seat seat0. Sep 12 17:09:59.768153 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:09:59.799339 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 17:09:59.813187 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 17:09:59.829143 amazon-ssm-agent[2173]: Initializing new seelog logger Sep 12 17:09:59.852695 amazon-ssm-agent[2173]: New Seelog Logger Creation Complete Sep 12 17:09:59.852695 amazon-ssm-agent[2173]: 2025/09/12 17:09:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:09:59.852695 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 12 17:09:59.852695 amazon-ssm-agent[2173]: 2025/09/12 17:09:59 processing appconfig overrides Sep 12 17:09:59.852695 amazon-ssm-agent[2173]: 2025/09/12 17:09:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:09:59.852695 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:09:59.852695 amazon-ssm-agent[2173]: 2025-09-12 17:09:59 INFO Proxy environment variables: Sep 12 17:09:59.852695 amazon-ssm-agent[2173]: 2025/09/12 17:09:59 processing appconfig overrides Sep 12 17:09:59.852695 amazon-ssm-agent[2173]: 2025/09/12 17:09:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:09:59.852695 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:09:59.852695 amazon-ssm-agent[2173]: 2025/09/12 17:09:59 processing appconfig overrides Sep 12 17:09:59.866773 amazon-ssm-agent[2173]: 2025/09/12 17:09:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:09:59.866773 amazon-ssm-agent[2173]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 17:09:59.866773 amazon-ssm-agent[2173]: 2025/09/12 17:09:59 processing appconfig overrides Sep 12 17:09:59.946602 amazon-ssm-agent[2173]: 2025-09-12 17:09:59 INFO https_proxy: Sep 12 17:10:00.055503 amazon-ssm-agent[2173]: 2025-09-12 17:09:59 INFO http_proxy: Sep 12 17:10:00.153730 amazon-ssm-agent[2173]: 2025-09-12 17:09:59 INFO no_proxy: Sep 12 17:10:00.205595 containerd[2128]: time="2025-09-12T17:10:00.202547315Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:10:00.259612 amazon-ssm-agent[2173]: 2025-09-12 17:09:59 INFO Checking if agent identity type OnPrem can be assumed Sep 12 17:10:00.308878 coreos-metadata[2230]: Sep 12 17:10:00.308 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 17:10:00.315883 coreos-metadata[2230]: Sep 12 17:10:00.314 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 17:10:00.321602 coreos-metadata[2230]: Sep 12 17:10:00.320 INFO Fetch successful Sep 12 17:10:00.321602 coreos-metadata[2230]: Sep 12 17:10:00.320 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 17:10:00.326622 coreos-metadata[2230]: Sep 12 17:10:00.326 INFO Fetch successful Sep 12 17:10:00.332693 unknown[2230]: wrote ssh authorized keys file for user: core Sep 12 17:10:00.361744 amazon-ssm-agent[2173]: 2025-09-12 17:09:59 INFO Checking if agent identity type EC2 can be assumed Sep 12 17:10:00.384498 containerd[2128]: time="2025-09-12T17:10:00.383729820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:00.394684 containerd[2128]: time="2025-09-12T17:10:00.394594752Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:10:00.394684 containerd[2128]: time="2025-09-12T17:10:00.394673640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:10:00.394868 containerd[2128]: time="2025-09-12T17:10:00.394725888Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Sep 12 17:10:00.395101 containerd[2128]: time="2025-09-12T17:10:00.395056944Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:10:00.395166 containerd[2128]: time="2025-09-12T17:10:00.395114844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:00.395291 containerd[2128]: time="2025-09-12T17:10:00.395245008Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:10:00.395361 containerd[2128]: time="2025-09-12T17:10:00.395287176Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:00.396449 containerd[2128]: time="2025-09-12T17:10:00.395681496Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:10:00.396449 containerd[2128]: time="2025-09-12T17:10:00.395732208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:00.396449 containerd[2128]: time="2025-09-12T17:10:00.395768232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:10:00.396449 containerd[2128]: time="2025-09-12T17:10:00.395793348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:00.396449 containerd[2128]: time="2025-09-12T17:10:00.395979012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:00.396449 containerd[2128]: time="2025-09-12T17:10:00.396386268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:10:00.411618 containerd[2128]: time="2025-09-12T17:10:00.408994176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:10:00.411618 containerd[2128]: time="2025-09-12T17:10:00.409057656Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:10:00.411618 containerd[2128]: time="2025-09-12T17:10:00.409283424Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:10:00.411618 containerd[2128]: time="2025-09-12T17:10:00.409382148Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:10:00.425626 dbus-daemon[2085]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 17:10:00.427148 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 17:10:00.431619 containerd[2128]: time="2025-09-12T17:10:00.431177041Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:10:00.431619 containerd[2128]: time="2025-09-12T17:10:00.431298733Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Sep 12 17:10:00.431619 containerd[2128]: time="2025-09-12T17:10:00.431430625Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:10:00.431619 containerd[2128]: time="2025-09-12T17:10:00.431471869Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:10:00.431619 containerd[2128]: time="2025-09-12T17:10:00.431509741Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:10:00.431925 containerd[2128]: time="2025-09-12T17:10:00.431826961Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:10:00.434895 containerd[2128]: time="2025-09-12T17:10:00.434455357Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:10:00.437812 dbus-daemon[2085]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2157 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 17:10:00.443052 containerd[2128]: time="2025-09-12T17:10:00.442970557Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:10:00.443052 containerd[2128]: time="2025-09-12T17:10:00.443041717Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:10:00.443204 containerd[2128]: time="2025-09-12T17:10:00.443076001Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:10:00.443204 containerd[2128]: time="2025-09-12T17:10:00.443109673Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:10:00.443204 containerd[2128]: time="2025-09-12T17:10:00.443145313Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:10:00.443204 containerd[2128]: time="2025-09-12T17:10:00.443178445Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:10:00.443404 containerd[2128]: time="2025-09-12T17:10:00.443213425Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:10:00.443404 containerd[2128]: time="2025-09-12T17:10:00.443246989Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:10:00.443404 containerd[2128]: time="2025-09-12T17:10:00.443282545Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:10:00.443404 containerd[2128]: time="2025-09-12T17:10:00.443311537Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:10:00.443404 containerd[2128]: time="2025-09-12T17:10:00.443342473Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:10:00.443404 containerd[2128]: time="2025-09-12T17:10:00.443385145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Sep 12 17:10:00.443678 containerd[2128]: time="2025-09-12T17:10:00.443417905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:10:00.443678 containerd[2128]: time="2025-09-12T17:10:00.443447665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:10:00.443678 containerd[2128]: time="2025-09-12T17:10:00.443477953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:10:00.443678 containerd[2128]: time="2025-09-12T17:10:00.443506897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:10:00.443678 containerd[2128]: time="2025-09-12T17:10:00.443537125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:10:00.454241 systemd[1]: Starting polkit.service - Authorization Manager... Sep 12 17:10:00.457724 containerd[2128]: time="2025-09-12T17:10:00.457645921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:10:00.457848 containerd[2128]: time="2025-09-12T17:10:00.457730797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:10:00.457848 containerd[2128]: time="2025-09-12T17:10:00.457769113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:10:00.457848 containerd[2128]: time="2025-09-12T17:10:00.457806745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:10:00.457976 containerd[2128]: time="2025-09-12T17:10:00.457846345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:10:00.457976 containerd[2128]: time="2025-09-12T17:10:00.457880005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:10:00.460428 update-ssh-keys[2307]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:10:00.463711 amazon-ssm-agent[2173]: 2025-09-12 17:10:00 INFO Agent will take identity from EC2 Sep 12 17:10:00.472002 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:10:00.481790 containerd[2128]: time="2025-09-12T17:10:00.480652009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:10:00.484016 containerd[2128]: time="2025-09-12T17:10:00.483422581Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:10:00.484256 containerd[2128]: time="2025-09-12T17:10:00.484197637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:10:00.488440 containerd[2128]: time="2025-09-12T17:10:00.488314801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:10:00.488560 containerd[2128]: time="2025-09-12T17:10:00.488429893Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:10:00.501373 containerd[2128]: time="2025-09-12T17:10:00.490965145Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Sep 12 17:10:00.501373 containerd[2128]: time="2025-09-12T17:10:00.499716865Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:10:00.501373 containerd[2128]: time="2025-09-12T17:10:00.499767289Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:10:00.501373 containerd[2128]: time="2025-09-12T17:10:00.499805365Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:10:00.501373 containerd[2128]: time="2025-09-12T17:10:00.499830889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:10:00.501373 containerd[2128]: time="2025-09-12T17:10:00.499864333Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:10:00.501373 containerd[2128]: time="2025-09-12T17:10:00.499889101Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:10:00.501373 containerd[2128]: time="2025-09-12T17:10:00.499917781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 17:10:00.501873 containerd[2128]: time="2025-09-12T17:10:00.500652901Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:10:00.501873 containerd[2128]: time="2025-09-12T17:10:00.500764753Z" level=info msg="Connect containerd service" Sep 12 17:10:00.501873 containerd[2128]: time="2025-09-12T17:10:00.500825797Z" level=info msg="using legacy CRI server" Sep 12 17:10:00.501873 containerd[2128]: time="2025-09-12T17:10:00.500843173Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:10:00.501873 containerd[2128]: time="2025-09-12T17:10:00.500988997Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:10:00.501973 systemd[1]: Finished sshkeys.service. Sep 12 17:10:00.527848 containerd[2128]: time="2025-09-12T17:10:00.523790029Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:10:00.527848 containerd[2128]: time="2025-09-12T17:10:00.524662933Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:10:00.527848 containerd[2128]: time="2025-09-12T17:10:00.524769001Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:10:00.527848 containerd[2128]: time="2025-09-12T17:10:00.524975677Z" level=info msg="Start subscribing containerd event" Sep 12 17:10:00.527848 containerd[2128]: time="2025-09-12T17:10:00.525061573Z" level=info msg="Start recovering state" Sep 12 17:10:00.527848 containerd[2128]: time="2025-09-12T17:10:00.525180757Z" level=info msg="Start event monitor" Sep 12 17:10:00.527848 containerd[2128]: time="2025-09-12T17:10:00.525206605Z" level=info msg="Start snapshots syncer" Sep 12 17:10:00.527848 containerd[2128]: time="2025-09-12T17:10:00.525231205Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:10:00.527848 containerd[2128]: time="2025-09-12T17:10:00.525252433Z" level=info msg="Start streaming server" Sep 12 17:10:00.525677 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:10:00.558321 polkitd[2320]: Started polkitd version 121 Sep 12 17:10:00.562580 containerd[2128]: time="2025-09-12T17:10:00.562174225Z" level=info msg="containerd successfully booted in 0.365225s" Sep 12 17:10:00.562835 amazon-ssm-agent[2173]: 2025-09-12 17:10:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:10:00.598395 polkitd[2320]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 17:10:00.598522 polkitd[2320]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 17:10:00.603034 polkitd[2320]: Finished loading, compiling and executing 2 rules Sep 12 17:10:00.613949 dbus-daemon[2085]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 17:10:00.615108 polkitd[2320]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 17:10:00.615692 systemd[1]: Started polkit.service - Authorization Manager. 
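The "failed to load cni during init" error above is expected this early in boot: containerd's CRI plugin found nothing in the NetworkPluginConfDir (/etc/cni/net.d) named in the config dump, because no CNI plugin has been installed yet. A sketch of the equivalent check; the handled file suffixes are an assumption based on libcni's conventions:

```python
# Check the CNI conf dir the way the CRI plugin does at startup.
import json
from pathlib import Path

CNI_DIR = Path("/etc/cni/net.d")  # NetworkPluginConfDir from the config dump above

def cni_configs(conf_dir: Path):
    # libcni looks for .conf, .conflist and .json files; all are JSON documents.
    found = []
    for p in sorted(conf_dir.glob("*")):
        if p.suffix in {".conf", ".conflist", ".json"}:
            found.append((p.name, json.loads(p.read_text())))
    return found

if __name__ == "__main__":
    configs = cni_configs(CNI_DIR)
    if not configs:
        print(f"no network config found in {CNI_DIR}")  # matches the log message
    for name, conf in configs:
        print(name, conf.get("name"), conf.get("cniVersion"))
```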
Sep 12 17:10:00.661936 amazon-ssm-agent[2173]: 2025-09-12 17:10:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:10:00.700062 systemd-resolved[2028]: System hostname changed to 'ip-172-31-22-180'. Sep 12 17:10:00.701986 systemd-hostnamed[2157]: Hostname set to (transient) Sep 12 17:10:00.762585 amazon-ssm-agent[2173]: 2025-09-12 17:10:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 17:10:00.861779 amazon-ssm-agent[2173]: 2025-09-12 17:10:00 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 12 17:10:00.919253 amazon-ssm-agent[2173]: 2025-09-12 17:10:00 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 12 17:10:00.919253 amazon-ssm-agent[2173]: 2025-09-12 17:10:00 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 17:10:00.919253 amazon-ssm-agent[2173]: 2025-09-12 17:10:00 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 12 17:10:00.919875 amazon-ssm-agent[2173]: 2025-09-12 17:10:00 INFO [Registrar] Starting registrar module Sep 12 17:10:00.919964 amazon-ssm-agent[2173]: 2025-09-12 17:10:00 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 12 17:10:00.920082 amazon-ssm-agent[2173]: 2025-09-12 17:10:00 INFO [EC2Identity] EC2 registration was successful. Sep 12 17:10:00.920443 amazon-ssm-agent[2173]: 2025-09-12 17:10:00 INFO [CredentialRefresher] credentialRefresher has started Sep 12 17:10:00.920443 amazon-ssm-agent[2173]: 2025-09-12 17:10:00 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 17:10:00.920443 amazon-ssm-agent[2173]: 2025-09-12 17:10:00 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 17:10:00.961485 amazon-ssm-agent[2173]: 2025-09-12 17:10:00 INFO [CredentialRefresher] Next credential rotation will be in 30.841631859366668 minutes Sep 12 17:10:01.250954 tar[2122]: linux-arm64/LICENSE Sep 12 17:10:01.250954 tar[2122]: linux-arm64/README.md Sep 12 17:10:01.279493 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:10:01.550589 sshd_keygen[2145]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:10:01.591341 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:10:01.603250 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:10:01.627173 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:10:01.627914 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:10:01.646589 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:10:01.666269 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:10:01.681703 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:10:01.692708 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:10:01.696546 systemd[1]: Reached target getty.target - Login Prompts. 
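The sshd-keygen run above creates the host keys that are missing on first boot (RSA, ECDSA, ED25519). `ssh-keygen -A` does the same thing in one call; this sketch generates the three key types named in the log individually, assuming the stock /etc/ssh key paths:

```python
# Generate missing OpenSSH host keys, mirroring the sshd-keygen step above.
import subprocess
from pathlib import Path

def generate_host_keys(keydir: str = "/etc/ssh") -> None:
    for ktype in ("rsa", "ecdsa", "ed25519"):
        key = Path(keydir) / f"ssh_host_{ktype}_key"
        if not key.exists():
            # -N "" -> empty passphrase, -q -> quiet; same net effect per key
            # type as a single `ssh-keygen -A`.
            subprocess.run(
                ["ssh-keygen", "-q", "-t", ktype, "-f", str(key), "-N", ""],
                check=True,
            )

if __name__ == "__main__":
    generate_host_keys()
```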
Sep 12 17:10:01.947332 amazon-ssm-agent[2173]: 2025-09-12 17:10:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 17:10:02.049813 amazon-ssm-agent[2173]: 2025-09-12 17:10:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2369) started Sep 12 17:10:02.149958 amazon-ssm-agent[2173]: 2025-09-12 17:10:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 17:10:03.096894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:03.103791 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:10:03.107465 systemd[1]: Startup finished in 10.292s (kernel) + 11.233s (userspace) = 21.525s. Sep 12 17:10:03.109373 (kubelet)[2388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:10:04.744353 kubelet[2388]: E0912 17:10:04.744266 2388 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:10:04.749889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:10:04.750293 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:10:06.600799 systemd-resolved[2028]: Clock change detected. Flushing caches. Sep 12 17:10:08.290191 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:10:08.300130 systemd[1]: Started sshd@0-172.31.22.180:22-147.75.109.163:59384.service - OpenSSH per-connection server daemon (147.75.109.163:59384). Sep 12 17:10:08.477786 sshd[2400]: Accepted publickey for core from 147.75.109.163 port 59384 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:08.481971 sshd[2400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:08.503413 systemd-logind[2103]: New session 1 of user core. Sep 12 17:10:08.504737 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:10:08.517118 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:10:08.539768 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:10:08.558215 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:10:08.564333 (systemd)[2406]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:10:08.793035 systemd[2406]: Queued start job for default target default.target. Sep 12 17:10:08.793841 systemd[2406]: Created slice app.slice - User Application Slice. Sep 12 17:10:08.793901 systemd[2406]: Reached target paths.target - Paths. Sep 12 17:10:08.793934 systemd[2406]: Reached target timers.target - Timers. Sep 12 17:10:08.800886 systemd[2406]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:10:08.826116 systemd[2406]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:10:08.826227 systemd[2406]: Reached target sockets.target - Sockets. Sep 12 17:10:08.826260 systemd[2406]: Reached target basic.target - Basic System. 
Sep 12 17:10:08.826341 systemd[2406]: Reached target default.target - Main User Target. Sep 12 17:10:08.826400 systemd[2406]: Startup finished in 249ms. Sep 12 17:10:08.826565 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:10:08.840343 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:10:08.990567 systemd[1]: Started sshd@1-172.31.22.180:22-147.75.109.163:59400.service - OpenSSH per-connection server daemon (147.75.109.163:59400). Sep 12 17:10:09.162623 sshd[2418]: Accepted publickey for core from 147.75.109.163 port 59400 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:09.165182 sshd[2418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:09.173054 systemd-logind[2103]: New session 2 of user core. Sep 12 17:10:09.184299 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:10:09.310024 sshd[2418]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:09.317360 systemd-logind[2103]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:10:09.318732 systemd[1]: sshd@1-172.31.22.180:22-147.75.109.163:59400.service: Deactivated successfully. Sep 12 17:10:09.324218 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:10:09.325828 systemd-logind[2103]: Removed session 2. Sep 12 17:10:09.340224 systemd[1]: Started sshd@2-172.31.22.180:22-147.75.109.163:59402.service - OpenSSH per-connection server daemon (147.75.109.163:59402). Sep 12 17:10:09.511910 sshd[2426]: Accepted publickey for core from 147.75.109.163 port 59402 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:09.514668 sshd[2426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:09.523889 systemd-logind[2103]: New session 3 of user core. Sep 12 17:10:09.531234 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:10:09.653027 sshd[2426]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:09.660323 systemd[1]: sshd@2-172.31.22.180:22-147.75.109.163:59402.service: Deactivated successfully. Sep 12 17:10:09.665802 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:10:09.667097 systemd-logind[2103]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:10:09.669095 systemd-logind[2103]: Removed session 3. Sep 12 17:10:09.681212 systemd[1]: Started sshd@3-172.31.22.180:22-147.75.109.163:59410.service - OpenSSH per-connection server daemon (147.75.109.163:59410). Sep 12 17:10:09.858149 sshd[2434]: Accepted publickey for core from 147.75.109.163 port 59410 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:09.860773 sshd[2434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:09.869878 systemd-logind[2103]: New session 4 of user core. Sep 12 17:10:09.876200 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:10:10.004018 sshd[2434]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:10.013380 systemd[1]: sshd@3-172.31.22.180:22-147.75.109.163:59410.service: Deactivated successfully. Sep 12 17:10:10.017105 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:10:10.018536 systemd-logind[2103]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:10:10.020194 systemd-logind[2103]: Removed session 4. 
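The "SHA256:Mtue..." strings in the sshd accept lines above are OpenSSH public-key fingerprints: the SHA-256 digest of the raw key blob, base64-encoded with the trailing padding stripped. Recomputing one from an authorized_keys-style line:

```python
# Reproduce the SHA256:... fingerprint format sshd logs for accepted keys.
import base64
import hashlib

def ssh_fingerprint(pubkey_line: str) -> str:
    # An OpenSSH public key line looks like: "ssh-rsa AAAAB3... comment";
    # the fingerprint is taken over the decoded key blob (second field).
    blob_b64 = pubkey_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    # Path assumed from the log's authorized_keys updates; any key line works.
    line = open("/home/core/.ssh/authorized_keys").readline()
    print(ssh_fingerprint(line))
```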
Sep 12 17:10:10.032209 systemd[1]: Started sshd@4-172.31.22.180:22-147.75.109.163:51446.service - OpenSSH per-connection server daemon (147.75.109.163:51446). Sep 12 17:10:10.209973 sshd[2442]: Accepted publickey for core from 147.75.109.163 port 51446 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:10.212559 sshd[2442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:10.220026 systemd-logind[2103]: New session 5 of user core. Sep 12 17:10:10.230258 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:10:10.381097 sudo[2446]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:10:10.381736 sudo[2446]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:10:10.411799 sudo[2446]: pam_unix(sudo:session): session closed for user root Sep 12 17:10:10.435368 sshd[2442]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:10.443048 systemd-logind[2103]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:10:10.443536 systemd[1]: sshd@4-172.31.22.180:22-147.75.109.163:51446.service: Deactivated successfully. Sep 12 17:10:10.448589 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:10:10.450774 systemd-logind[2103]: Removed session 5. Sep 12 17:10:10.468275 systemd[1]: Started sshd@5-172.31.22.180:22-147.75.109.163:51462.service - OpenSSH per-connection server daemon (147.75.109.163:51462). Sep 12 17:10:10.638086 sshd[2451]: Accepted publickey for core from 147.75.109.163 port 51462 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:10.640757 sshd[2451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:10.649264 systemd-logind[2103]: New session 6 of user core. Sep 12 17:10:10.665183 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:10:10.772604 sudo[2456]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:10:10.773525 sudo[2456]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:10:10.779546 sudo[2456]: pam_unix(sudo:session): session closed for user root Sep 12 17:10:10.789297 sudo[2455]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:10:10.789929 sudo[2455]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:10:10.817152 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:10:10.820578 auditctl[2459]: No rules Sep 12 17:10:10.823964 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:10:10.824533 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:10:10.836423 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:10:10.877259 augenrules[2478]: No rules Sep 12 17:10:10.880868 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:10:10.883626 sudo[2455]: pam_unix(sudo:session): session closed for user root Sep 12 17:10:10.908013 sshd[2451]: pam_unix(sshd:session): session closed for user core Sep 12 17:10:10.914616 systemd[1]: sshd@5-172.31.22.180:22-147.75.109.163:51462.service: Deactivated successfully. Sep 12 17:10:10.919599 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:10:10.921578 systemd-logind[2103]: Session 6 logged out. Waiting for processes to exit. 
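After the rule files under /etc/audit/rules.d are removed and audit-rules restarts, auditctl and augenrules both report "No rules" above. The same state can be inspected with `auditctl -l`, which prints that literal string when the kernel's rule list is empty; a sketch:

```python
# Inspect the audit rules currently loaded in the kernel, as audit-rules does.
import subprocess

def loaded_audit_rules() -> str:
    # `auditctl -l` lists loaded rules, or prints "No rules" when empty.
    out = subprocess.run(
        ["auditctl", "-l"], capture_output=True, text=True, check=True
    )
    return out.stdout.strip()

if __name__ == "__main__":
    print(loaded_audit_rules())
```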
Sep 12 17:10:10.923615 systemd-logind[2103]: Removed session 6. Sep 12 17:10:10.940186 systemd[1]: Started sshd@6-172.31.22.180:22-147.75.109.163:51476.service - OpenSSH per-connection server daemon (147.75.109.163:51476). Sep 12 17:10:11.105729 sshd[2487]: Accepted publickey for core from 147.75.109.163 port 51476 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:10:11.109062 sshd[2487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:10:11.117026 systemd-logind[2103]: New session 7 of user core. Sep 12 17:10:11.127264 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:10:11.231219 sudo[2491]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:10:11.232425 sudo[2491]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:10:11.899102 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:10:11.899540 (dockerd)[2507]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:10:12.398415 dockerd[2507]: time="2025-09-12T17:10:12.398317464Z" level=info msg="Starting up" Sep 12 17:10:12.799748 dockerd[2507]: time="2025-09-12T17:10:12.799569530Z" level=info msg="Loading containers: start." Sep 12 17:10:13.007976 kernel: Initializing XFRM netlink socket Sep 12 17:10:13.076808 (udev-worker)[2530]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:10:13.157411 systemd-networkd[1689]: docker0: Link UP Sep 12 17:10:13.180105 dockerd[2507]: time="2025-09-12T17:10:13.180034464Z" level=info msg="Loading containers: done." Sep 12 17:10:13.203869 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2809553471-merged.mount: Deactivated successfully. Sep 12 17:10:13.207760 dockerd[2507]: time="2025-09-12T17:10:13.207639636Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:10:13.207909 dockerd[2507]: time="2025-09-12T17:10:13.207880500Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:10:13.208195 dockerd[2507]: time="2025-09-12T17:10:13.208144560Z" level=info msg="Daemon has completed initialization" Sep 12 17:10:13.260567 dockerd[2507]: time="2025-09-12T17:10:13.260215465Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:10:13.261891 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:10:14.709790 containerd[2128]: time="2025-09-12T17:10:14.709247944Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:10:15.295022 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:10:15.303092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:15.352807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3159807607.mount: Deactivated successfully. Sep 12 17:10:15.748215 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
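dockerd above settles on the overlay2 storage driver and warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR. Once the daemon is up, the active driver can be confirmed through `docker info`; a sketch:

```python
# Query the storage driver the running Docker daemon selected at startup.
import subprocess

def docker_storage_driver() -> str:
    out = subprocess.run(
        ["docker", "info", "--format", "{{.Driver}}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    print(docker_storage_driver())  # expected here: overlay2
```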
Sep 12 17:10:15.766900 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:10:15.880873 kubelet[2670]: E0912 17:10:15.880805 2670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:10:15.891379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:10:15.891793 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:10:16.937932 containerd[2128]: time="2025-09-12T17:10:16.937841203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:16.940136 containerd[2128]: time="2025-09-12T17:10:16.940061395Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687325" Sep 12 17:10:16.942503 containerd[2128]: time="2025-09-12T17:10:16.942434203Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:16.948706 containerd[2128]: time="2025-09-12T17:10:16.948610807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:16.950983 containerd[2128]: time="2025-09-12T17:10:16.950935063Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 2.241626483s" Sep 12 17:10:16.951465 containerd[2128]: time="2025-09-12T17:10:16.951126175Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 12 17:10:16.953577 containerd[2128]: time="2025-09-12T17:10:16.953524615Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:10:18.372364 containerd[2128]: time="2025-09-12T17:10:18.370724550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:18.381811 containerd[2128]: time="2025-09-12T17:10:18.381748122Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459767" Sep 12 17:10:18.382141 containerd[2128]: time="2025-09-12T17:10:18.382104918Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:18.388055 containerd[2128]: time="2025-09-12T17:10:18.387982374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 
17:10:18.390672 containerd[2128]: time="2025-09-12T17:10:18.390602538Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.436847895s" Sep 12 17:10:18.390794 containerd[2128]: time="2025-09-12T17:10:18.390669186Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 12 17:10:18.391447 containerd[2128]: time="2025-09-12T17:10:18.391407150Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:10:19.584684 containerd[2128]: time="2025-09-12T17:10:19.584621180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:19.586754 containerd[2128]: time="2025-09-12T17:10:19.586681976Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127506" Sep 12 17:10:19.587218 containerd[2128]: time="2025-09-12T17:10:19.587150744Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:19.592989 containerd[2128]: time="2025-09-12T17:10:19.592908368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:19.595323 containerd[2128]: time="2025-09-12T17:10:19.595276688Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.20253461s" Sep 12 17:10:19.595579 containerd[2128]: time="2025-09-12T17:10:19.595434008Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 12 17:10:19.596551 containerd[2128]: time="2025-09-12T17:10:19.596143916Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:10:20.796866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3500205812.mount: Deactivated successfully. 
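The "bytes read" counts and pull durations containerd logs for these images allow a rough throughput estimate; the durations include layer unpacking, so raw network speed is somewhat higher. Using the figures straight from the log:

```python
# Back-of-envelope pull throughput from containerd's own numbers above.
pulls = {
    "kube-apiserver:v1.31.13":          (25687325, 2.241626483),
    "kube-controller-manager:v1.31.13": (22459767, 1.436847895),
    "kube-scheduler:v1.31.13":          (17127506, 1.20253461),
}

for image, (bytes_read, seconds) in pulls.items():
    mib_s = bytes_read / seconds / (1024 * 1024)
    print(f"{image}: {mib_s:.1f} MiB/s")
```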
Sep 12 17:10:21.329759 containerd[2128]: time="2025-09-12T17:10:21.329162553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:21.331401 containerd[2128]: time="2025-09-12T17:10:21.331094193Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954907" Sep 12 17:10:21.333754 containerd[2128]: time="2025-09-12T17:10:21.332643177Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:21.337354 containerd[2128]: time="2025-09-12T17:10:21.337271013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:21.338887 containerd[2128]: time="2025-09-12T17:10:21.338540061Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.742345433s" Sep 12 17:10:21.338887 containerd[2128]: time="2025-09-12T17:10:21.338596029Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 12 17:10:21.339928 containerd[2128]: time="2025-09-12T17:10:21.339626613Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:10:21.882308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2422663498.mount: Deactivated successfully. 
Sep 12 17:10:23.081014 containerd[2128]: time="2025-09-12T17:10:23.080957001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:23.083967 containerd[2128]: time="2025-09-12T17:10:23.083901033Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 12 17:10:23.086715 containerd[2128]: time="2025-09-12T17:10:23.084785745Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:23.092076 containerd[2128]: time="2025-09-12T17:10:23.092021493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:23.094433 containerd[2128]: time="2025-09-12T17:10:23.094375101Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.75469504s" Sep 12 17:10:23.094602 containerd[2128]: time="2025-09-12T17:10:23.094570821Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 17:10:23.096642 containerd[2128]: time="2025-09-12T17:10:23.096574881Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:10:23.544812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3483619803.mount: Deactivated successfully. 
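These pulls go through containerd's CRI plugin, which keeps its images in the k8s.io namespace; they can be reproduced by hand with containerd's ctr tool so that kubelet and crictl see the same image. A sketch:

```python
# Pull an image into the namespace the CRI plugin uses, via ctr.
import subprocess

def ctr_pull(ref: str) -> None:
    # -n k8s.io targets the CRI plugin's namespace; the ref must be fully
    # qualified, as in the log lines above.
    subprocess.run(["ctr", "-n", "k8s.io", "images", "pull", ref], check=True)

if __name__ == "__main__":
    ctr_pull("registry.k8s.io/pause:3.10")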
Sep 12 17:10:23.552610 containerd[2128]: time="2025-09-12T17:10:23.552548652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:23.554241 containerd[2128]: time="2025-09-12T17:10:23.554189628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 12 17:10:23.556736 containerd[2128]: time="2025-09-12T17:10:23.555788388Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:23.560872 containerd[2128]: time="2025-09-12T17:10:23.560806464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:23.562804 containerd[2128]: time="2025-09-12T17:10:23.562598412Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 465.941091ms" Sep 12 17:10:23.562804 containerd[2128]: time="2025-09-12T17:10:23.562648308Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 17:10:23.563755 containerd[2128]: time="2025-09-12T17:10:23.563718804Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 17:10:24.125555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount77335781.mount: Deactivated successfully. Sep 12 17:10:26.110319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:10:26.122588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:26.631907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:26.647456 (kubelet)[2861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:10:26.733344 kubelet[2861]: E0912 17:10:26.732661 2861 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:10:26.739866 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:10:26.740908 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
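Every kubelet start in this section dies the same way: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by kubeadm during init/join, which is why the unit keeps crash-looping until then. Purely for illustration, a minimal KubeletConfiguration like the following would satisfy the load; the field values are assumptions, except cgroupDriver, which matches the CgroupDriver in the container-manager dump further below:

```python
# Write a minimal (hypothetical) kubelet config to the path the errors name.
from pathlib import Path

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs        # matches CgroupDriver in the dump further below
staticPodPath: /etc/kubernetes/manifests
"""

def write_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(MINIMAL_KUBELET_CONFIG)

if __name__ == "__main__":
    write_config()
```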
Sep 12 17:10:27.387750 containerd[2128]: time="2025-09-12T17:10:27.387618351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:27.390166 containerd[2128]: time="2025-09-12T17:10:27.390091143Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537161" Sep 12 17:10:27.392490 containerd[2128]: time="2025-09-12T17:10:27.392417655Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:27.399721 containerd[2128]: time="2025-09-12T17:10:27.399060879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:10:27.402101 containerd[2128]: time="2025-09-12T17:10:27.401625171Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.837716995s" Sep 12 17:10:27.402101 containerd[2128]: time="2025-09-12T17:10:27.401679663Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 12 17:10:31.129988 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 12 17:10:36.860330 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 12 17:10:36.870552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:37.236193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:37.252321 (kubelet)[2907]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:10:37.332677 kubelet[2907]: E0912 17:10:37.332608 2907 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:10:37.341890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:10:37.342277 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:10:37.717638 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:37.732952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:37.807819 systemd[1]: Reloading requested from client PID 2923 ('systemctl') (unit session-7.scope)... Sep 12 17:10:37.807858 systemd[1]: Reloading... Sep 12 17:10:38.094768 zram_generator::config[2966]: No configuration found. Sep 12 17:10:38.369804 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:10:38.538937 systemd[1]: Reloading finished in 730 ms. 
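systemd's "restart counter is at 3" above reflects the Restart= handling on the kubelet unit; the live counter is exposed as the NRestarts service property. A sketch of querying it, assuming a systemd recent enough to provide NRestarts:

```python
# Read a service's restart counter, matching the "restart counter is at N" logs.
import subprocess

def restart_count(unit: str = "kubelet.service") -> int:
    out = subprocess.run(
        ["systemctl", "show", unit, "--property=NRestarts", "--value"],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

if __name__ == "__main__":
    print(restart_count())
```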
Sep 12 17:10:38.636563 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:10:38.636832 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:10:38.637430 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:38.644528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:38.992040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:39.015410 (kubelet)[3038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:10:39.087745 kubelet[3038]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:10:39.087745 kubelet[3038]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:10:39.087745 kubelet[3038]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:10:39.088346 kubelet[3038]: I0912 17:10:39.087847 3038 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:10:39.577418 kubelet[3038]: I0912 17:10:39.577373 3038 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:10:39.578739 kubelet[3038]: I0912 17:10:39.577569 3038 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:10:39.578739 kubelet[3038]: I0912 17:10:39.577988 3038 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:10:39.630090 kubelet[3038]: E0912 17:10:39.630039 3038 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.22.180:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.22.180:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:39.632787 kubelet[3038]: I0912 17:10:39.632733 3038 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:10:39.645829 kubelet[3038]: E0912 17:10:39.645772 3038 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:10:39.645829 kubelet[3038]: I0912 17:10:39.645826 3038 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:10:39.653737 kubelet[3038]: I0912 17:10:39.653168 3038 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:10:39.654249 kubelet[3038]: I0912 17:10:39.654204 3038 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:10:39.654512 kubelet[3038]: I0912 17:10:39.654460 3038 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:10:39.654828 kubelet[3038]: I0912 17:10:39.654515 3038 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-22-180","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 17:10:39.655004 kubelet[3038]: I0912 17:10:39.654971 3038 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:10:39.655004 kubelet[3038]: I0912 17:10:39.654992 3038 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:10:39.655338 kubelet[3038]: I0912 17:10:39.655294 3038 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:10:39.660261 kubelet[3038]: I0912 17:10:39.660197 3038 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:10:39.660261 kubelet[3038]: I0912 17:10:39.660255 3038 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:10:39.660425 kubelet[3038]: I0912 17:10:39.660293 3038 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:10:39.660482 kubelet[3038]: I0912 17:10:39.660446 3038 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:10:39.665362 kubelet[3038]: W0912 17:10:39.664650 3038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.180:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-180&limit=500&resourceVersion=0": dial tcp 172.31.22.180:6443: connect: connection refused Sep 12 17:10:39.665362 kubelet[3038]: E0912 17:10:39.664922 3038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.22.180:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-180&limit=500&resourceVersion=0\": dial tcp 172.31.22.180:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:39.668253 kubelet[3038]: W0912 17:10:39.668172 3038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.180:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.180:6443: connect: connection refused Sep 12 17:10:39.668573 kubelet[3038]: E0912 17:10:39.668417 3038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.180:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.180:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:39.670741 kubelet[3038]: I0912 17:10:39.669442 3038 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:10:39.670883 kubelet[3038]: I0912 17:10:39.670788 3038 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:10:39.671171 kubelet[3038]: W0912 17:10:39.671130 3038 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:10:39.675147 kubelet[3038]: I0912 17:10:39.674852 3038 server.go:1274] "Started kubelet" Sep 12 17:10:39.691108 kubelet[3038]: I0912 17:10:39.691070 3038 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:10:39.693448 kubelet[3038]: I0912 17:10:39.693177 3038 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:10:39.694922 kubelet[3038]: I0912 17:10:39.694849 3038 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:10:39.695764 kubelet[3038]: I0912 17:10:39.695681 3038 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:10:39.696735 kubelet[3038]: E0912 17:10:39.695899 3038 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-22-180\" not found" Sep 12 17:10:39.696735 kubelet[3038]: I0912 17:10:39.696527 3038 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:10:39.696735 kubelet[3038]: I0912 17:10:39.696633 3038 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:10:39.700301 kubelet[3038]: I0912 17:10:39.700127 3038 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:10:39.700550 kubelet[3038]: E0912 17:10:39.697134 3038 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.180:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.180:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-180.186498251892b680 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-180,UID:ip-172-31-22-180,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-180,},FirstTimestamp:2025-09-12 17:10:39.674816128 +0000 UTC m=+0.652960516,LastTimestamp:2025-09-12 17:10:39.674816128 +0000 UTC m=+0.652960516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-180,}" Sep 12 17:10:39.704335 kubelet[3038]: I0912 17:10:39.704205 3038 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:10:39.704810 kubelet[3038]: I0912 17:10:39.704767 3038 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:10:39.706210 kubelet[3038]: E0912 17:10:39.706070 3038 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.180:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-180?timeout=10s\": dial tcp 172.31.22.180:6443: connect: connection refused" interval="200ms" Sep 12 17:10:39.707024 kubelet[3038]: W0912 17:10:39.706916 3038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.180:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.180:6443: connect: connection refused Sep 12 17:10:39.707164 kubelet[3038]: E0912 17:10:39.707060 3038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.180:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.180:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:39.708330 kubelet[3038]: I0912 17:10:39.708263 3038 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:10:39.708593 kubelet[3038]: I0912 17:10:39.708477 3038 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:10:39.713737 kubelet[3038]: E0912 17:10:39.712374 3038 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:10:39.713737 kubelet[3038]: I0912 17:10:39.712545 3038 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:10:39.761949 kubelet[3038]: I0912 17:10:39.761771 3038 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:10:39.764249 kubelet[3038]: I0912 17:10:39.764180 3038 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:10:39.764249 kubelet[3038]: I0912 17:10:39.764249 3038 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:10:39.764454 kubelet[3038]: I0912 17:10:39.764290 3038 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:10:39.764454 kubelet[3038]: E0912 17:10:39.764361 3038 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:10:39.770811 kubelet[3038]: I0912 17:10:39.769875 3038 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:10:39.770811 kubelet[3038]: I0912 17:10:39.769931 3038 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:10:39.770811 kubelet[3038]: I0912 17:10:39.769963 3038 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:10:39.772749 kubelet[3038]: W0912 17:10:39.772506 3038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.180:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.180:6443: connect: connection refused Sep 12 17:10:39.773910 kubelet[3038]: E0912 17:10:39.772889 3038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.180:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.180:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:39.774574 kubelet[3038]: I0912 17:10:39.774490 3038 policy_none.go:49] "None policy: Start" Sep 12 17:10:39.775844 kubelet[3038]: I0912 17:10:39.775819 3038 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:10:39.776351 kubelet[3038]: I0912 17:10:39.775966 3038 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:10:39.786173 kubelet[3038]: I0912 17:10:39.786130 3038 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:10:39.786752 kubelet[3038]: I0912 17:10:39.786582 3038 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:10:39.786752 kubelet[3038]: I0912 17:10:39.786607 3038 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:10:39.790635 kubelet[3038]: I0912 17:10:39.790128 3038 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:10:39.792196 kubelet[3038]: E0912 17:10:39.792163 3038 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-180\" not found" Sep 12 17:10:39.898010 kubelet[3038]: I0912 17:10:39.897815 3038 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-180" Sep 12 17:10:39.899085 kubelet[3038]: E0912 17:10:39.899017 3038 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.180:6443/api/v1/nodes\": dial tcp 172.31.22.180:6443: connect: connection refused" node="ip-172-31-22-180" Sep 12 17:10:39.907474 kubelet[3038]: E0912 17:10:39.907416 3038 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.180:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-180?timeout=10s\": dial tcp 172.31.22.180:6443: connect: connection refused" interval="400ms" Sep 12 17:10:39.997766 kubelet[3038]: I0912 
17:10:39.997684 3038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/620d7b60501d9901ed61fa246f5e8a02-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-180\" (UID: \"620d7b60501d9901ed61fa246f5e8a02\") " pod="kube-system/kube-controller-manager-ip-172-31-22-180" Sep 12 17:10:39.997859 kubelet[3038]: I0912 17:10:39.997767 3038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/620d7b60501d9901ed61fa246f5e8a02-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-180\" (UID: \"620d7b60501d9901ed61fa246f5e8a02\") " pod="kube-system/kube-controller-manager-ip-172-31-22-180" Sep 12 17:10:39.997859 kubelet[3038]: I0912 17:10:39.997812 3038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/620d7b60501d9901ed61fa246f5e8a02-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-180\" (UID: \"620d7b60501d9901ed61fa246f5e8a02\") " pod="kube-system/kube-controller-manager-ip-172-31-22-180" Sep 12 17:10:39.997859 kubelet[3038]: I0912 17:10:39.997850 3038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/972bb88dab3d636e44c2cf1f3551b1af-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-180\" (UID: \"972bb88dab3d636e44c2cf1f3551b1af\") " pod="kube-system/kube-scheduler-ip-172-31-22-180" Sep 12 17:10:39.998040 kubelet[3038]: I0912 17:10:39.997884 3038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b3aa8d5efbb21d8f7af8dafc10045c8-ca-certs\") pod \"kube-apiserver-ip-172-31-22-180\" (UID: \"4b3aa8d5efbb21d8f7af8dafc10045c8\") " pod="kube-system/kube-apiserver-ip-172-31-22-180" Sep 12 17:10:39.998040 kubelet[3038]: I0912 17:10:39.997920 3038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b3aa8d5efbb21d8f7af8dafc10045c8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-180\" (UID: \"4b3aa8d5efbb21d8f7af8dafc10045c8\") " pod="kube-system/kube-apiserver-ip-172-31-22-180" Sep 12 17:10:39.998040 kubelet[3038]: I0912 17:10:39.997956 3038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/620d7b60501d9901ed61fa246f5e8a02-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-180\" (UID: \"620d7b60501d9901ed61fa246f5e8a02\") " pod="kube-system/kube-controller-manager-ip-172-31-22-180" Sep 12 17:10:39.998040 kubelet[3038]: I0912 17:10:39.997991 3038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b3aa8d5efbb21d8f7af8dafc10045c8-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-180\" (UID: \"4b3aa8d5efbb21d8f7af8dafc10045c8\") " pod="kube-system/kube-apiserver-ip-172-31-22-180" Sep 12 17:10:39.998040 kubelet[3038]: I0912 17:10:39.998025 3038 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/620d7b60501d9901ed61fa246f5e8a02-flexvolume-dir\") pod 
\"kube-controller-manager-ip-172-31-22-180\" (UID: \"620d7b60501d9901ed61fa246f5e8a02\") " pod="kube-system/kube-controller-manager-ip-172-31-22-180" Sep 12 17:10:40.101820 kubelet[3038]: I0912 17:10:40.101770 3038 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-180" Sep 12 17:10:40.102634 kubelet[3038]: E0912 17:10:40.102226 3038 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.180:6443/api/v1/nodes\": dial tcp 172.31.22.180:6443: connect: connection refused" node="ip-172-31-22-180" Sep 12 17:10:40.184583 containerd[2128]: time="2025-09-12T17:10:40.184066022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-180,Uid:4b3aa8d5efbb21d8f7af8dafc10045c8,Namespace:kube-system,Attempt:0,}" Sep 12 17:10:40.186502 containerd[2128]: time="2025-09-12T17:10:40.186253310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-180,Uid:620d7b60501d9901ed61fa246f5e8a02,Namespace:kube-system,Attempt:0,}" Sep 12 17:10:40.198413 containerd[2128]: time="2025-09-12T17:10:40.198048326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-180,Uid:972bb88dab3d636e44c2cf1f3551b1af,Namespace:kube-system,Attempt:0,}" Sep 12 17:10:40.310733 kubelet[3038]: E0912 17:10:40.309015 3038 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.180:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-180?timeout=10s\": dial tcp 172.31.22.180:6443: connect: connection refused" interval="800ms" Sep 12 17:10:40.504914 kubelet[3038]: I0912 17:10:40.504767 3038 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-180" Sep 12 17:10:40.505399 kubelet[3038]: E0912 17:10:40.505354 3038 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.180:6443/api/v1/nodes\": dial tcp 172.31.22.180:6443: connect: connection refused" node="ip-172-31-22-180" Sep 12 17:10:40.713527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2115495518.mount: Deactivated successfully. 
Sep 12 17:10:40.723646 containerd[2128]: time="2025-09-12T17:10:40.722682209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:10:40.725055 containerd[2128]: time="2025-09-12T17:10:40.724966457Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:10:40.726344 containerd[2128]: time="2025-09-12T17:10:40.726283601Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 12 17:10:40.728379 containerd[2128]: time="2025-09-12T17:10:40.728306177Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:10:40.735730 containerd[2128]: time="2025-09-12T17:10:40.733879289Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:10:40.737098 containerd[2128]: time="2025-09-12T17:10:40.737031077Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:10:40.738469 containerd[2128]: time="2025-09-12T17:10:40.738312377Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:10:40.747732 containerd[2128]: time="2025-09-12T17:10:40.746545433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:10:40.750468 containerd[2128]: time="2025-09-12T17:10:40.750107381Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.947635ms" Sep 12 17:10:40.755390 containerd[2128]: time="2025-09-12T17:10:40.755226077Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 568.857315ms" Sep 12 17:10:40.756730 containerd[2128]: time="2025-09-12T17:10:40.756593093Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.422479ms" Sep 12 17:10:40.872463 kubelet[3038]: W0912 17:10:40.872232 3038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.22.180:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-180&limit=500&resourceVersion=0": dial tcp 172.31.22.180:6443: connect: connection refused Sep 12 
17:10:40.872463 kubelet[3038]: E0912 17:10:40.872392 3038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.22.180:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-180&limit=500&resourceVersion=0\": dial tcp 172.31.22.180:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:40.943748 containerd[2128]: time="2025-09-12T17:10:40.943581174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:10:40.944155 containerd[2128]: time="2025-09-12T17:10:40.943784454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:10:40.944155 containerd[2128]: time="2025-09-12T17:10:40.943816614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:40.944155 containerd[2128]: time="2025-09-12T17:10:40.943967694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:40.946971 containerd[2128]: time="2025-09-12T17:10:40.946652730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:10:40.947202 containerd[2128]: time="2025-09-12T17:10:40.947111610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:10:40.949323 containerd[2128]: time="2025-09-12T17:10:40.949123422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:40.950529 containerd[2128]: time="2025-09-12T17:10:40.949869906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:40.952484 containerd[2128]: time="2025-09-12T17:10:40.952133934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:10:40.952484 containerd[2128]: time="2025-09-12T17:10:40.952246722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:10:40.952484 containerd[2128]: time="2025-09-12T17:10:40.952274646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:40.953821 containerd[2128]: time="2025-09-12T17:10:40.952916286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:41.001123 kubelet[3038]: W0912 17:10:41.000848 3038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.22.180:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.180:6443: connect: connection refused Sep 12 17:10:41.001123 kubelet[3038]: E0912 17:10:41.000953 3038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.22.180:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.22.180:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:41.104670 containerd[2128]: time="2025-09-12T17:10:41.103022883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-180,Uid:620d7b60501d9901ed61fa246f5e8a02,Namespace:kube-system,Attempt:0,} returns sandbox id \"d81358e1c54b74669165163db342c49a197c944e056bf6605a844b9a3ebd085a\"" Sep 12 17:10:41.110334 containerd[2128]: time="2025-09-12T17:10:41.110165079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-180,Uid:4b3aa8d5efbb21d8f7af8dafc10045c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"97e194c51ddf62e74db23aeea83d4ebe5db3ddc2ac3b4d05167cb3cb10088394\"" Sep 12 17:10:41.110612 kubelet[3038]: E0912 17:10:41.110471 3038 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.180:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-180?timeout=10s\": dial tcp 172.31.22.180:6443: connect: connection refused" interval="1.6s" Sep 12 17:10:41.119167 containerd[2128]: time="2025-09-12T17:10:41.118720011Z" level=info msg="CreateContainer within sandbox \"d81358e1c54b74669165163db342c49a197c944e056bf6605a844b9a3ebd085a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:10:41.120468 containerd[2128]: time="2025-09-12T17:10:41.120189027Z" level=info msg="CreateContainer within sandbox \"97e194c51ddf62e74db23aeea83d4ebe5db3ddc2ac3b4d05167cb3cb10088394\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:10:41.135543 containerd[2128]: time="2025-09-12T17:10:41.135359247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-180,Uid:972bb88dab3d636e44c2cf1f3551b1af,Namespace:kube-system,Attempt:0,} returns sandbox id \"27b64a5f64d05130aaf382a23536ff93e1b2a3fe929a94ae68f1af9b17cedf2b\"" Sep 12 17:10:41.143367 containerd[2128]: time="2025-09-12T17:10:41.143166315Z" level=info msg="CreateContainer within sandbox \"27b64a5f64d05130aaf382a23536ff93e1b2a3fe929a94ae68f1af9b17cedf2b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:10:41.145748 containerd[2128]: time="2025-09-12T17:10:41.145661187Z" level=info msg="CreateContainer within sandbox \"d81358e1c54b74669165163db342c49a197c944e056bf6605a844b9a3ebd085a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9f010a57e0700a2b1f4621abd6e107e7ae01bf58ad3e0e519d4dbb70dfa45306\"" Sep 12 17:10:41.147161 containerd[2128]: time="2025-09-12T17:10:41.147029559Z" level=info msg="StartContainer for \"9f010a57e0700a2b1f4621abd6e107e7ae01bf58ad3e0e519d4dbb70dfa45306\"" Sep 12 17:10:41.161580 containerd[2128]: time="2025-09-12T17:10:41.161522247Z" level=info msg="CreateContainer within sandbox 
\"97e194c51ddf62e74db23aeea83d4ebe5db3ddc2ac3b4d05167cb3cb10088394\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bd6e7fa078ae5a0d339aba5dc7a43c8fcff18fd75170e2e2256ce730df4264fd\"" Sep 12 17:10:41.164467 containerd[2128]: time="2025-09-12T17:10:41.164286195Z" level=info msg="StartContainer for \"bd6e7fa078ae5a0d339aba5dc7a43c8fcff18fd75170e2e2256ce730df4264fd\"" Sep 12 17:10:41.178550 containerd[2128]: time="2025-09-12T17:10:41.178479651Z" level=info msg="CreateContainer within sandbox \"27b64a5f64d05130aaf382a23536ff93e1b2a3fe929a94ae68f1af9b17cedf2b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c3fbbebe8c118752cdabffcc9d8776fc624a278b82b5311bf5ca0623c956a0a8\"" Sep 12 17:10:41.180058 containerd[2128]: time="2025-09-12T17:10:41.179554023Z" level=info msg="StartContainer for \"c3fbbebe8c118752cdabffcc9d8776fc624a278b82b5311bf5ca0623c956a0a8\"" Sep 12 17:10:41.261719 kubelet[3038]: W0912 17:10:41.260166 3038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.22.180:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.180:6443: connect: connection refused Sep 12 17:10:41.262931 kubelet[3038]: E0912 17:10:41.262836 3038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.22.180:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.22.180:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:41.271808 kubelet[3038]: W0912 17:10:41.270377 3038 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.22.180:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.22.180:6443: connect: connection refused Sep 12 17:10:41.271808 kubelet[3038]: E0912 17:10:41.271751 3038 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.22.180:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.22.180:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:10:41.311348 kubelet[3038]: I0912 17:10:41.311298 3038 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-180" Sep 12 17:10:41.312143 kubelet[3038]: E0912 17:10:41.312068 3038 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.22.180:6443/api/v1/nodes\": dial tcp 172.31.22.180:6443: connect: connection refused" node="ip-172-31-22-180" Sep 12 17:10:41.347499 containerd[2128]: time="2025-09-12T17:10:41.346957984Z" level=info msg="StartContainer for \"bd6e7fa078ae5a0d339aba5dc7a43c8fcff18fd75170e2e2256ce730df4264fd\" returns successfully" Sep 12 17:10:41.371255 containerd[2128]: time="2025-09-12T17:10:41.371178208Z" level=info msg="StartContainer for \"9f010a57e0700a2b1f4621abd6e107e7ae01bf58ad3e0e519d4dbb70dfa45306\" returns successfully" Sep 12 17:10:41.457379 containerd[2128]: time="2025-09-12T17:10:41.457270145Z" level=info msg="StartContainer for \"c3fbbebe8c118752cdabffcc9d8776fc624a278b82b5311bf5ca0623c956a0a8\" returns successfully" Sep 12 17:10:42.919800 kubelet[3038]: I0912 17:10:42.916379 3038 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-180" Sep 12 
17:10:44.988730 update_engine[2107]: I20250912 17:10:44.986732 2107 update_attempter.cc:509] Updating boot flags... Sep 12 17:10:45.000291 kubelet[3038]: E0912 17:10:45.000163 3038 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-180\" not found" node="ip-172-31-22-180" Sep 12 17:10:45.052987 kubelet[3038]: I0912 17:10:45.050514 3038 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-22-180" Sep 12 17:10:45.263717 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3322) Sep 12 17:10:45.673502 kubelet[3038]: I0912 17:10:45.672774 3038 apiserver.go:52] "Watching apiserver" Sep 12 17:10:45.697927 kubelet[3038]: I0912 17:10:45.697841 3038 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:10:46.021187 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3322) Sep 12 17:10:47.493490 systemd[1]: Reloading requested from client PID 3492 ('systemctl') (unit session-7.scope)... Sep 12 17:10:47.493524 systemd[1]: Reloading... Sep 12 17:10:47.691745 zram_generator::config[3535]: No configuration found. Sep 12 17:10:48.019137 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:10:48.224354 systemd[1]: Reloading finished in 730 ms. Sep 12 17:10:48.296246 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:48.312119 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:10:48.312770 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:48.324717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:10:48.666024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:10:48.679657 (kubelet)[3602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:10:48.784859 kubelet[3602]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:10:48.784859 kubelet[3602]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:10:48.784859 kubelet[3602]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
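The kubelet warnings above flag `--container-runtime-endpoint`, `--pod-infra-container-image`, and `--volume-plugin-dir` as deprecated in favour of the file passed via `--config`. A small sketch of that migration pattern, accepting a legacy flag, warning only when it is actually set, and preferring the file-based value; the struct field and messages are illustrative, not kubelet's real configuration schema:

```go
// Sketch: flag-to-config-file deprecation. A legacy flag still works but
// emits a warning; the config file, when given, is the preferred source.
package main

import (
	"flag"
	"fmt"
	"os"
)

type config struct {
	ContainerRuntimeEndpoint string
}

func main() {
	legacyEndpoint := flag.String("container-runtime-endpoint", "",
		"DEPRECATED: set this via the config file specified by --config")
	configPath := flag.String("config", "", "path to the config file")
	flag.Parse()

	cfg := config{}
	if *configPath != "" {
		// The real kubelet parses a KubeletConfiguration YAML here;
		// parsing is elided in this sketch.
		fmt.Fprintf(os.Stderr, "loading config from %s\n", *configPath)
	}
	if *legacyEndpoint != "" {
		fmt.Fprintln(os.Stderr, "Flag --container-runtime-endpoint has been"+
			" deprecated, set it via the config file instead")
		if cfg.ContainerRuntimeEndpoint == "" {
			cfg.ContainerRuntimeEndpoint = *legacyEndpoint
		}
	}
	fmt.Printf("effective endpoint: %q\n", cfg.ContainerRuntimeEndpoint)
}
```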
Sep 12 17:10:48.786260 kubelet[3602]: I0912 17:10:48.784964 3602 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:10:48.821734 kubelet[3602]: I0912 17:10:48.820748 3602 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:10:48.821734 kubelet[3602]: I0912 17:10:48.820798 3602 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:10:48.821734 kubelet[3602]: I0912 17:10:48.821269 3602 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:10:48.825371 kubelet[3602]: I0912 17:10:48.825320 3602 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:10:48.832040 kubelet[3602]: I0912 17:10:48.831663 3602 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:10:48.838829 kubelet[3602]: E0912 17:10:48.838731 3602 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:10:48.839724 kubelet[3602]: I0912 17:10:48.839070 3602 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:10:48.851712 kubelet[3602]: I0912 17:10:48.843765 3602 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:10:48.851712 kubelet[3602]: I0912 17:10:48.844454 3602 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:10:48.851712 kubelet[3602]: I0912 17:10:48.844680 3602 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:10:48.851712 kubelet[3602]: I0912 17:10:48.844774 3602 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-22-180","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 17:10:48.852094 kubelet[3602]: I0912 17:10:48.845043 3602 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:10:48.852094 kubelet[3602]: I0912 17:10:48.845062 3602 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:10:48.852094 kubelet[3602]: I0912 17:10:48.845119 3602 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:10:48.852094 kubelet[3602]: I0912 17:10:48.845279 3602 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:10:48.852094 kubelet[3602]: I0912 17:10:48.845302 3602 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:10:48.852094 kubelet[3602]: I0912 17:10:48.845332 3602 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:10:48.852094 kubelet[3602]: I0912 17:10:48.845359 3602 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:10:48.860294 kubelet[3602]: I0912 17:10:48.860241 3602 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:10:48.863811 kubelet[3602]: I0912 17:10:48.862121 3602 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:10:48.865569 kubelet[3602]: I0912 17:10:48.865511 3602 server.go:1274] "Started kubelet" Sep 12 17:10:48.880422 kubelet[3602]: I0912 17:10:48.879320 3602 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:10:48.881419 kubelet[3602]: I0912 17:10:48.881302 3602 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:10:48.883020 kubelet[3602]: I0912 17:10:48.882973 3602 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:10:48.885182 kubelet[3602]: I0912 17:10:48.885094 3602 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:10:48.890714 kubelet[3602]: I0912 17:10:48.885454 3602 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:10:48.890714 kubelet[3602]: I0912 17:10:48.888231 3602 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:10:48.892193 kubelet[3602]: I0912 17:10:48.892136 3602 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:10:48.892604 kubelet[3602]: E0912 17:10:48.892521 3602 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-22-180\" not found" Sep 12 17:10:48.903535 kubelet[3602]: I0912 17:10:48.903476 3602 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:10:48.904197 kubelet[3602]: I0912 17:10:48.903775 3602 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:10:48.909985 kubelet[3602]: I0912 17:10:48.909905 3602 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:10:48.914710 kubelet[3602]: I0912 17:10:48.910952 3602 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:10:48.915130 kubelet[3602]: I0912 17:10:48.915086 3602 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:10:48.917826 kubelet[3602]: I0912 17:10:48.912170 3602 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:10:48.921749 kubelet[3602]: I0912 17:10:48.919992 3602 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:10:48.921749 kubelet[3602]: I0912 17:10:48.920052 3602 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:10:48.921749 kubelet[3602]: E0912 17:10:48.920139 3602 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:10:48.957534 kubelet[3602]: I0912 17:10:48.957477 3602 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:10:48.981422 kubelet[3602]: E0912 17:10:48.981367 3602 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:10:49.023831 kubelet[3602]: E0912 17:10:49.023756 3602 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:10:49.102524 kubelet[3602]: I0912 17:10:49.102482 3602 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:10:49.103708 kubelet[3602]: I0912 17:10:49.103649 3602 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:10:49.103911 kubelet[3602]: I0912 17:10:49.103835 3602 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:10:49.104676 kubelet[3602]: I0912 17:10:49.104315 3602 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:10:49.104676 kubelet[3602]: I0912 17:10:49.104363 3602 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:10:49.104676 kubelet[3602]: I0912 17:10:49.104401 3602 policy_none.go:49] "None policy: Start" Sep 12 17:10:49.108857 kubelet[3602]: I0912 17:10:49.108725 3602 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:10:49.108857 kubelet[3602]: I0912 17:10:49.108794 3602 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:10:49.109538 kubelet[3602]: I0912 17:10:49.109330 3602 state_mem.go:75] "Updated machine memory state" Sep 12 17:10:49.118782 kubelet[3602]: I0912 17:10:49.118638 3602 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:10:49.120290 kubelet[3602]: I0912 17:10:49.119737 3602 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:10:49.120290 kubelet[3602]: I0912 17:10:49.119769 3602 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:10:49.122731 kubelet[3602]: I0912 17:10:49.120419 3602 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:10:49.244234 kubelet[3602]: E0912 17:10:49.243847 3602 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-22-180\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-22-180" Sep 12 17:10:49.244234 kubelet[3602]: E0912 17:10:49.244057 3602 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-22-180\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-180" Sep 12 17:10:49.245561 kubelet[3602]: I0912 17:10:49.245520 3602 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-22-180" Sep 12 17:10:49.262512 kubelet[3602]: I0912 17:10:49.262377 3602 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-22-180" Sep 12 17:10:49.263095 kubelet[3602]: I0912 17:10:49.262768 3602 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-22-180" Sep 12 17:10:49.305616 kubelet[3602]: I0912 17:10:49.305552 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b3aa8d5efbb21d8f7af8dafc10045c8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-180\" (UID: \"4b3aa8d5efbb21d8f7af8dafc10045c8\") " pod="kube-system/kube-apiserver-ip-172-31-22-180" Sep 12 17:10:49.305799 kubelet[3602]: I0912 17:10:49.305624 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/620d7b60501d9901ed61fa246f5e8a02-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-180\" (UID: \"620d7b60501d9901ed61fa246f5e8a02\") " pod="kube-system/kube-controller-manager-ip-172-31-22-180" Sep 12 17:10:49.305799 kubelet[3602]: I0912 17:10:49.305669 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/620d7b60501d9901ed61fa246f5e8a02-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-180\" (UID: \"620d7b60501d9901ed61fa246f5e8a02\") " pod="kube-system/kube-controller-manager-ip-172-31-22-180" Sep 12 17:10:49.305799 kubelet[3602]: I0912 17:10:49.305737 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b3aa8d5efbb21d8f7af8dafc10045c8-ca-certs\") pod \"kube-apiserver-ip-172-31-22-180\" (UID: \"4b3aa8d5efbb21d8f7af8dafc10045c8\") " pod="kube-system/kube-apiserver-ip-172-31-22-180" Sep 12 17:10:49.305959 kubelet[3602]: I0912 17:10:49.305854 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b3aa8d5efbb21d8f7af8dafc10045c8-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-180\" (UID: \"4b3aa8d5efbb21d8f7af8dafc10045c8\") " pod="kube-system/kube-apiserver-ip-172-31-22-180" Sep 12 17:10:49.305959 kubelet[3602]: I0912 17:10:49.305894 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/620d7b60501d9901ed61fa246f5e8a02-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-180\" (UID: \"620d7b60501d9901ed61fa246f5e8a02\") " pod="kube-system/kube-controller-manager-ip-172-31-22-180" Sep 12 17:10:49.305959 kubelet[3602]: I0912 17:10:49.305929 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/620d7b60501d9901ed61fa246f5e8a02-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-180\" (UID: \"620d7b60501d9901ed61fa246f5e8a02\") " pod="kube-system/kube-controller-manager-ip-172-31-22-180" Sep 12 17:10:49.306107 kubelet[3602]: I0912 17:10:49.305965 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/620d7b60501d9901ed61fa246f5e8a02-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-180\" (UID: \"620d7b60501d9901ed61fa246f5e8a02\") " pod="kube-system/kube-controller-manager-ip-172-31-22-180" Sep 12 17:10:49.306107 kubelet[3602]: I0912 17:10:49.306003 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/972bb88dab3d636e44c2cf1f3551b1af-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-180\" (UID: \"972bb88dab3d636e44c2cf1f3551b1af\") " pod="kube-system/kube-scheduler-ip-172-31-22-180" Sep 12 17:10:49.859968 kubelet[3602]: I0912 17:10:49.859900 3602 apiserver.go:52] "Watching apiserver" Sep 12 17:10:49.903915 kubelet[3602]: I0912 17:10:49.903831 3602 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:10:50.012871 kubelet[3602]: E0912 17:10:50.012028 3602 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-22-180\" already exists" 
pod="kube-system/kube-apiserver-ip-172-31-22-180" Sep 12 17:10:50.039759 kubelet[3602]: I0912 17:10:50.038993 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-180" podStartSLOduration=1.038950235 podStartE2EDuration="1.038950235s" podCreationTimestamp="2025-09-12 17:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:10:50.038930207 +0000 UTC m=+1.347668767" watchObservedRunningTime="2025-09-12 17:10:50.038950235 +0000 UTC m=+1.347688783" Sep 12 17:10:50.084524 kubelet[3602]: I0912 17:10:50.084116 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-180" podStartSLOduration=3.084090143 podStartE2EDuration="3.084090143s" podCreationTimestamp="2025-09-12 17:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:10:50.062041331 +0000 UTC m=+1.370779879" watchObservedRunningTime="2025-09-12 17:10:50.084090143 +0000 UTC m=+1.392828751" Sep 12 17:10:50.085299 kubelet[3602]: I0912 17:10:50.085015 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-180" podStartSLOduration=3.084995807 podStartE2EDuration="3.084995807s" podCreationTimestamp="2025-09-12 17:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:10:50.081068519 +0000 UTC m=+1.389807091" watchObservedRunningTime="2025-09-12 17:10:50.084995807 +0000 UTC m=+1.393734367" Sep 12 17:10:52.543897 kubelet[3602]: I0912 17:10:52.543853 3602 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:10:52.544560 containerd[2128]: time="2025-09-12T17:10:52.544457668Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 12 17:10:52.545055 kubelet[3602]: I0912 17:10:52.544757 3602 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:10:53.236719 kubelet[3602]: I0912 17:10:53.234553 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9723618d-e5e7-4687-a7ac-df5f46946214-kube-proxy\") pod \"kube-proxy-9jqfh\" (UID: \"9723618d-e5e7-4687-a7ac-df5f46946214\") " pod="kube-system/kube-proxy-9jqfh" Sep 12 17:10:53.236719 kubelet[3602]: I0912 17:10:53.234621 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9723618d-e5e7-4687-a7ac-df5f46946214-xtables-lock\") pod \"kube-proxy-9jqfh\" (UID: \"9723618d-e5e7-4687-a7ac-df5f46946214\") " pod="kube-system/kube-proxy-9jqfh" Sep 12 17:10:53.236719 kubelet[3602]: I0912 17:10:53.234657 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9723618d-e5e7-4687-a7ac-df5f46946214-lib-modules\") pod \"kube-proxy-9jqfh\" (UID: \"9723618d-e5e7-4687-a7ac-df5f46946214\") " pod="kube-system/kube-proxy-9jqfh" Sep 12 17:10:53.236719 kubelet[3602]: I0912 17:10:53.234713 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnsz4\" (UniqueName: \"kubernetes.io/projected/9723618d-e5e7-4687-a7ac-df5f46946214-kube-api-access-gnsz4\") pod \"kube-proxy-9jqfh\" (UID: \"9723618d-e5e7-4687-a7ac-df5f46946214\") " pod="kube-system/kube-proxy-9jqfh" Sep 12 17:10:53.509312 containerd[2128]: time="2025-09-12T17:10:53.509113456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9jqfh,Uid:9723618d-e5e7-4687-a7ac-df5f46946214,Namespace:kube-system,Attempt:0,}" Sep 12 17:10:53.587905 containerd[2128]: time="2025-09-12T17:10:53.587141849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:10:53.587905 containerd[2128]: time="2025-09-12T17:10:53.587500253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:10:53.587905 containerd[2128]: time="2025-09-12T17:10:53.587660837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:10:53.591329 containerd[2128]: time="2025-09-12T17:10:53.588305597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
Sep 12 17:10:53.643728 kubelet[3602]: I0912 17:10:53.637111 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4b05dfc3-8f57-478c-a8fa-ff490d264ca9-var-lib-calico\") pod \"tigera-operator-58fc44c59b-shtzw\" (UID: \"4b05dfc3-8f57-478c-a8fa-ff490d264ca9\") " pod="tigera-operator/tigera-operator-58fc44c59b-shtzw"
Sep 12 17:10:53.643728 kubelet[3602]: I0912 17:10:53.637200 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnnx6\" (UniqueName: \"kubernetes.io/projected/4b05dfc3-8f57-478c-a8fa-ff490d264ca9-kube-api-access-tnnx6\") pod \"tigera-operator-58fc44c59b-shtzw\" (UID: \"4b05dfc3-8f57-478c-a8fa-ff490d264ca9\") " pod="tigera-operator/tigera-operator-58fc44c59b-shtzw"
Sep 12 17:10:53.720572 containerd[2128]: time="2025-09-12T17:10:53.720402846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9jqfh,Uid:9723618d-e5e7-4687-a7ac-df5f46946214,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f8b2c53265f552c801e4c01453041cd3b9cb84f6c4f6b6cd98a18d14827d7e9\""
Sep 12 17:10:53.729799 containerd[2128]: time="2025-09-12T17:10:53.729609342Z" level=info msg="CreateContainer within sandbox \"5f8b2c53265f552c801e4c01453041cd3b9cb84f6c4f6b6cd98a18d14827d7e9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 17:10:53.759885 containerd[2128]: time="2025-09-12T17:10:53.759580122Z" level=info msg="CreateContainer within sandbox \"5f8b2c53265f552c801e4c01453041cd3b9cb84f6c4f6b6cd98a18d14827d7e9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ebb772b604f9067264f87125ae7dd354ef48202bb343e52ae2906f8b4f718b10\""
Sep 12 17:10:53.763760 containerd[2128]: time="2025-09-12T17:10:53.762238602Z" level=info msg="StartContainer for \"ebb772b604f9067264f87125ae7dd354ef48202bb343e52ae2906f8b4f718b10\""
Sep 12 17:10:53.863769 containerd[2128]: time="2025-09-12T17:10:53.863532186Z" level=info msg="StartContainer for \"ebb772b604f9067264f87125ae7dd354ef48202bb343e52ae2906f8b4f718b10\" returns successfully"
Sep 12 17:10:53.882269 containerd[2128]: time="2025-09-12T17:10:53.882217866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-shtzw,Uid:4b05dfc3-8f57-478c-a8fa-ff490d264ca9,Namespace:tigera-operator,Attempt:0,}"
Sep 12 17:10:53.924324 containerd[2128]: time="2025-09-12T17:10:53.924066643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 17:10:53.924938 containerd[2128]: time="2025-09-12T17:10:53.924670723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:10:53.924938 containerd[2128]: time="2025-09-12T17:10:53.924741427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:10:53.925288 containerd[2128]: time="2025-09-12T17:10:53.925091947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:10:54.101378 containerd[2128]: time="2025-09-12T17:10:54.100755255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-shtzw,Uid:4b05dfc3-8f57-478c-a8fa-ff490d264ca9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"618e64d7eb289dea1590b81cd2428bc60157e6150e73cf2c42ec44afda74fcbb\""
Sep 12 17:10:54.109040 containerd[2128]: time="2025-09-12T17:10:54.108976407Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 17:10:55.336800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3229411273.mount: Deactivated successfully.
Sep 12 17:10:56.221757 containerd[2128]: time="2025-09-12T17:10:56.221676570Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:56.223969 containerd[2128]: time="2025-09-12T17:10:56.223594494Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 12 17:10:56.223969 containerd[2128]: time="2025-09-12T17:10:56.223908966Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:56.228726 containerd[2128]: time="2025-09-12T17:10:56.228174618Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:10:56.230384 containerd[2128]: time="2025-09-12T17:10:56.229853670Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.120810543s"
Sep 12 17:10:56.230384 containerd[2128]: time="2025-09-12T17:10:56.229912206Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 12 17:10:56.237134 containerd[2128]: time="2025-09-12T17:10:56.237085722Z" level=info msg="CreateContainer within sandbox \"618e64d7eb289dea1590b81cd2428bc60157e6150e73cf2c42ec44afda74fcbb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 17:10:56.260470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1986073200.mount: Deactivated successfully.
Sep 12 17:10:56.262863 containerd[2128]: time="2025-09-12T17:10:56.262779846Z" level=info msg="CreateContainer within sandbox \"618e64d7eb289dea1590b81cd2428bc60157e6150e73cf2c42ec44afda74fcbb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b42318ac9394e96fb24c28fa41084c68fc2649d09fe8caaf53106adf2d250917\""
Sep 12 17:10:56.265138 containerd[2128]: time="2025-09-12T17:10:56.265021290Z" level=info msg="StartContainer for \"b42318ac9394e96fb24c28fa41084c68fc2649d09fe8caaf53106adf2d250917\""
Sep 12 17:10:56.322121 systemd[1]: run-containerd-runc-k8s.io-b42318ac9394e96fb24c28fa41084c68fc2649d09fe8caaf53106adf2d250917-runc.tEEbda.mount: Deactivated successfully.
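
The PullImage/Pulled pair above is the kubelet driving containerd's CRI image service; the "in 2.120810543s" figure is containerd's own pull timing. The same RPC can be issued directly over the CRI socket. A sketch, assuming a containerd host with the default socket path and minimal error handling:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed containerd CRI socket path; adjust for the host in question.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        img := runtimeapi.NewImageServiceClient(conn)
        start := time.Now()
        resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
            Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.6"},
        })
        if err != nil {
            log.Fatal(err)
        }
        // Mirrors the log's "Pulled image ... in 2.120810543s" accounting.
        fmt.Printf("pulled %s in %s\n", resp.ImageRef, time.Since(start))
    }
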
Sep 12 17:10:56.382058 containerd[2128]: time="2025-09-12T17:10:56.379436671Z" level=info msg="StartContainer for \"b42318ac9394e96fb24c28fa41084c68fc2649d09fe8caaf53106adf2d250917\" returns successfully"
Sep 12 17:10:57.047092 kubelet[3602]: I0912 17:10:57.046849 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9jqfh" podStartSLOduration=4.046827066 podStartE2EDuration="4.046827066s" podCreationTimestamp="2025-09-12 17:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:10:54.081612291 +0000 UTC m=+5.390350839" watchObservedRunningTime="2025-09-12 17:10:57.046827066 +0000 UTC m=+8.355565602"
Sep 12 17:10:59.070259 kubelet[3602]: I0912 17:10:59.070175 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-shtzw" podStartSLOduration=3.945545313 podStartE2EDuration="6.070152728s" podCreationTimestamp="2025-09-12 17:10:53 +0000 UTC" firstStartedPulling="2025-09-12 17:10:54.106923927 +0000 UTC m=+5.415662463" lastFinishedPulling="2025-09-12 17:10:56.231531354 +0000 UTC m=+7.540269878" observedRunningTime="2025-09-12 17:10:57.047621658 +0000 UTC m=+8.356360230" watchObservedRunningTime="2025-09-12 17:10:59.070152728 +0000 UTC m=+10.378891264"
Sep 12 17:11:04.877125 sudo[2491]: pam_unix(sudo:session): session closed for user root
Sep 12 17:11:04.902973 sshd[2487]: pam_unix(sshd:session): session closed for user core
Sep 12 17:11:04.912519 systemd[1]: sshd@6-172.31.22.180:22-147.75.109.163:51476.service: Deactivated successfully.
Sep 12 17:11:04.929410 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 17:11:04.935008 systemd-logind[2103]: Session 7 logged out. Waiting for processes to exit.
Sep 12 17:11:04.938447 systemd-logind[2103]: Removed session 7.
Sep 12 17:11:14.895294 kubelet[3602]: I0912 17:11:14.895186 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dcd9eb50-4fc1-45de-a1f2-a95d1c5f9438-typha-certs\") pod \"calico-typha-5dbcbc88ff-7pggk\" (UID: \"dcd9eb50-4fc1-45de-a1f2-a95d1c5f9438\") " pod="calico-system/calico-typha-5dbcbc88ff-7pggk"
Sep 12 17:11:14.895294 kubelet[3602]: I0912 17:11:14.895285 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db4cq\" (UniqueName: \"kubernetes.io/projected/dcd9eb50-4fc1-45de-a1f2-a95d1c5f9438-kube-api-access-db4cq\") pod \"calico-typha-5dbcbc88ff-7pggk\" (UID: \"dcd9eb50-4fc1-45de-a1f2-a95d1c5f9438\") " pod="calico-system/calico-typha-5dbcbc88ff-7pggk"
Sep 12 17:11:14.897741 kubelet[3602]: I0912 17:11:14.895335 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd9eb50-4fc1-45de-a1f2-a95d1c5f9438-tigera-ca-bundle\") pod \"calico-typha-5dbcbc88ff-7pggk\" (UID: \"dcd9eb50-4fc1-45de-a1f2-a95d1c5f9438\") " pod="calico-system/calico-typha-5dbcbc88ff-7pggk"
Sep 12 17:11:15.154167 containerd[2128]: time="2025-09-12T17:11:15.153913440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5dbcbc88ff-7pggk,Uid:dcd9eb50-4fc1-45de-a1f2-a95d1c5f9438,Namespace:calico-system,Attempt:0,}"
Sep 12 17:11:15.232032 containerd[2128]: time="2025-09-12T17:11:15.231238368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
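
The two pod_startup_latency_tracker entries above relate as follows: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that value minus the image-pulling window (lastFinishedPulling minus firstStartedPulling). For tigera-operator: 6.070152728s - (17:10:56.231531354 - 17:10:54.106923927 = 2.124607427s) ≈ 3.945545s, which matches the logged 3.945545313 up to monotonic-clock rounding. For kube-proxy the pulling timestamps are zero values (the image was already present), so SLO equals E2E at 4.046827066s. A sketch of the arithmetic, not the kubelet's actual code:

    package main

    import (
        "fmt"
        "time"
    )

    // Reconstructs the tigera-operator figures from the entry above:
    // SLO duration = end-to-end startup time minus the image-pulling window.
    func main() {
        parse := func(s string) time.Time {
            t, _ := time.Parse(time.RFC3339Nano, s)
            return t
        }
        created := parse("2025-09-12T17:10:53Z")             // podCreationTimestamp
        observed := parse("2025-09-12T17:10:59.070152728Z")  // watchObservedRunningTime
        pullStart := parse("2025-09-12T17:10:54.106923927Z") // firstStartedPulling
        pullEnd := parse("2025-09-12T17:10:56.231531354Z")   // lastFinishedPulling

        e2e := observed.Sub(created)        // 6.070152728s
        slo := e2e - pullEnd.Sub(pullStart) // ≈ 3.945545301s vs logged 3.945545313
        fmt.Println(e2e, slo)
    }
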
Sep 12 17:11:15.232032 containerd[2128]: time="2025-09-12T17:11:15.231337992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 17:11:15.232879 containerd[2128]: time="2025-09-12T17:11:15.232338720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:11:15.232879 containerd[2128]: time="2025-09-12T17:11:15.232569984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 17:11:15.313173 kubelet[3602]: I0912 17:11:15.309153 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fffaddd-98d9-4b18-88d3-4e475d093efe-xtables-lock\") pod \"calico-node-262tm\" (UID: \"5fffaddd-98d9-4b18-88d3-4e475d093efe\") " pod="calico-system/calico-node-262tm"
Sep 12 17:11:15.313173 kubelet[3602]: I0912 17:11:15.309216 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5fffaddd-98d9-4b18-88d3-4e475d093efe-var-run-calico\") pod \"calico-node-262tm\" (UID: \"5fffaddd-98d9-4b18-88d3-4e475d093efe\") " pod="calico-system/calico-node-262tm"
Sep 12 17:11:15.313173 kubelet[3602]: I0912 17:11:15.309258 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5fffaddd-98d9-4b18-88d3-4e475d093efe-flexvol-driver-host\") pod \"calico-node-262tm\" (UID: \"5fffaddd-98d9-4b18-88d3-4e475d093efe\") " pod="calico-system/calico-node-262tm"
Sep 12 17:11:15.313173 kubelet[3602]: I0912 17:11:15.309300 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5fffaddd-98d9-4b18-88d3-4e475d093efe-var-lib-calico\") pod \"calico-node-262tm\" (UID: \"5fffaddd-98d9-4b18-88d3-4e475d093efe\") " pod="calico-system/calico-node-262tm"
Sep 12 17:11:15.313173 kubelet[3602]: I0912 17:11:15.309340 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5fffaddd-98d9-4b18-88d3-4e475d093efe-cni-log-dir\") pod \"calico-node-262tm\" (UID: \"5fffaddd-98d9-4b18-88d3-4e475d093efe\") " pod="calico-system/calico-node-262tm"
Sep 12 17:11:15.315872 kubelet[3602]: I0912 17:11:15.309377 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5fffaddd-98d9-4b18-88d3-4e475d093efe-node-certs\") pod \"calico-node-262tm\" (UID: \"5fffaddd-98d9-4b18-88d3-4e475d093efe\") " pod="calico-system/calico-node-262tm"
Sep 12 17:11:15.315872 kubelet[3602]: I0912 17:11:15.309413 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fffaddd-98d9-4b18-88d3-4e475d093efe-tigera-ca-bundle\") pod \"calico-node-262tm\" (UID: \"5fffaddd-98d9-4b18-88d3-4e475d093efe\") " pod="calico-system/calico-node-262tm"
Sep 12 17:11:15.315872 kubelet[3602]: I0912 17:11:15.309452 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5fffaddd-98d9-4b18-88d3-4e475d093efe-cni-net-dir\") pod \"calico-node-262tm\" (UID: \"5fffaddd-98d9-4b18-88d3-4e475d093efe\") " pod="calico-system/calico-node-262tm"
Sep 12 17:11:15.315872 kubelet[3602]: I0912 17:11:15.309489 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5fffaddd-98d9-4b18-88d3-4e475d093efe-policysync\") pod \"calico-node-262tm\" (UID: \"5fffaddd-98d9-4b18-88d3-4e475d093efe\") " pod="calico-system/calico-node-262tm"
Sep 12 17:11:15.315872 kubelet[3602]: I0912 17:11:15.309532 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fffaddd-98d9-4b18-88d3-4e475d093efe-lib-modules\") pod \"calico-node-262tm\" (UID: \"5fffaddd-98d9-4b18-88d3-4e475d093efe\") " pod="calico-system/calico-node-262tm"
Sep 12 17:11:15.316174 kubelet[3602]: I0912 17:11:15.309569 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44fhb\" (UniqueName: \"kubernetes.io/projected/5fffaddd-98d9-4b18-88d3-4e475d093efe-kube-api-access-44fhb\") pod \"calico-node-262tm\" (UID: \"5fffaddd-98d9-4b18-88d3-4e475d093efe\") " pod="calico-system/calico-node-262tm"
Sep 12 17:11:15.316174 kubelet[3602]: I0912 17:11:15.309605 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5fffaddd-98d9-4b18-88d3-4e475d093efe-cni-bin-dir\") pod \"calico-node-262tm\" (UID: \"5fffaddd-98d9-4b18-88d3-4e475d093efe\") " pod="calico-system/calico-node-262tm"
Sep 12 17:11:15.435835 kubelet[3602]: E0912 17:11:15.433797 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2fhht" podUID="99059a22-90fb-418d-a2c0-7e943cbdb29d"
Sep 12 17:11:15.435996 containerd[2128]: time="2025-09-12T17:11:15.435242621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5dbcbc88ff-7pggk,Uid:dcd9eb50-4fc1-45de-a1f2-a95d1c5f9438,Namespace:calico-system,Attempt:0,} returns sandbox id \"6442ced7b5a960ff4cf87bb71838eff2ee30c1e505425717ac2fb25ee81b5853\""
Sep 12 17:11:15.442450 kubelet[3602]: E0912 17:11:15.442356 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.444723 kubelet[3602]: W0912 17:11:15.442487 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.444723 kubelet[3602]: E0912 17:11:15.442786 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.447635 kubelet[3602]: E0912 17:11:15.447330 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.450164 kubelet[3602]: W0912 17:11:15.449162 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.450164 kubelet[3602]: E0912 17:11:15.449222 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.453420 kubelet[3602]: E0912 17:11:15.453383 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.455013 kubelet[3602]: W0912 17:11:15.454741 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.459640 containerd[2128]: time="2025-09-12T17:11:15.458258798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 17:11:15.461126 kubelet[3602]: E0912 17:11:15.461065 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.461126 kubelet[3602]: W0912 17:11:15.461111 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.471743 kubelet[3602]: E0912 17:11:15.463224 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.471743 kubelet[3602]: E0912 17:11:15.463278 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.471743 kubelet[3602]: E0912 17:11:15.464475 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.471743 kubelet[3602]: W0912 17:11:15.464502 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.471743 kubelet[3602]: E0912 17:11:15.466059 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.471743 kubelet[3602]: E0912 17:11:15.469430 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.471743 kubelet[3602]: W0912 17:11:15.469463 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.472253 kubelet[3602]: E0912 17:11:15.471769 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.475787 kubelet[3602]: E0912 17:11:15.473129 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.475787 kubelet[3602]: W0912 17:11:15.473280 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.475787 kubelet[3602]: E0912 17:11:15.473318 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.491721 kubelet[3602]: E0912 17:11:15.485876 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.491721 kubelet[3602]: W0912 17:11:15.485958 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.491721 kubelet[3602]: E0912 17:11:15.486144 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.519905 kubelet[3602]: E0912 17:11:15.519762 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.519905 kubelet[3602]: W0912 17:11:15.519804 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.520839 kubelet[3602]: E0912 17:11:15.520652 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.523185 kubelet[3602]: E0912 17:11:15.523132 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.523493 kubelet[3602]: W0912 17:11:15.523175 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.524231 kubelet[3602]: E0912 17:11:15.523503 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.526293 kubelet[3602]: E0912 17:11:15.526259 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.526637 kubelet[3602]: W0912 17:11:15.526457 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.526637 kubelet[3602]: E0912 17:11:15.526494 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
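
Every E/W/E triplet above is one failed probe of the FlexVolume plugin directory: the kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary does not exist yet, so the call produces empty output and unmarshalling "" fails with "unexpected end of JSON input". A FlexVolume driver is just an executable that answers init with a JSON status on stdout; a minimal sketch of that contract (illustrative only, not Calico's actual uds driver):

    // Minimal FlexVolume-style driver sketch: an executable that answers the
    // kubelet's "init" probe with the JSON status it expects on stdout.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out)) // the kubelet unmarshals this; "" is what breaks above
            return
        }
        // Any unsupported call reports "Not supported" per the FlexVolume contract.
        out, _ := json.Marshal(driverStatus{Status: "Not supported"})
        fmt.Println(string(out))
        os.Exit(1)
    }
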
Sep 12 17:11:15.527062 kubelet[3602]: E0912 17:11:15.527040 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.527171 kubelet[3602]: W0912 17:11:15.527150 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.527281 kubelet[3602]: E0912 17:11:15.527259 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.527749 kubelet[3602]: E0912 17:11:15.527726 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.527922 kubelet[3602]: W0912 17:11:15.527897 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.528080 kubelet[3602]: E0912 17:11:15.528055 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.528614 kubelet[3602]: E0912 17:11:15.528584 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.528851 kubelet[3602]: W0912 17:11:15.528823 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.528969 kubelet[3602]: E0912 17:11:15.528945 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.529764 kubelet[3602]: E0912 17:11:15.529706 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.529985 kubelet[3602]: W0912 17:11:15.529957 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.530547 kubelet[3602]: E0912 17:11:15.530507 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.547396 kubelet[3602]: E0912 17:11:15.547335 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.547666 kubelet[3602]: W0912 17:11:15.547575 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.547666 kubelet[3602]: E0912 17:11:15.547621 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.551783 containerd[2128]: time="2025-09-12T17:11:15.549873338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-262tm,Uid:5fffaddd-98d9-4b18-88d3-4e475d093efe,Namespace:calico-system,Attempt:0,}"
Sep 12 17:11:15.551961 kubelet[3602]: E0912 17:11:15.551382 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.551961 kubelet[3602]: W0912 17:11:15.551411 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.551961 kubelet[3602]: E0912 17:11:15.551445 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.553652 kubelet[3602]: E0912 17:11:15.553611 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.556006 kubelet[3602]: W0912 17:11:15.554662 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.556006 kubelet[3602]: E0912 17:11:15.554766 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.556006 kubelet[3602]: E0912 17:11:15.555431 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.556006 kubelet[3602]: W0912 17:11:15.555457 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.556006 kubelet[3602]: E0912 17:11:15.555489 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.558410 kubelet[3602]: E0912 17:11:15.558370 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.561191 kubelet[3602]: W0912 17:11:15.560822 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.561191 kubelet[3602]: E0912 17:11:15.560882 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.577222 kubelet[3602]: E0912 17:11:15.575440 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.577972 kubelet[3602]: W0912 17:11:15.577790 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.581594 kubelet[3602]: E0912 17:11:15.578852 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.581594 kubelet[3602]: E0912 17:11:15.580482 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.581594 kubelet[3602]: W0912 17:11:15.580512 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.581594 kubelet[3602]: E0912 17:11:15.580545 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.583467 kubelet[3602]: E0912 17:11:15.583187 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.583467 kubelet[3602]: W0912 17:11:15.583231 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.583467 kubelet[3602]: E0912 17:11:15.583319 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.584793 kubelet[3602]: E0912 17:11:15.584405 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.584908 kubelet[3602]: W0912 17:11:15.584792 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.584908 kubelet[3602]: E0912 17:11:15.584859 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.595025 kubelet[3602]: E0912 17:11:15.594967 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.595025 kubelet[3602]: W0912 17:11:15.595012 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.596256 kubelet[3602]: E0912 17:11:15.595048 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.597726 kubelet[3602]: E0912 17:11:15.596865 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.597726 kubelet[3602]: W0912 17:11:15.596906 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.597726 kubelet[3602]: E0912 17:11:15.596943 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.600068 kubelet[3602]: E0912 17:11:15.598544 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.600068 kubelet[3602]: W0912 17:11:15.598592 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.600068 kubelet[3602]: E0912 17:11:15.598627 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.600068 kubelet[3602]: E0912 17:11:15.599307 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.600068 kubelet[3602]: W0912 17:11:15.599332 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.600068 kubelet[3602]: E0912 17:11:15.599360 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.600491 kubelet[3602]: E0912 17:11:15.600400 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.600491 kubelet[3602]: W0912 17:11:15.600439 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.600491 kubelet[3602]: E0912 17:11:15.600472 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.600657 kubelet[3602]: I0912 17:11:15.600518 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/99059a22-90fb-418d-a2c0-7e943cbdb29d-socket-dir\") pod \"csi-node-driver-2fhht\" (UID: \"99059a22-90fb-418d-a2c0-7e943cbdb29d\") " pod="calico-system/csi-node-driver-2fhht"
Sep 12 17:11:15.604230 kubelet[3602]: E0912 17:11:15.602974 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.604230 kubelet[3602]: W0912 17:11:15.603010 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.604230 kubelet[3602]: E0912 17:11:15.603765 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.605894 kubelet[3602]: I0912 17:11:15.604260 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/99059a22-90fb-418d-a2c0-7e943cbdb29d-kubelet-dir\") pod \"csi-node-driver-2fhht\" (UID: \"99059a22-90fb-418d-a2c0-7e943cbdb29d\") " pod="calico-system/csi-node-driver-2fhht"
Sep 12 17:11:15.609384 kubelet[3602]: E0912 17:11:15.606854 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.609384 kubelet[3602]: W0912 17:11:15.606897 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.609384 kubelet[3602]: E0912 17:11:15.606942 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.609384 kubelet[3602]: E0912 17:11:15.607423 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.609384 kubelet[3602]: W0912 17:11:15.607445 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.609384 kubelet[3602]: E0912 17:11:15.607590 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.609384 kubelet[3602]: E0912 17:11:15.608245 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.609384 kubelet[3602]: W0912 17:11:15.608272 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.609384 kubelet[3602]: E0912 17:11:15.608874 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.610005 kubelet[3602]: I0912 17:11:15.608932 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwhdt\" (UniqueName: \"kubernetes.io/projected/99059a22-90fb-418d-a2c0-7e943cbdb29d-kube-api-access-xwhdt\") pod \"csi-node-driver-2fhht\" (UID: \"99059a22-90fb-418d-a2c0-7e943cbdb29d\") " pod="calico-system/csi-node-driver-2fhht"
Sep 12 17:11:15.610005 kubelet[3602]: E0912 17:11:15.609752 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.610005 kubelet[3602]: W0912 17:11:15.609782 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.610005 kubelet[3602]: E0912 17:11:15.609822 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.614744 kubelet[3602]: E0912 17:11:15.610751 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.614744 kubelet[3602]: W0912 17:11:15.610789 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.614744 kubelet[3602]: E0912 17:11:15.611053 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.614744 kubelet[3602]: E0912 17:11:15.612184 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.614744 kubelet[3602]: W0912 17:11:15.612214 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.614744 kubelet[3602]: E0912 17:11:15.612516 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.614744 kubelet[3602]: I0912 17:11:15.612566 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/99059a22-90fb-418d-a2c0-7e943cbdb29d-registration-dir\") pod \"csi-node-driver-2fhht\" (UID: \"99059a22-90fb-418d-a2c0-7e943cbdb29d\") " pod="calico-system/csi-node-driver-2fhht"
Sep 12 17:11:15.614744 kubelet[3602]: E0912 17:11:15.613402 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.614744 kubelet[3602]: W0912 17:11:15.613428 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.615291 kubelet[3602]: E0912 17:11:15.613465 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.615291 kubelet[3602]: E0912 17:11:15.614498 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.615291 kubelet[3602]: W0912 17:11:15.614524 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.615291 kubelet[3602]: E0912 17:11:15.615083 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.615484 kubelet[3602]: E0912 17:11:15.615458 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.615484 kubelet[3602]: W0912 17:11:15.615476 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.619487 kubelet[3602]: E0912 17:11:15.616305 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.619487 kubelet[3602]: I0912 17:11:15.616378 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/99059a22-90fb-418d-a2c0-7e943cbdb29d-varrun\") pod \"csi-node-driver-2fhht\" (UID: \"99059a22-90fb-418d-a2c0-7e943cbdb29d\") " pod="calico-system/csi-node-driver-2fhht"
Sep 12 17:11:15.619487 kubelet[3602]: E0912 17:11:15.616894 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.619487 kubelet[3602]: W0912 17:11:15.616918 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.619487 kubelet[3602]: E0912 17:11:15.616954 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:11:15.619487 kubelet[3602]: E0912 17:11:15.617830 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:11:15.619487 kubelet[3602]: W0912 17:11:15.617854 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:11:15.619487 kubelet[3602]: E0912 17:11:15.618144 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
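
The socket-dir, kubelet-dir, registration-dir, and varrun volumes interleaved above are the standard plumbing of a CSI node plugin: the registration directory is watched by the kubelet's plugin watcher, the socket directory hosts the driver's gRPC endpoint, and kubelet-dir gives the driver access to pod volume paths. The concrete host paths are not visible in this log; the values below are assumptions from typical CSI node-plugin manifests:

    package main

    import "fmt"

    // Volume names come from the log; the paths are assumed defaults for a
    // CSI node plugin (here guessed for Calico's csi.tigera.io driver).
    var csiNodeDriverHostPaths = map[string]string{
        "registration-dir": "/var/lib/kubelet/plugins_registry",      // kubelet plugin watcher
        "socket-dir":       "/var/lib/kubelet/plugins/csi.tigera.io", // driver's CSI gRPC socket
        "kubelet-dir":      "/var/lib/kubelet",                       // pod volume mount paths
        "varrun":           "/var/run",                               // node-agent sockets
    }

    func main() {
        for name, path := range csiNodeDriverHostPaths {
            fmt.Printf("%-16s -> %s\n", name, path)
        }
    }
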
Error: unexpected end of JSON input" Sep 12 17:11:15.622388 kubelet[3602]: E0912 17:11:15.621783 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.622388 kubelet[3602]: W0912 17:11:15.621819 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.622388 kubelet[3602]: E0912 17:11:15.621856 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.622388 kubelet[3602]: E0912 17:11:15.622287 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.622388 kubelet[3602]: W0912 17:11:15.622310 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.622388 kubelet[3602]: E0912 17:11:15.622336 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.663006 containerd[2128]: time="2025-09-12T17:11:15.660302835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:15.663006 containerd[2128]: time="2025-09-12T17:11:15.660399171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:15.663006 containerd[2128]: time="2025-09-12T17:11:15.660437355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:15.663006 containerd[2128]: time="2025-09-12T17:11:15.660633699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:15.727008 kubelet[3602]: E0912 17:11:15.718885 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.727008 kubelet[3602]: W0912 17:11:15.719047 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.727008 kubelet[3602]: E0912 17:11:15.719090 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.728369 kubelet[3602]: E0912 17:11:15.727440 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.730088 kubelet[3602]: W0912 17:11:15.730029 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.732391 kubelet[3602]: E0912 17:11:15.732353 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:15.735203 kubelet[3602]: E0912 17:11:15.735153 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.738002 kubelet[3602]: W0912 17:11:15.737954 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.738220 kubelet[3602]: E0912 17:11:15.738193 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.738984 kubelet[3602]: E0912 17:11:15.738943 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.739249 kubelet[3602]: W0912 17:11:15.739226 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.739494 kubelet[3602]: E0912 17:11:15.739468 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.740987 kubelet[3602]: E0912 17:11:15.740929 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.742221 kubelet[3602]: W0912 17:11:15.740961 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.742221 kubelet[3602]: E0912 17:11:15.741638 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.746390 kubelet[3602]: E0912 17:11:15.745011 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.746390 kubelet[3602]: W0912 17:11:15.745044 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.747011 kubelet[3602]: E0912 17:11:15.746746 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.747437 kubelet[3602]: E0912 17:11:15.747375 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.747633 kubelet[3602]: W0912 17:11:15.747405 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.748880 kubelet[3602]: E0912 17:11:15.748672 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:15.750508 kubelet[3602]: E0912 17:11:15.750195 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.750508 kubelet[3602]: W0912 17:11:15.750305 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.751237 kubelet[3602]: E0912 17:11:15.751050 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.752242 kubelet[3602]: E0912 17:11:15.752181 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.753132 kubelet[3602]: W0912 17:11:15.752214 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.753337 kubelet[3602]: E0912 17:11:15.753285 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.754699 kubelet[3602]: E0912 17:11:15.754251 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.754699 kubelet[3602]: W0912 17:11:15.754284 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.755461 kubelet[3602]: E0912 17:11:15.755109 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.757500 kubelet[3602]: E0912 17:11:15.757322 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.757500 kubelet[3602]: W0912 17:11:15.757356 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.758195 kubelet[3602]: E0912 17:11:15.757741 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.759032 kubelet[3602]: E0912 17:11:15.758908 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.760721 kubelet[3602]: W0912 17:11:15.760175 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.761518 kubelet[3602]: E0912 17:11:15.760923 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:15.765461 kubelet[3602]: E0912 17:11:15.764352 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.765461 kubelet[3602]: W0912 17:11:15.764389 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.769234 kubelet[3602]: E0912 17:11:15.768871 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.769234 kubelet[3602]: E0912 17:11:15.768987 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.769234 kubelet[3602]: W0912 17:11:15.769060 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.770031 kubelet[3602]: E0912 17:11:15.769819 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.771409 kubelet[3602]: E0912 17:11:15.770455 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.771409 kubelet[3602]: W0912 17:11:15.771209 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.772866 kubelet[3602]: E0912 17:11:15.772486 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.775745 kubelet[3602]: E0912 17:11:15.774158 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.775745 kubelet[3602]: W0912 17:11:15.774187 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.776661 kubelet[3602]: E0912 17:11:15.776134 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.777477 kubelet[3602]: E0912 17:11:15.777017 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.777477 kubelet[3602]: W0912 17:11:15.777048 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.778352 kubelet[3602]: E0912 17:11:15.778230 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:15.780033 kubelet[3602]: E0912 17:11:15.779868 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.780033 kubelet[3602]: W0912 17:11:15.779902 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.780930 kubelet[3602]: E0912 17:11:15.780391 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.781796 kubelet[3602]: E0912 17:11:15.781426 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.781796 kubelet[3602]: W0912 17:11:15.781458 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.782555 kubelet[3602]: E0912 17:11:15.782419 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.785312 kubelet[3602]: E0912 17:11:15.785162 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.785312 kubelet[3602]: W0912 17:11:15.785197 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.786537 kubelet[3602]: E0912 17:11:15.786373 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.788386 kubelet[3602]: E0912 17:11:15.787910 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.788847 kubelet[3602]: W0912 17:11:15.788585 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.788847 kubelet[3602]: E0912 17:11:15.788731 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.791222 kubelet[3602]: E0912 17:11:15.791078 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.791222 kubelet[3602]: W0912 17:11:15.791113 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.791985 kubelet[3602]: E0912 17:11:15.791504 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:15.793277 kubelet[3602]: E0912 17:11:15.792965 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.793277 kubelet[3602]: W0912 17:11:15.792999 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.796017 kubelet[3602]: E0912 17:11:15.795771 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.796379 kubelet[3602]: E0912 17:11:15.796291 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.796379 kubelet[3602]: W0912 17:11:15.796320 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.796791 kubelet[3602]: E0912 17:11:15.796543 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.797508 kubelet[3602]: E0912 17:11:15.797126 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.797508 kubelet[3602]: W0912 17:11:15.797153 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.797508 kubelet[3602]: E0912 17:11:15.797184 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.827761 kubelet[3602]: E0912 17:11:15.826903 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:15.828051 kubelet[3602]: W0912 17:11:15.827927 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:15.828051 kubelet[3602]: E0912 17:11:15.827994 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:15.889421 containerd[2128]: time="2025-09-12T17:11:15.889305208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-262tm,Uid:5fffaddd-98d9-4b18-88d3-4e475d093efe,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc7d0dc49ad658ad4a49bfb8e99e500e2079d7d8294f59d0c89655c36264ec55\"" Sep 12 17:11:16.792526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2358592379.mount: Deactivated successfully. 
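A note on the FlexVolume error burst above: kubelet periodically probes every directory under its volume-plugin dir (here nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec) by exec'ing the driver binary with `init` and unmarshalling its stdout as JSON. The `uds` binary is absent, so the call yields no output, and Go's encoding/json reports exactly "unexpected end of JSON input" for empty input. A minimal sketch of that failure mode, with an assumed driverStatus shape (not kubelet's actual types):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus approximates the JSON a FlexVolume driver is expected to
// print, e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probe(driver string) (*driverStatus, error) {
	out, _ := exec.Command(driver, "init").CombinedOutput() // missing binary -> empty output
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With empty output this is exactly "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output %q: %w", out, err)
	}
	return &st, nil
}

func main() {
	_, err := probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}
```

As the log itself says, the probe just skips the nodeagent~uds directory and retries on the next plugin scan, so the burst is noisy but harmless.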
Sep 12 17:11:16.922071 kubelet[3602]: E0912 17:11:16.922005 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2fhht" podUID="99059a22-90fb-418d-a2c0-7e943cbdb29d" Sep 12 17:11:17.699135 containerd[2128]: time="2025-09-12T17:11:17.699042809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:17.701439 containerd[2128]: time="2025-09-12T17:11:17.701334161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 12 17:11:17.703220 containerd[2128]: time="2025-09-12T17:11:17.703094321Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:17.712368 containerd[2128]: time="2025-09-12T17:11:17.710593553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:17.712368 containerd[2128]: time="2025-09-12T17:11:17.712081469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.253741599s" Sep 12 17:11:17.712368 containerd[2128]: time="2025-09-12T17:11:17.712135565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 12 17:11:17.722900 containerd[2128]: time="2025-09-12T17:11:17.722739857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:11:17.774285 containerd[2128]: time="2025-09-12T17:11:17.773873117Z" level=info msg="CreateContainer within sandbox \"6442ced7b5a960ff4cf87bb71838eff2ee30c1e505425717ac2fb25ee81b5853\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:11:17.815081 containerd[2128]: time="2025-09-12T17:11:17.814894073Z" level=info msg="CreateContainer within sandbox \"6442ced7b5a960ff4cf87bb71838eff2ee30c1e505425717ac2fb25ee81b5853\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b1b62216d6801b4435bf5f6957153404103599fb7abd04905d2e248ca805d16b\"" Sep 12 17:11:17.818233 containerd[2128]: time="2025-09-12T17:11:17.817622801Z" level=info msg="StartContainer for \"b1b62216d6801b4435bf5f6957153404103599fb7abd04905d2e248ca805d16b\"" Sep 12 17:11:18.155013 containerd[2128]: time="2025-09-12T17:11:18.154863507Z" level=info msg="StartContainer for \"b1b62216d6801b4435bf5f6957153404103599fb7abd04905d2e248ca805d16b\" returns successfully" Sep 12 17:11:18.923741 kubelet[3602]: E0912 17:11:18.923247 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2fhht" podUID="99059a22-90fb-418d-a2c0-7e943cbdb29d" Sep 
12 17:11:19.023403 containerd[2128]: time="2025-09-12T17:11:19.023327955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:19.024937 containerd[2128]: time="2025-09-12T17:11:19.024845067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 12 17:11:19.026477 containerd[2128]: time="2025-09-12T17:11:19.026007543Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:19.031130 containerd[2128]: time="2025-09-12T17:11:19.031036599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:19.032861 containerd[2128]: time="2025-09-12T17:11:19.032811915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.309040154s" Sep 12 17:11:19.033012 containerd[2128]: time="2025-09-12T17:11:19.032981055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 12 17:11:19.037590 containerd[2128]: time="2025-09-12T17:11:19.037264875Z" level=info msg="CreateContainer within sandbox \"fc7d0dc49ad658ad4a49bfb8e99e500e2079d7d8294f59d0c89655c36264ec55\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:11:19.064045 containerd[2128]: time="2025-09-12T17:11:19.063986415Z" level=info msg="CreateContainer within sandbox \"fc7d0dc49ad658ad4a49bfb8e99e500e2079d7d8294f59d0c89655c36264ec55\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8cb23b9d9b59686d89e0a93b276284418975d3b80cc319bd0f8f568ab651435f\"" Sep 12 17:11:19.068338 containerd[2128]: time="2025-09-12T17:11:19.068279859Z" level=info msg="StartContainer for \"8cb23b9d9b59686d89e0a93b276284418975d3b80cc319bd0f8f568ab651435f\"" Sep 12 17:11:19.178292 kubelet[3602]: I0912 17:11:19.176136 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5dbcbc88ff-7pggk" podStartSLOduration=2.912126172 podStartE2EDuration="5.176116252s" podCreationTimestamp="2025-09-12 17:11:14 +0000 UTC" firstStartedPulling="2025-09-12 17:11:15.451459753 +0000 UTC m=+26.760198301" lastFinishedPulling="2025-09-12 17:11:17.715449773 +0000 UTC m=+29.024188381" observedRunningTime="2025-09-12 17:11:19.175976524 +0000 UTC m=+30.484715084" watchObservedRunningTime="2025-09-12 17:11:19.176116252 +0000 UTC m=+30.484854788" Sep 12 17:11:19.220478 containerd[2128]: time="2025-09-12T17:11:19.220396864Z" level=info msg="StartContainer for \"8cb23b9d9b59686d89e0a93b276284418975d3b80cc319bd0f8f568ab651435f\" returns successfully" Sep 12 17:11:19.231222 kubelet[3602]: E0912 17:11:19.231176 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:19.231222 kubelet[3602]: 
W0912 17:11:19.231217 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:19.231711 kubelet[3602]: E0912 17:11:19.231252 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:19.232983 kubelet[3602]: E0912 17:11:19.232944 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:19.232983 kubelet[3602]: W0912 17:11:19.232981 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:19.233385 kubelet[3602]: E0912 17:11:19.233014 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:19.235078 kubelet[3602]: E0912 17:11:19.235034 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:19.235078 kubelet[3602]: W0912 17:11:19.235074 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:19.235495 kubelet[3602]: E0912 17:11:19.235107 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:19.236328 kubelet[3602]: E0912 17:11:19.236291 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:19.236328 kubelet[3602]: W0912 17:11:19.236327 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:19.236523 kubelet[3602]: E0912 17:11:19.236357 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:19.237863 kubelet[3602]: E0912 17:11:19.237831 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:19.238177 kubelet[3602]: W0912 17:11:19.238048 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:19.238177 kubelet[3602]: E0912 17:11:19.238087 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:19.238908 kubelet[3602]: E0912 17:11:19.238715 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:19.238908 kubelet[3602]: W0912 17:11:19.238744 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:19.238908 kubelet[3602]: E0912 17:11:19.238771 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:19.239613 kubelet[3602]: E0912 17:11:19.239416 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:19.239613 kubelet[3602]: W0912 17:11:19.239485 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:19.239613 kubelet[3602]: E0912 17:11:19.239516 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:19.240563 kubelet[3602]: E0912 17:11:19.240346 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:19.240563 kubelet[3602]: W0912 17:11:19.240378 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:19.240563 kubelet[3602]: E0912 17:11:19.240405 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:19.241747 kubelet[3602]: E0912 17:11:19.241097 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:19.241747 kubelet[3602]: W0912 17:11:19.241126 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:19.241747 kubelet[3602]: E0912 17:11:19.241150 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:19.242308 kubelet[3602]: E0912 17:11:19.241950 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:19.242308 kubelet[3602]: W0912 17:11:19.241977 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:19.242308 kubelet[3602]: E0912 17:11:19.242004 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:19.244106 kubelet[3602]: E0912 17:11:19.243815 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:19.244106 kubelet[3602]: W0912 17:11:19.243850 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:19.244106 kubelet[3602]: E0912 17:11:19.243883 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:19.245844 kubelet[3602]: E0912 17:11:19.245546 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:19.245844 kubelet[3602]: W0912 17:11:19.245578 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:19.245844 kubelet[3602]: E0912 17:11:19.245610 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:19.246570 kubelet[3602]: E0912 17:11:19.246422 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:19.246570 kubelet[3602]: W0912 17:11:19.246450 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:19.246570 kubelet[3602]: E0912 17:11:19.246479 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:19.247202 kubelet[3602]: E0912 17:11:19.247079 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:19.247202 kubelet[3602]: W0912 17:11:19.247103 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:19.247202 kubelet[3602]: E0912 17:11:19.247128 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:11:19.247943 kubelet[3602]: E0912 17:11:19.247771 3602 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:11:19.247943 kubelet[3602]: W0912 17:11:19.247797 3602 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:11:19.247943 kubelet[3602]: E0912 17:11:19.247821 3602 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:11:19.301655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cb23b9d9b59686d89e0a93b276284418975d3b80cc319bd0f8f568ab651435f-rootfs.mount: Deactivated successfully. Sep 12 17:11:19.365848 containerd[2128]: time="2025-09-12T17:11:19.365303777Z" level=error msg="collecting metrics for 8cb23b9d9b59686d89e0a93b276284418975d3b80cc319bd0f8f568ab651435f" error="cgroups: cgroup deleted: unknown" Sep 12 17:11:19.680494 containerd[2128]: time="2025-09-12T17:11:19.680348454Z" level=info msg="shim disconnected" id=8cb23b9d9b59686d89e0a93b276284418975d3b80cc319bd0f8f568ab651435f namespace=k8s.io Sep 12 17:11:19.680966 containerd[2128]: time="2025-09-12T17:11:19.680466702Z" level=warning msg="cleaning up after shim disconnected" id=8cb23b9d9b59686d89e0a93b276284418975d3b80cc319bd0f8f568ab651435f namespace=k8s.io Sep 12 17:11:19.680966 containerd[2128]: time="2025-09-12T17:11:19.680672910Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:20.159743 kubelet[3602]: I0912 17:11:20.159663 3602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:11:20.164845 containerd[2128]: time="2025-09-12T17:11:20.164148797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:11:20.922574 kubelet[3602]: E0912 17:11:20.922106 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2fhht" podUID="99059a22-90fb-418d-a2c0-7e943cbdb29d" Sep 12 17:11:22.921378 kubelet[3602]: E0912 17:11:22.921306 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2fhht" podUID="99059a22-90fb-418d-a2c0-7e943cbdb29d" Sep 12 17:11:23.066102 containerd[2128]: time="2025-09-12T17:11:23.066039511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:23.067542 containerd[2128]: time="2025-09-12T17:11:23.067491271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 12 17:11:23.068604 containerd[2128]: time="2025-09-12T17:11:23.068490811Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:23.073172 containerd[2128]: time="2025-09-12T17:11:23.073089979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:23.075771 containerd[2128]: time="2025-09-12T17:11:23.074762167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.910549734s" Sep 12 17:11:23.075771 containerd[2128]: time="2025-09-12T17:11:23.074820523Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 12 17:11:23.081178 containerd[2128]: time="2025-09-12T17:11:23.080749063Z" level=info msg="CreateContainer within sandbox \"fc7d0dc49ad658ad4a49bfb8e99e500e2079d7d8294f59d0c89655c36264ec55\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:11:23.111038 containerd[2128]: time="2025-09-12T17:11:23.110980628Z" level=info msg="CreateContainer within sandbox \"fc7d0dc49ad658ad4a49bfb8e99e500e2079d7d8294f59d0c89655c36264ec55\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ded6b76fb34456ec084bef9a9ca0cc8c501def8b47f57117897c207a13fd030c\"" Sep 12 17:11:23.112927 containerd[2128]: time="2025-09-12T17:11:23.112824128Z" level=info msg="StartContainer for \"ded6b76fb34456ec084bef9a9ca0cc8c501def8b47f57117897c207a13fd030c\"" Sep 12 17:11:23.234208 systemd[1]: run-containerd-runc-k8s.io-ded6b76fb34456ec084bef9a9ca0cc8c501def8b47f57117897c207a13fd030c-runc.hZCnim.mount: Deactivated successfully. Sep 12 17:11:23.402829 containerd[2128]: time="2025-09-12T17:11:23.402750981Z" level=info msg="StartContainer for \"ded6b76fb34456ec084bef9a9ca0cc8c501def8b47f57117897c207a13fd030c\" returns successfully" Sep 12 17:11:24.384505 containerd[2128]: time="2025-09-12T17:11:24.384348058Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:11:24.416934 kubelet[3602]: I0912 17:11:24.414999 3602 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:11:24.442664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ded6b76fb34456ec084bef9a9ca0cc8c501def8b47f57117897c207a13fd030c-rootfs.mount: Deactivated successfully. 
Sep 12 17:11:24.541352 kubelet[3602]: I0912 17:11:24.536600 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e2fad767-5bdb-4cdd-95b1-9b4c25a4c939-calico-apiserver-certs\") pod \"calico-apiserver-5dcf8cdb5c-s49l2\" (UID: \"e2fad767-5bdb-4cdd-95b1-9b4c25a4c939\") " pod="calico-apiserver/calico-apiserver-5dcf8cdb5c-s49l2" Sep 12 17:11:24.541352 kubelet[3602]: I0912 17:11:24.537744 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sdd6\" (UniqueName: \"kubernetes.io/projected/e2fad767-5bdb-4cdd-95b1-9b4c25a4c939-kube-api-access-4sdd6\") pod \"calico-apiserver-5dcf8cdb5c-s49l2\" (UID: \"e2fad767-5bdb-4cdd-95b1-9b4c25a4c939\") " pod="calico-apiserver/calico-apiserver-5dcf8cdb5c-s49l2" Sep 12 17:11:24.639454 kubelet[3602]: I0912 17:11:24.638518 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ec98a9d-a44d-4dce-aecc-5307cd4bde54-config\") pod \"goldmane-7988f88666-zhfpt\" (UID: \"8ec98a9d-a44d-4dce-aecc-5307cd4bde54\") " pod="calico-system/goldmane-7988f88666-zhfpt" Sep 12 17:11:24.639454 kubelet[3602]: I0912 17:11:24.638596 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e325245f-abb8-483c-98d1-72adafe39d13-whisker-ca-bundle\") pod \"whisker-558b875cc4-928jw\" (UID: \"e325245f-abb8-483c-98d1-72adafe39d13\") " pod="calico-system/whisker-558b875cc4-928jw" Sep 12 17:11:24.639454 kubelet[3602]: I0912 17:11:24.638644 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/248d5f01-15dc-4b45-9fde-eec5e30019c2-tigera-ca-bundle\") pod \"calico-kube-controllers-5f8cc79964-sfrvv\" (UID: \"248d5f01-15dc-4b45-9fde-eec5e30019c2\") " pod="calico-system/calico-kube-controllers-5f8cc79964-sfrvv" Sep 12 17:11:24.639454 kubelet[3602]: I0912 17:11:24.638711 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7wq6\" (UniqueName: \"kubernetes.io/projected/bc9ee936-954c-4af7-aedd-76c2de2ef89a-kube-api-access-t7wq6\") pod \"calico-apiserver-7964ddc67d-v8tpq\" (UID: \"bc9ee936-954c-4af7-aedd-76c2de2ef89a\") " pod="calico-apiserver/calico-apiserver-7964ddc67d-v8tpq" Sep 12 17:11:24.641231 kubelet[3602]: I0912 17:11:24.640893 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3783ba1d-f77e-47c0-89fd-9efbe6435e26-config-volume\") pod \"coredns-7c65d6cfc9-h298k\" (UID: \"3783ba1d-f77e-47c0-89fd-9efbe6435e26\") " pod="kube-system/coredns-7c65d6cfc9-h298k" Sep 12 17:11:24.641231 kubelet[3602]: I0912 17:11:24.640979 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmrnn\" (UniqueName: \"kubernetes.io/projected/e325245f-abb8-483c-98d1-72adafe39d13-kube-api-access-xmrnn\") pod \"whisker-558b875cc4-928jw\" (UID: \"e325245f-abb8-483c-98d1-72adafe39d13\") " pod="calico-system/whisker-558b875cc4-928jw" Sep 12 17:11:24.641231 kubelet[3602]: I0912 17:11:24.641024 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/8ec98a9d-a44d-4dce-aecc-5307cd4bde54-goldmane-key-pair\") pod \"goldmane-7988f88666-zhfpt\" (UID: \"8ec98a9d-a44d-4dce-aecc-5307cd4bde54\") " pod="calico-system/goldmane-7988f88666-zhfpt" Sep 12 17:11:24.641231 kubelet[3602]: I0912 17:11:24.641067 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ec98a9d-a44d-4dce-aecc-5307cd4bde54-goldmane-ca-bundle\") pod \"goldmane-7988f88666-zhfpt\" (UID: \"8ec98a9d-a44d-4dce-aecc-5307cd4bde54\") " pod="calico-system/goldmane-7988f88666-zhfpt" Sep 12 17:11:24.641231 kubelet[3602]: I0912 17:11:24.641115 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b5r4\" (UniqueName: \"kubernetes.io/projected/92e449f8-4616-42f4-87f1-0de4ba32c288-kube-api-access-7b5r4\") pod \"coredns-7c65d6cfc9-h2wq2\" (UID: \"92e449f8-4616-42f4-87f1-0de4ba32c288\") " pod="kube-system/coredns-7c65d6cfc9-h2wq2" Sep 12 17:11:24.641590 kubelet[3602]: I0912 17:11:24.641152 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e325245f-abb8-483c-98d1-72adafe39d13-whisker-backend-key-pair\") pod \"whisker-558b875cc4-928jw\" (UID: \"e325245f-abb8-483c-98d1-72adafe39d13\") " pod="calico-system/whisker-558b875cc4-928jw" Sep 12 17:11:24.641590 kubelet[3602]: I0912 17:11:24.641188 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/831c8b21-3a30-4e09-bfba-cb39dd0935d8-calico-apiserver-certs\") pod \"calico-apiserver-7964ddc67d-2fn9n\" (UID: \"831c8b21-3a30-4e09-bfba-cb39dd0935d8\") " pod="calico-apiserver/calico-apiserver-7964ddc67d-2fn9n" Sep 12 17:11:24.641590 kubelet[3602]: I0912 17:11:24.641228 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ttmn\" (UniqueName: \"kubernetes.io/projected/248d5f01-15dc-4b45-9fde-eec5e30019c2-kube-api-access-4ttmn\") pod \"calico-kube-controllers-5f8cc79964-sfrvv\" (UID: \"248d5f01-15dc-4b45-9fde-eec5e30019c2\") " pod="calico-system/calico-kube-controllers-5f8cc79964-sfrvv" Sep 12 17:11:24.641590 kubelet[3602]: I0912 17:11:24.641268 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwxlz\" (UniqueName: \"kubernetes.io/projected/831c8b21-3a30-4e09-bfba-cb39dd0935d8-kube-api-access-wwxlz\") pod \"calico-apiserver-7964ddc67d-2fn9n\" (UID: \"831c8b21-3a30-4e09-bfba-cb39dd0935d8\") " pod="calico-apiserver/calico-apiserver-7964ddc67d-2fn9n" Sep 12 17:11:24.641590 kubelet[3602]: I0912 17:11:24.641310 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcngz\" (UniqueName: \"kubernetes.io/projected/8ec98a9d-a44d-4dce-aecc-5307cd4bde54-kube-api-access-rcngz\") pod \"goldmane-7988f88666-zhfpt\" (UID: \"8ec98a9d-a44d-4dce-aecc-5307cd4bde54\") " pod="calico-system/goldmane-7988f88666-zhfpt" Sep 12 17:11:24.641894 kubelet[3602]: I0912 17:11:24.641345 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgmcr\" (UniqueName: \"kubernetes.io/projected/3783ba1d-f77e-47c0-89fd-9efbe6435e26-kube-api-access-dgmcr\") pod \"coredns-7c65d6cfc9-h298k\" (UID: 
\"3783ba1d-f77e-47c0-89fd-9efbe6435e26\") " pod="kube-system/coredns-7c65d6cfc9-h298k" Sep 12 17:11:24.641894 kubelet[3602]: I0912 17:11:24.641431 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bc9ee936-954c-4af7-aedd-76c2de2ef89a-calico-apiserver-certs\") pod \"calico-apiserver-7964ddc67d-v8tpq\" (UID: \"bc9ee936-954c-4af7-aedd-76c2de2ef89a\") " pod="calico-apiserver/calico-apiserver-7964ddc67d-v8tpq" Sep 12 17:11:24.641894 kubelet[3602]: I0912 17:11:24.641505 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92e449f8-4616-42f4-87f1-0de4ba32c288-config-volume\") pod \"coredns-7c65d6cfc9-h2wq2\" (UID: \"92e449f8-4616-42f4-87f1-0de4ba32c288\") " pod="kube-system/coredns-7c65d6cfc9-h2wq2" Sep 12 17:11:24.881588 containerd[2128]: time="2025-09-12T17:11:24.880162704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcf8cdb5c-s49l2,Uid:e2fad767-5bdb-4cdd-95b1-9b4c25a4c939,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:11:24.881588 containerd[2128]: time="2025-09-12T17:11:24.880911804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f8cc79964-sfrvv,Uid:248d5f01-15dc-4b45-9fde-eec5e30019c2,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:24.890992 containerd[2128]: time="2025-09-12T17:11:24.890624532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7964ddc67d-2fn9n,Uid:831c8b21-3a30-4e09-bfba-cb39dd0935d8,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:11:24.893232 containerd[2128]: time="2025-09-12T17:11:24.892935768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-558b875cc4-928jw,Uid:e325245f-abb8-483c-98d1-72adafe39d13,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:24.915892 containerd[2128]: time="2025-09-12T17:11:24.915816396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h2wq2,Uid:92e449f8-4616-42f4-87f1-0de4ba32c288,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:24.916237 containerd[2128]: time="2025-09-12T17:11:24.916195740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h298k,Uid:3783ba1d-f77e-47c0-89fd-9efbe6435e26,Namespace:kube-system,Attempt:0,}" Sep 12 17:11:24.916503 containerd[2128]: time="2025-09-12T17:11:24.916465417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7964ddc67d-v8tpq,Uid:bc9ee936-954c-4af7-aedd-76c2de2ef89a,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:11:24.916787 containerd[2128]: time="2025-09-12T17:11:24.916749277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zhfpt,Uid:8ec98a9d-a44d-4dce-aecc-5307cd4bde54,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:24.929785 containerd[2128]: time="2025-09-12T17:11:24.929568157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2fhht,Uid:99059a22-90fb-418d-a2c0-7e943cbdb29d,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:25.054226 containerd[2128]: time="2025-09-12T17:11:25.054150825Z" level=info msg="shim disconnected" id=ded6b76fb34456ec084bef9a9ca0cc8c501def8b47f57117897c207a13fd030c namespace=k8s.io Sep 12 17:11:25.054838 containerd[2128]: time="2025-09-12T17:11:25.054498081Z" level=warning msg="cleaning up after shim disconnected" id=ded6b76fb34456ec084bef9a9ca0cc8c501def8b47f57117897c207a13fd030c namespace=k8s.io 
Sep 12 17:11:25.054838 containerd[2128]: time="2025-09-12T17:11:25.054529545Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:25.238571 containerd[2128]: time="2025-09-12T17:11:25.238165990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:11:25.635077 containerd[2128]: time="2025-09-12T17:11:25.634960404Z" level=error msg="Failed to destroy network for sandbox \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.647358 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b-shm.mount: Deactivated successfully. Sep 12 17:11:25.653015 containerd[2128]: time="2025-09-12T17:11:25.652626492Z" level=error msg="encountered an error cleaning up failed sandbox \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.653015 containerd[2128]: time="2025-09-12T17:11:25.652864848Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h2wq2,Uid:92e449f8-4616-42f4-87f1-0de4ba32c288,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.660855 kubelet[3602]: E0912 17:11:25.659402 3602 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.660855 kubelet[3602]: E0912 17:11:25.659673 3602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-h2wq2" Sep 12 17:11:25.660855 kubelet[3602]: E0912 17:11:25.659734 3602 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-h2wq2" Sep 12 17:11:25.661634 kubelet[3602]: E0912 17:11:25.659818 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-h2wq2_kube-system(92e449f8-4616-42f4-87f1-0de4ba32c288)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"coredns-7c65d6cfc9-h2wq2_kube-system(92e449f8-4616-42f4-87f1-0de4ba32c288)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-h2wq2" podUID="92e449f8-4616-42f4-87f1-0de4ba32c288" Sep 12 17:11:25.738940 containerd[2128]: time="2025-09-12T17:11:25.737878933Z" level=error msg="Failed to destroy network for sandbox \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.746113 containerd[2128]: time="2025-09-12T17:11:25.745061809Z" level=error msg="encountered an error cleaning up failed sandbox \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.745580 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8-shm.mount: Deactivated successfully. Sep 12 17:11:25.750674 containerd[2128]: time="2025-09-12T17:11:25.750585541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-558b875cc4-928jw,Uid:e325245f-abb8-483c-98d1-72adafe39d13,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.751207 kubelet[3602]: E0912 17:11:25.751072 3602 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.751207 kubelet[3602]: E0912 17:11:25.751171 3602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-558b875cc4-928jw" Sep 12 17:11:25.751408 kubelet[3602]: E0912 17:11:25.751205 3602 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-558b875cc4-928jw" Sep 12 17:11:25.751408 kubelet[3602]: E0912 
17:11:25.751296 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-558b875cc4-928jw_calico-system(e325245f-abb8-483c-98d1-72adafe39d13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-558b875cc4-928jw_calico-system(e325245f-abb8-483c-98d1-72adafe39d13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-558b875cc4-928jw" podUID="e325245f-abb8-483c-98d1-72adafe39d13" Sep 12 17:11:25.755982 containerd[2128]: time="2025-09-12T17:11:25.755837077Z" level=error msg="Failed to destroy network for sandbox \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.764781 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5-shm.mount: Deactivated successfully. Sep 12 17:11:25.769379 containerd[2128]: time="2025-09-12T17:11:25.769129333Z" level=error msg="encountered an error cleaning up failed sandbox \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.769379 containerd[2128]: time="2025-09-12T17:11:25.769227469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h298k,Uid:3783ba1d-f77e-47c0-89fd-9efbe6435e26,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.769616 kubelet[3602]: E0912 17:11:25.769531 3602 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.769888 kubelet[3602]: E0912 17:11:25.769712 3602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-h298k" Sep 12 17:11:25.769888 kubelet[3602]: E0912 17:11:25.769769 3602 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-h298k" Sep 12 17:11:25.770117 kubelet[3602]: E0912 17:11:25.769872 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-h298k_kube-system(3783ba1d-f77e-47c0-89fd-9efbe6435e26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-h298k_kube-system(3783ba1d-f77e-47c0-89fd-9efbe6435e26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-h298k" podUID="3783ba1d-f77e-47c0-89fd-9efbe6435e26" Sep 12 17:11:25.823642 containerd[2128]: time="2025-09-12T17:11:25.822887713Z" level=error msg="Failed to destroy network for sandbox \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.829198 containerd[2128]: time="2025-09-12T17:11:25.828931201Z" level=error msg="encountered an error cleaning up failed sandbox \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.829198 containerd[2128]: time="2025-09-12T17:11:25.829026781Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7964ddc67d-2fn9n,Uid:831c8b21-3a30-4e09-bfba-cb39dd0935d8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.829975 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e-shm.mount: Deactivated successfully. 
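Every RunPodSandbox failure in this stretch has the same root cause: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file that calico/node writes once it is running. The flexvol-driver and install-cni init containers have finished, but the node image itself is still being pulled (see the PullImage "ghcr.io/flatcar/calico/node:v3.30.3" line above). A sketch mirroring the check implied by the error text (illustrative, not Calico's actual source):

```go
package main

import (
	"fmt"
	"os"
)

// nodename mirrors the check implied by the error text above: the Calico
// CNI plugin reads /var/lib/calico/nodename, which calico/node writes on
// startup; a missing file fails every sandbox ADD/DEL until then.
func nodename() (string, error) {
	const path = "/var/lib/calico/nodename"
	b, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", path, err)
	}
	return string(b), nil
}

func main() {
	fmt.Println(nodename())
}
```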
Sep 12 17:11:25.830844 kubelet[3602]: E0912 17:11:25.830625 3602 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.830844 kubelet[3602]: E0912 17:11:25.830737 3602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7964ddc67d-2fn9n" Sep 12 17:11:25.830844 kubelet[3602]: E0912 17:11:25.830771 3602 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7964ddc67d-2fn9n" Sep 12 17:11:25.832905 kubelet[3602]: E0912 17:11:25.830841 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7964ddc67d-2fn9n_calico-apiserver(831c8b21-3a30-4e09-bfba-cb39dd0935d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7964ddc67d-2fn9n_calico-apiserver(831c8b21-3a30-4e09-bfba-cb39dd0935d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7964ddc67d-2fn9n" podUID="831c8b21-3a30-4e09-bfba-cb39dd0935d8" Sep 12 17:11:25.834570 containerd[2128]: time="2025-09-12T17:11:25.834507421Z" level=error msg="Failed to destroy network for sandbox \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.837346 containerd[2128]: time="2025-09-12T17:11:25.837247429Z" level=error msg="Failed to destroy network for sandbox \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.838418 containerd[2128]: time="2025-09-12T17:11:25.838247173Z" level=error msg="encountered an error cleaning up failed sandbox \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.838418 
containerd[2128]: time="2025-09-12T17:11:25.838344133Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f8cc79964-sfrvv,Uid:248d5f01-15dc-4b45-9fde-eec5e30019c2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.839049 kubelet[3602]: E0912 17:11:25.838705 3602 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.839049 kubelet[3602]: E0912 17:11:25.838786 3602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f8cc79964-sfrvv" Sep 12 17:11:25.839049 kubelet[3602]: E0912 17:11:25.838849 3602 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f8cc79964-sfrvv" Sep 12 17:11:25.840740 kubelet[3602]: E0912 17:11:25.838946 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f8cc79964-sfrvv_calico-system(248d5f01-15dc-4b45-9fde-eec5e30019c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f8cc79964-sfrvv_calico-system(248d5f01-15dc-4b45-9fde-eec5e30019c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f8cc79964-sfrvv" podUID="248d5f01-15dc-4b45-9fde-eec5e30019c2" Sep 12 17:11:25.840976 containerd[2128]: time="2025-09-12T17:11:25.840551365Z" level=error msg="Failed to destroy network for sandbox \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.842755 containerd[2128]: time="2025-09-12T17:11:25.842129257Z" level=error msg="encountered an error cleaning up failed sandbox \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.842755 containerd[2128]: time="2025-09-12T17:11:25.842231317Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zhfpt,Uid:8ec98a9d-a44d-4dce-aecc-5307cd4bde54,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.842755 containerd[2128]: time="2025-09-12T17:11:25.842610157Z" level=error msg="encountered an error cleaning up failed sandbox \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.843554 kubelet[3602]: E0912 17:11:25.843496 3602 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.843829 containerd[2128]: time="2025-09-12T17:11:25.842666485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcf8cdb5c-s49l2,Uid:e2fad767-5bdb-4cdd-95b1-9b4c25a4c939,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.844553 kubelet[3602]: E0912 17:11:25.844123 3602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-zhfpt" Sep 12 17:11:25.844553 kubelet[3602]: E0912 17:11:25.844170 3602 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-zhfpt" Sep 12 17:11:25.844553 kubelet[3602]: E0912 17:11:25.844258 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-zhfpt_calico-system(8ec98a9d-a44d-4dce-aecc-5307cd4bde54)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-zhfpt_calico-system(8ec98a9d-a44d-4dce-aecc-5307cd4bde54)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-zhfpt" podUID="8ec98a9d-a44d-4dce-aecc-5307cd4bde54" Sep 12 17:11:25.846818 kubelet[3602]: E0912 17:11:25.845851 3602 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.846818 kubelet[3602]: E0912 17:11:25.845923 3602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dcf8cdb5c-s49l2" Sep 12 17:11:25.846818 kubelet[3602]: E0912 17:11:25.845954 3602 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5dcf8cdb5c-s49l2" Sep 12 17:11:25.847354 kubelet[3602]: E0912 17:11:25.846012 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5dcf8cdb5c-s49l2_calico-apiserver(e2fad767-5bdb-4cdd-95b1-9b4c25a4c939)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5dcf8cdb5c-s49l2_calico-apiserver(e2fad767-5bdb-4cdd-95b1-9b4c25a4c939)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dcf8cdb5c-s49l2" podUID="e2fad767-5bdb-4cdd-95b1-9b4c25a4c939" Sep 12 17:11:25.859607 containerd[2128]: time="2025-09-12T17:11:25.859439197Z" level=error msg="Failed to destroy network for sandbox \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.860768 containerd[2128]: time="2025-09-12T17:11:25.860161561Z" level=error msg="encountered an error cleaning up failed sandbox \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.860768 containerd[2128]: 
time="2025-09-12T17:11:25.860239057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2fhht,Uid:99059a22-90fb-418d-a2c0-7e943cbdb29d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.861360 kubelet[3602]: E0912 17:11:25.860522 3602 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.861360 kubelet[3602]: E0912 17:11:25.860603 3602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2fhht" Sep 12 17:11:25.861360 kubelet[3602]: E0912 17:11:25.860641 3602 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2fhht" Sep 12 17:11:25.862288 kubelet[3602]: E0912 17:11:25.860741 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2fhht_calico-system(99059a22-90fb-418d-a2c0-7e943cbdb29d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2fhht_calico-system(99059a22-90fb-418d-a2c0-7e943cbdb29d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2fhht" podUID="99059a22-90fb-418d-a2c0-7e943cbdb29d" Sep 12 17:11:25.873349 containerd[2128]: time="2025-09-12T17:11:25.873242569Z" level=error msg="Failed to destroy network for sandbox \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.873914 containerd[2128]: time="2025-09-12T17:11:25.873847441Z" level=error msg="encountered an error cleaning up failed sandbox \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Sep 12 17:11:25.874044 containerd[2128]: time="2025-09-12T17:11:25.873941521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7964ddc67d-v8tpq,Uid:bc9ee936-954c-4af7-aedd-76c2de2ef89a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.874813 kubelet[3602]: E0912 17:11:25.874303 3602 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:25.874813 kubelet[3602]: E0912 17:11:25.874380 3602 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7964ddc67d-v8tpq" Sep 12 17:11:25.874813 kubelet[3602]: E0912 17:11:25.874412 3602 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7964ddc67d-v8tpq" Sep 12 17:11:25.875029 kubelet[3602]: E0912 17:11:25.874497 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7964ddc67d-v8tpq_calico-apiserver(bc9ee936-954c-4af7-aedd-76c2de2ef89a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7964ddc67d-v8tpq_calico-apiserver(bc9ee936-954c-4af7-aedd-76c2de2ef89a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7964ddc67d-v8tpq" podUID="bc9ee936-954c-4af7-aedd-76c2de2ef89a" Sep 12 17:11:26.232558 kubelet[3602]: I0912 17:11:26.232522 3602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Sep 12 17:11:26.236063 kubelet[3602]: I0912 17:11:26.236019 3602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Sep 12 17:11:26.238540 containerd[2128]: time="2025-09-12T17:11:26.236993279Z" level=info msg="StopPodSandbox for \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\"" Sep 12 17:11:26.238540 containerd[2128]: time="2025-09-12T17:11:26.237069611Z" level=info 
msg="StopPodSandbox for \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\"" Sep 12 17:11:26.238540 containerd[2128]: time="2025-09-12T17:11:26.237289907Z" level=info msg="Ensure that sandbox 9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e in task-service has been cleanup successfully" Sep 12 17:11:26.239472 containerd[2128]: time="2025-09-12T17:11:26.237298763Z" level=info msg="Ensure that sandbox a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283 in task-service has been cleanup successfully" Sep 12 17:11:26.244469 kubelet[3602]: I0912 17:11:26.244436 3602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Sep 12 17:11:26.246955 containerd[2128]: time="2025-09-12T17:11:26.246374783Z" level=info msg="StopPodSandbox for \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\"" Sep 12 17:11:26.250832 containerd[2128]: time="2025-09-12T17:11:26.250208951Z" level=info msg="Ensure that sandbox 8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e in task-service has been cleanup successfully" Sep 12 17:11:26.256604 kubelet[3602]: I0912 17:11:26.256510 3602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Sep 12 17:11:26.258520 containerd[2128]: time="2025-09-12T17:11:26.258437195Z" level=info msg="StopPodSandbox for \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\"" Sep 12 17:11:26.259273 containerd[2128]: time="2025-09-12T17:11:26.259123451Z" level=info msg="Ensure that sandbox ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c in task-service has been cleanup successfully" Sep 12 17:11:26.264866 kubelet[3602]: I0912 17:11:26.263942 3602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Sep 12 17:11:26.266157 containerd[2128]: time="2025-09-12T17:11:26.265672343Z" level=info msg="StopPodSandbox for \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\"" Sep 12 17:11:26.268241 containerd[2128]: time="2025-09-12T17:11:26.268183715Z" level=info msg="Ensure that sandbox 6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32 in task-service has been cleanup successfully" Sep 12 17:11:26.277312 kubelet[3602]: I0912 17:11:26.277261 3602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Sep 12 17:11:26.283607 containerd[2128]: time="2025-09-12T17:11:26.283551695Z" level=info msg="StopPodSandbox for \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\"" Sep 12 17:11:26.287754 kubelet[3602]: I0912 17:11:26.287461 3602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Sep 12 17:11:26.288484 containerd[2128]: time="2025-09-12T17:11:26.288123599Z" level=info msg="Ensure that sandbox f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5 in task-service has been cleanup successfully" Sep 12 17:11:26.294721 containerd[2128]: time="2025-09-12T17:11:26.293942507Z" level=info msg="StopPodSandbox for \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\"" Sep 12 17:11:26.294721 containerd[2128]: time="2025-09-12T17:11:26.294244667Z" level=info msg="Ensure 
that sandbox 43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc in task-service has been cleanup successfully" Sep 12 17:11:26.304327 kubelet[3602]: I0912 17:11:26.304271 3602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Sep 12 17:11:26.306991 containerd[2128]: time="2025-09-12T17:11:26.306549707Z" level=info msg="StopPodSandbox for \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\"" Sep 12 17:11:26.312996 containerd[2128]: time="2025-09-12T17:11:26.312923447Z" level=info msg="Ensure that sandbox 7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8 in task-service has been cleanup successfully" Sep 12 17:11:26.314272 kubelet[3602]: I0912 17:11:26.314214 3602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Sep 12 17:11:26.320923 containerd[2128]: time="2025-09-12T17:11:26.320870855Z" level=info msg="StopPodSandbox for \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\"" Sep 12 17:11:26.326459 containerd[2128]: time="2025-09-12T17:11:26.326265768Z" level=info msg="Ensure that sandbox da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b in task-service has been cleanup successfully" Sep 12 17:11:26.435371 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283-shm.mount: Deactivated successfully. Sep 12 17:11:26.436756 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e-shm.mount: Deactivated successfully. Sep 12 17:11:26.436993 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c-shm.mount: Deactivated successfully. Sep 12 17:11:26.437220 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32-shm.mount: Deactivated successfully. Sep 12 17:11:26.437464 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc-shm.mount: Deactivated successfully. 
Sep 12 17:11:26.489582 containerd[2128]: time="2025-09-12T17:11:26.489392688Z" level=error msg="StopPodSandbox for \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\" failed" error="failed to destroy network for sandbox \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:26.489800 kubelet[3602]: E0912 17:11:26.489738 3602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Sep 12 17:11:26.490010 kubelet[3602]: E0912 17:11:26.489824 3602 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b"} Sep 12 17:11:26.490010 kubelet[3602]: E0912 17:11:26.489925 3602 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92e449f8-4616-42f4-87f1-0de4ba32c288\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:26.490010 kubelet[3602]: E0912 17:11:26.489966 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92e449f8-4616-42f4-87f1-0de4ba32c288\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-h2wq2" podUID="92e449f8-4616-42f4-87f1-0de4ba32c288" Sep 12 17:11:26.503249 containerd[2128]: time="2025-09-12T17:11:26.502874040Z" level=error msg="StopPodSandbox for \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\" failed" error="failed to destroy network for sandbox \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:26.503382 kubelet[3602]: E0912 17:11:26.503218 3602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Sep 12 17:11:26.503382 kubelet[3602]: E0912 17:11:26.503297 3602 kuberuntime_manager.go:1479] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc"} Sep 12 17:11:26.503382 kubelet[3602]: E0912 17:11:26.503355 3602 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"248d5f01-15dc-4b45-9fde-eec5e30019c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:26.503646 kubelet[3602]: E0912 17:11:26.503411 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"248d5f01-15dc-4b45-9fde-eec5e30019c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f8cc79964-sfrvv" podUID="248d5f01-15dc-4b45-9fde-eec5e30019c2" Sep 12 17:11:26.524725 containerd[2128]: time="2025-09-12T17:11:26.524415372Z" level=error msg="StopPodSandbox for \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\" failed" error="failed to destroy network for sandbox \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:26.524885 kubelet[3602]: E0912 17:11:26.524790 3602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Sep 12 17:11:26.525166 kubelet[3602]: E0912 17:11:26.524970 3602 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e"} Sep 12 17:11:26.526735 kubelet[3602]: E0912 17:11:26.525105 3602 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"831c8b21-3a30-4e09-bfba-cb39dd0935d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:26.526735 kubelet[3602]: E0912 17:11:26.525275 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"831c8b21-3a30-4e09-bfba-cb39dd0935d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7964ddc67d-2fn9n" podUID="831c8b21-3a30-4e09-bfba-cb39dd0935d8" Sep 12 17:11:26.558038 containerd[2128]: time="2025-09-12T17:11:26.557971717Z" level=error msg="StopPodSandbox for \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\" failed" error="failed to destroy network for sandbox \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:26.558639 kubelet[3602]: E0912 17:11:26.558563 3602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Sep 12 17:11:26.558811 kubelet[3602]: E0912 17:11:26.558715 3602 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c"} Sep 12 17:11:26.558871 kubelet[3602]: E0912 17:11:26.558775 3602 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8ec98a9d-a44d-4dce-aecc-5307cd4bde54\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:26.559013 kubelet[3602]: E0912 17:11:26.558848 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8ec98a9d-a44d-4dce-aecc-5307cd4bde54\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-zhfpt" podUID="8ec98a9d-a44d-4dce-aecc-5307cd4bde54" Sep 12 17:11:26.570181 containerd[2128]: time="2025-09-12T17:11:26.570077341Z" level=error msg="StopPodSandbox for \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\" failed" error="failed to destroy network for sandbox \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:26.570764 kubelet[3602]: E0912 17:11:26.570459 3602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Sep 12 17:11:26.570866 kubelet[3602]: E0912 17:11:26.570732 3602 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e"} Sep 12 17:11:26.570866 kubelet[3602]: E0912 17:11:26.570846 3602 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc9ee936-954c-4af7-aedd-76c2de2ef89a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:26.571076 kubelet[3602]: E0912 17:11:26.570912 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc9ee936-954c-4af7-aedd-76c2de2ef89a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7964ddc67d-v8tpq" podUID="bc9ee936-954c-4af7-aedd-76c2de2ef89a" Sep 12 17:11:26.576061 containerd[2128]: time="2025-09-12T17:11:26.575980597Z" level=error msg="StopPodSandbox for \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\" failed" error="failed to destroy network for sandbox \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:26.576331 kubelet[3602]: E0912 17:11:26.576267 3602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Sep 12 17:11:26.576410 kubelet[3602]: E0912 17:11:26.576344 3602 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283"} Sep 12 17:11:26.576489 kubelet[3602]: E0912 17:11:26.576400 3602 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"99059a22-90fb-418d-a2c0-7e943cbdb29d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:26.576489 kubelet[3602]: E0912 17:11:26.576440 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"99059a22-90fb-418d-a2c0-7e943cbdb29d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2fhht" podUID="99059a22-90fb-418d-a2c0-7e943cbdb29d" Sep 12 17:11:26.583107 containerd[2128]: time="2025-09-12T17:11:26.583027321Z" level=error msg="StopPodSandbox for \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\" failed" error="failed to destroy network for sandbox \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:26.583381 kubelet[3602]: E0912 17:11:26.583318 3602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Sep 12 17:11:26.583609 containerd[2128]: time="2025-09-12T17:11:26.583558465Z" level=error msg="StopPodSandbox for \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\" failed" error="failed to destroy network for sandbox \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:26.583865 kubelet[3602]: E0912 17:11:26.583796 3602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Sep 12 17:11:26.583939 kubelet[3602]: E0912 17:11:26.583881 3602 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5"} Sep 12 17:11:26.584012 kubelet[3602]: E0912 17:11:26.583937 3602 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3783ba1d-f77e-47c0-89fd-9efbe6435e26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:26.584012 kubelet[3602]: E0912 17:11:26.583984 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3783ba1d-f77e-47c0-89fd-9efbe6435e26\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-h298k" podUID="3783ba1d-f77e-47c0-89fd-9efbe6435e26" Sep 12 17:11:26.584438 kubelet[3602]: E0912 17:11:26.584396 3602 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32"} Sep 12 17:11:26.584515 kubelet[3602]: E0912 17:11:26.584491 3602 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e2fad767-5bdb-4cdd-95b1-9b4c25a4c939\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:26.584602 kubelet[3602]: E0912 17:11:26.584532 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e2fad767-5bdb-4cdd-95b1-9b4c25a4c939\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5dcf8cdb5c-s49l2" podUID="e2fad767-5bdb-4cdd-95b1-9b4c25a4c939" Sep 12 17:11:26.591512 containerd[2128]: time="2025-09-12T17:11:26.591428041Z" level=error msg="StopPodSandbox for \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\" failed" error="failed to destroy network for sandbox \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:11:26.592067 kubelet[3602]: E0912 17:11:26.592006 3602 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Sep 12 17:11:26.592207 kubelet[3602]: E0912 17:11:26.592085 3602 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8"} Sep 12 17:11:26.592207 kubelet[3602]: E0912 17:11:26.592148 3602 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e325245f-abb8-483c-98d1-72adafe39d13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:11:26.592207 kubelet[3602]: E0912 17:11:26.592190 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e325245f-abb8-483c-98d1-72adafe39d13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-558b875cc4-928jw" podUID="e325245f-abb8-483c-98d1-72adafe39d13" Sep 12 17:11:31.620932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2155549282.mount: Deactivated successfully. Sep 12 17:11:31.671836 containerd[2128]: time="2025-09-12T17:11:31.671735538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:31.674026 containerd[2128]: time="2025-09-12T17:11:31.673775658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 17:11:31.675032 containerd[2128]: time="2025-09-12T17:11:31.674976462Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:31.680338 containerd[2128]: time="2025-09-12T17:11:31.680252958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:31.681518 containerd[2128]: time="2025-09-12T17:11:31.681459162Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 6.44321684s" Sep 12 17:11:31.681624 containerd[2128]: time="2025-09-12T17:11:31.681520710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 17:11:31.727084 containerd[2128]: time="2025-09-12T17:11:31.726805158Z" level=info msg="CreateContainer within sandbox \"fc7d0dc49ad658ad4a49bfb8e99e500e2079d7d8294f59d0c89655c36264ec55\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:11:31.760740 containerd[2128]: time="2025-09-12T17:11:31.760354782Z" level=info msg="CreateContainer within sandbox \"fc7d0dc49ad658ad4a49bfb8e99e500e2079d7d8294f59d0c89655c36264ec55\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1135d548403014898e4c08a50bef1389f19f50f8df0f3a52527f400bb9a25328\"" Sep 12 17:11:31.763367 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2649116416.mount: Deactivated successfully. 
Sep 12 17:11:31.767610 containerd[2128]: time="2025-09-12T17:11:31.765316747Z" level=info msg="StartContainer for \"1135d548403014898e4c08a50bef1389f19f50f8df0f3a52527f400bb9a25328\"" Sep 12 17:11:31.895505 containerd[2128]: time="2025-09-12T17:11:31.895343935Z" level=info msg="StartContainer for \"1135d548403014898e4c08a50bef1389f19f50f8df0f3a52527f400bb9a25328\" returns successfully" Sep 12 17:11:32.174209 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:11:32.174433 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 12 17:11:32.415635 containerd[2128]: time="2025-09-12T17:11:32.414994278Z" level=info msg="StopPodSandbox for \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\"" Sep 12 17:11:32.961713 kubelet[3602]: I0912 17:11:32.961561 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-262tm" podStartSLOduration=2.171526526 podStartE2EDuration="17.961505984s" podCreationTimestamp="2025-09-12 17:11:15 +0000 UTC" firstStartedPulling="2025-09-12 17:11:15.893365036 +0000 UTC m=+27.202103572" lastFinishedPulling="2025-09-12 17:11:31.683344494 +0000 UTC m=+42.992083030" observedRunningTime="2025-09-12 17:11:32.43404741 +0000 UTC m=+43.742785982" watchObservedRunningTime="2025-09-12 17:11:32.961505984 +0000 UTC m=+44.270244532" Sep 12 17:11:33.059027 containerd[2128]: 2025-09-12 17:11:32.959 [INFO][4814] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Sep 12 17:11:33.059027 containerd[2128]: 2025-09-12 17:11:32.960 [INFO][4814] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" iface="eth0" netns="/var/run/netns/cni-b17c87e6-2163-4c8e-b0cd-124a8c42c157" Sep 12 17:11:33.059027 containerd[2128]: 2025-09-12 17:11:32.960 [INFO][4814] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" iface="eth0" netns="/var/run/netns/cni-b17c87e6-2163-4c8e-b0cd-124a8c42c157" Sep 12 17:11:33.059027 containerd[2128]: 2025-09-12 17:11:32.963 [INFO][4814] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" iface="eth0" netns="/var/run/netns/cni-b17c87e6-2163-4c8e-b0cd-124a8c42c157" Sep 12 17:11:33.059027 containerd[2128]: 2025-09-12 17:11:32.963 [INFO][4814] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Sep 12 17:11:33.059027 containerd[2128]: 2025-09-12 17:11:32.963 [INFO][4814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Sep 12 17:11:33.059027 containerd[2128]: 2025-09-12 17:11:33.032 [INFO][4833] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" HandleID="k8s-pod-network.7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Workload="ip--172--31--22--180-k8s-whisker--558b875cc4--928jw-eth0" Sep 12 17:11:33.059027 containerd[2128]: 2025-09-12 17:11:33.032 [INFO][4833] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:11:33.059027 containerd[2128]: 2025-09-12 17:11:33.032 [INFO][4833] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:33.059027 containerd[2128]: 2025-09-12 17:11:33.049 [WARNING][4833] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" HandleID="k8s-pod-network.7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Workload="ip--172--31--22--180-k8s-whisker--558b875cc4--928jw-eth0" Sep 12 17:11:33.059027 containerd[2128]: 2025-09-12 17:11:33.049 [INFO][4833] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" HandleID="k8s-pod-network.7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Workload="ip--172--31--22--180-k8s-whisker--558b875cc4--928jw-eth0" Sep 12 17:11:33.059027 containerd[2128]: 2025-09-12 17:11:33.051 [INFO][4833] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:33.059027 containerd[2128]: 2025-09-12 17:11:33.056 [INFO][4814] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Sep 12 17:11:33.062972 containerd[2128]: time="2025-09-12T17:11:33.060948893Z" level=info msg="TearDown network for sandbox \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\" successfully" Sep 12 17:11:33.062972 containerd[2128]: time="2025-09-12T17:11:33.061843829Z" level=info msg="StopPodSandbox for \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\" returns successfully" Sep 12 17:11:33.069596 systemd[1]: run-netns-cni\x2db17c87e6\x2d2163\x2d4c8e\x2db0cd\x2d124a8c42c157.mount: Deactivated successfully. Sep 12 17:11:33.113413 kubelet[3602]: I0912 17:11:33.112841 3602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmrnn\" (UniqueName: \"kubernetes.io/projected/e325245f-abb8-483c-98d1-72adafe39d13-kube-api-access-xmrnn\") pod \"e325245f-abb8-483c-98d1-72adafe39d13\" (UID: \"e325245f-abb8-483c-98d1-72adafe39d13\") " Sep 12 17:11:33.113413 kubelet[3602]: I0912 17:11:33.112911 3602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e325245f-abb8-483c-98d1-72adafe39d13-whisker-ca-bundle\") pod \"e325245f-abb8-483c-98d1-72adafe39d13\" (UID: \"e325245f-abb8-483c-98d1-72adafe39d13\") " Sep 12 17:11:33.113413 kubelet[3602]: I0912 17:11:33.112956 3602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e325245f-abb8-483c-98d1-72adafe39d13-whisker-backend-key-pair\") pod \"e325245f-abb8-483c-98d1-72adafe39d13\" (UID: \"e325245f-abb8-483c-98d1-72adafe39d13\") " Sep 12 17:11:33.116535 kubelet[3602]: I0912 17:11:33.116346 3602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e325245f-abb8-483c-98d1-72adafe39d13-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e325245f-abb8-483c-98d1-72adafe39d13" (UID: "e325245f-abb8-483c-98d1-72adafe39d13"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:11:33.120194 kubelet[3602]: I0912 17:11:33.120127 3602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e325245f-abb8-483c-98d1-72adafe39d13-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e325245f-abb8-483c-98d1-72adafe39d13" (UID: "e325245f-abb8-483c-98d1-72adafe39d13"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:11:33.123171 systemd[1]: var-lib-kubelet-pods-e325245f\x2dabb8\x2d483c\x2d98d1\x2d72adafe39d13-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:11:33.127068 kubelet[3602]: I0912 17:11:33.126996 3602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e325245f-abb8-483c-98d1-72adafe39d13-kube-api-access-xmrnn" (OuterVolumeSpecName: "kube-api-access-xmrnn") pod "e325245f-abb8-483c-98d1-72adafe39d13" (UID: "e325245f-abb8-483c-98d1-72adafe39d13"). InnerVolumeSpecName "kube-api-access-xmrnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:11:33.130084 systemd[1]: var-lib-kubelet-pods-e325245f\x2dabb8\x2d483c\x2d98d1\x2d72adafe39d13-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxmrnn.mount: Deactivated successfully. Sep 12 17:11:33.214377 kubelet[3602]: I0912 17:11:33.214227 3602 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e325245f-abb8-483c-98d1-72adafe39d13-whisker-ca-bundle\") on node \"ip-172-31-22-180\" DevicePath \"\"" Sep 12 17:11:33.214377 kubelet[3602]: I0912 17:11:33.214282 3602 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmrnn\" (UniqueName: \"kubernetes.io/projected/e325245f-abb8-483c-98d1-72adafe39d13-kube-api-access-xmrnn\") on node \"ip-172-31-22-180\" DevicePath \"\"" Sep 12 17:11:33.214377 kubelet[3602]: I0912 17:11:33.214308 3602 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e325245f-abb8-483c-98d1-72adafe39d13-whisker-backend-key-pair\") on node \"ip-172-31-22-180\" DevicePath \"\"" Sep 12 17:11:33.358081 kubelet[3602]: I0912 17:11:33.357222 3602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:11:33.517628 kubelet[3602]: I0912 17:11:33.517496 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b4bb5f61-24e1-485f-94d6-5e04a8e8980a-whisker-backend-key-pair\") pod \"whisker-64dfb5d9df-6gnlm\" (UID: \"b4bb5f61-24e1-485f-94d6-5e04a8e8980a\") " pod="calico-system/whisker-64dfb5d9df-6gnlm" Sep 12 17:11:33.518191 kubelet[3602]: I0912 17:11:33.518087 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv4hs\" (UniqueName: \"kubernetes.io/projected/b4bb5f61-24e1-485f-94d6-5e04a8e8980a-kube-api-access-jv4hs\") pod \"whisker-64dfb5d9df-6gnlm\" (UID: \"b4bb5f61-24e1-485f-94d6-5e04a8e8980a\") " pod="calico-system/whisker-64dfb5d9df-6gnlm" Sep 12 17:11:33.518362 kubelet[3602]: I0912 17:11:33.518339 3602 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4bb5f61-24e1-485f-94d6-5e04a8e8980a-whisker-ca-bundle\") pod \"whisker-64dfb5d9df-6gnlm\" (UID: 
\"b4bb5f61-24e1-485f-94d6-5e04a8e8980a\") " pod="calico-system/whisker-64dfb5d9df-6gnlm" Sep 12 17:11:33.763097 containerd[2128]: time="2025-09-12T17:11:33.763044284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64dfb5d9df-6gnlm,Uid:b4bb5f61-24e1-485f-94d6-5e04a8e8980a,Namespace:calico-system,Attempt:0,}" Sep 12 17:11:34.022496 (udev-worker)[4798]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:11:34.029199 systemd-networkd[1689]: cali9eaa6f7787e: Link UP Sep 12 17:11:34.029599 systemd-networkd[1689]: cali9eaa6f7787e: Gained carrier Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.847 [INFO][4848] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.867 [INFO][4848] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--180-k8s-whisker--64dfb5d9df--6gnlm-eth0 whisker-64dfb5d9df- calico-system b4bb5f61-24e1-485f-94d6-5e04a8e8980a 942 0 2025-09-12 17:11:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64dfb5d9df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-22-180 whisker-64dfb5d9df-6gnlm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9eaa6f7787e [] [] }} ContainerID="d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" Namespace="calico-system" Pod="whisker-64dfb5d9df-6gnlm" WorkloadEndpoint="ip--172--31--22--180-k8s-whisker--64dfb5d9df--6gnlm-" Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.867 [INFO][4848] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" Namespace="calico-system" Pod="whisker-64dfb5d9df-6gnlm" WorkloadEndpoint="ip--172--31--22--180-k8s-whisker--64dfb5d9df--6gnlm-eth0" Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.921 [INFO][4857] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" HandleID="k8s-pod-network.d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" Workload="ip--172--31--22--180-k8s-whisker--64dfb5d9df--6gnlm-eth0" Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.921 [INFO][4857] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" HandleID="k8s-pod-network.d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" Workload="ip--172--31--22--180-k8s-whisker--64dfb5d9df--6gnlm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa420), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-180", "pod":"whisker-64dfb5d9df-6gnlm", "timestamp":"2025-09-12 17:11:33.920996841 +0000 UTC"}, Hostname:"ip-172-31-22-180", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.921 [INFO][4857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.921 [INFO][4857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.921 [INFO][4857] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-180' Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.936 [INFO][4857] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" host="ip-172-31-22-180" Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.945 [INFO][4857] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-180" Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.954 [INFO][4857] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.960 [INFO][4857] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.964 [INFO][4857] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.964 [INFO][4857] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" host="ip-172-31-22-180" Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.967 [INFO][4857] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.980 [INFO][4857] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" host="ip-172-31-22-180" Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.996 [INFO][4857] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.129/26] block=192.168.2.128/26 handle="k8s-pod-network.d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" host="ip-172-31-22-180" Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.996 [INFO][4857] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.129/26] handle="k8s-pod-network.d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" host="ip-172-31-22-180" Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.996 [INFO][4857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:11:34.071846 containerd[2128]: 2025-09-12 17:11:33.996 [INFO][4857] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.129/26] IPv6=[] ContainerID="d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" HandleID="k8s-pod-network.d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" Workload="ip--172--31--22--180-k8s-whisker--64dfb5d9df--6gnlm-eth0" Sep 12 17:11:34.076471 containerd[2128]: 2025-09-12 17:11:34.002 [INFO][4848] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" Namespace="calico-system" Pod="whisker-64dfb5d9df-6gnlm" WorkloadEndpoint="ip--172--31--22--180-k8s-whisker--64dfb5d9df--6gnlm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-whisker--64dfb5d9df--6gnlm-eth0", GenerateName:"whisker-64dfb5d9df-", Namespace:"calico-system", SelfLink:"", UID:"b4bb5f61-24e1-485f-94d6-5e04a8e8980a", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64dfb5d9df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"", Pod:"whisker-64dfb5d9df-6gnlm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9eaa6f7787e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:34.076471 containerd[2128]: 2025-09-12 17:11:34.002 [INFO][4848] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.129/32] ContainerID="d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" Namespace="calico-system" Pod="whisker-64dfb5d9df-6gnlm" WorkloadEndpoint="ip--172--31--22--180-k8s-whisker--64dfb5d9df--6gnlm-eth0" Sep 12 17:11:34.076471 containerd[2128]: 2025-09-12 17:11:34.002 [INFO][4848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9eaa6f7787e ContainerID="d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" Namespace="calico-system" Pod="whisker-64dfb5d9df-6gnlm" WorkloadEndpoint="ip--172--31--22--180-k8s-whisker--64dfb5d9df--6gnlm-eth0" Sep 12 17:11:34.076471 containerd[2128]: 2025-09-12 17:11:34.031 [INFO][4848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" Namespace="calico-system" Pod="whisker-64dfb5d9df-6gnlm" WorkloadEndpoint="ip--172--31--22--180-k8s-whisker--64dfb5d9df--6gnlm-eth0" Sep 12 17:11:34.076471 containerd[2128]: 2025-09-12 17:11:34.033 [INFO][4848] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" Namespace="calico-system" Pod="whisker-64dfb5d9df-6gnlm" 
WorkloadEndpoint="ip--172--31--22--180-k8s-whisker--64dfb5d9df--6gnlm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-whisker--64dfb5d9df--6gnlm-eth0", GenerateName:"whisker-64dfb5d9df-", Namespace:"calico-system", SelfLink:"", UID:"b4bb5f61-24e1-485f-94d6-5e04a8e8980a", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64dfb5d9df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f", Pod:"whisker-64dfb5d9df-6gnlm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9eaa6f7787e", MAC:"42:2a:b0:26:8e:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:34.076471 containerd[2128]: 2025-09-12 17:11:34.066 [INFO][4848] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f" Namespace="calico-system" Pod="whisker-64dfb5d9df-6gnlm" WorkloadEndpoint="ip--172--31--22--180-k8s-whisker--64dfb5d9df--6gnlm-eth0" Sep 12 17:11:34.184509 containerd[2128]: time="2025-09-12T17:11:34.178597387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:34.184509 containerd[2128]: time="2025-09-12T17:11:34.178732399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:34.184509 containerd[2128]: time="2025-09-12T17:11:34.178771231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:34.184509 containerd[2128]: time="2025-09-12T17:11:34.179181955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:34.434141 containerd[2128]: time="2025-09-12T17:11:34.433976132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64dfb5d9df-6gnlm,Uid:b4bb5f61-24e1-485f-94d6-5e04a8e8980a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f\"" Sep 12 17:11:34.442775 containerd[2128]: time="2025-09-12T17:11:34.442668596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:11:34.925328 kubelet[3602]: I0912 17:11:34.924991 3602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e325245f-abb8-483c-98d1-72adafe39d13" path="/var/lib/kubelet/pods/e325245f-abb8-483c-98d1-72adafe39d13/volumes" Sep 12 17:11:35.285008 systemd-networkd[1689]: cali9eaa6f7787e: Gained IPv6LL Sep 12 17:11:36.265109 containerd[2128]: time="2025-09-12T17:11:36.265015137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:36.267161 containerd[2128]: time="2025-09-12T17:11:36.266868093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 12 17:11:36.269714 containerd[2128]: time="2025-09-12T17:11:36.269035533Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:36.274205 containerd[2128]: time="2025-09-12T17:11:36.274155321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:36.275681 containerd[2128]: time="2025-09-12T17:11:36.275621829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.832840649s" Sep 12 17:11:36.275895 containerd[2128]: time="2025-09-12T17:11:36.275679069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 17:11:36.282601 containerd[2128]: time="2025-09-12T17:11:36.282550809Z" level=info msg="CreateContainer within sandbox \"d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:11:36.313584 containerd[2128]: time="2025-09-12T17:11:36.313524513Z" level=info msg="CreateContainer within sandbox \"d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ce2fc0648bcf5b64315fe815dc5d2eefc0ffe67e17b61e7189c988a011cb4ce1\"" Sep 12 17:11:36.316605 containerd[2128]: time="2025-09-12T17:11:36.314577897Z" level=info msg="StartContainer for \"ce2fc0648bcf5b64315fe815dc5d2eefc0ffe67e17b61e7189c988a011cb4ce1\"" Sep 12 17:11:36.330891 kubelet[3602]: I0912 17:11:36.330846 3602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:11:36.511008 containerd[2128]: time="2025-09-12T17:11:36.510953122Z" level=info msg="StartContainer for 
\"ce2fc0648bcf5b64315fe815dc5d2eefc0ffe67e17b61e7189c988a011cb4ce1\" returns successfully" Sep 12 17:11:36.517226 containerd[2128]: time="2025-09-12T17:11:36.517096738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:11:37.600334 ntpd[2090]: Listen normally on 6 cali9eaa6f7787e [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 17:11:37.601237 ntpd[2090]: 12 Sep 17:11:37 ntpd[2090]: Listen normally on 6 cali9eaa6f7787e [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 17:11:37.692835 kubelet[3602]: I0912 17:11:37.692062 3602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:11:37.922358 containerd[2128]: time="2025-09-12T17:11:37.922285993Z" level=info msg="StopPodSandbox for \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\"" Sep 12 17:11:37.925025 containerd[2128]: time="2025-09-12T17:11:37.922344385Z" level=info msg="StopPodSandbox for \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\"" Sep 12 17:11:38.170763 kernel: bpftool[5197]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:11:38.212435 containerd[2128]: 2025-09-12 17:11:38.055 [INFO][5159] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Sep 12 17:11:38.212435 containerd[2128]: 2025-09-12 17:11:38.056 [INFO][5159] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" iface="eth0" netns="/var/run/netns/cni-d9c8b561-03b2-7d46-9412-db859e65468a" Sep 12 17:11:38.212435 containerd[2128]: 2025-09-12 17:11:38.059 [INFO][5159] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" iface="eth0" netns="/var/run/netns/cni-d9c8b561-03b2-7d46-9412-db859e65468a" Sep 12 17:11:38.212435 containerd[2128]: 2025-09-12 17:11:38.059 [INFO][5159] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" iface="eth0" netns="/var/run/netns/cni-d9c8b561-03b2-7d46-9412-db859e65468a" Sep 12 17:11:38.212435 containerd[2128]: 2025-09-12 17:11:38.060 [INFO][5159] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Sep 12 17:11:38.212435 containerd[2128]: 2025-09-12 17:11:38.060 [INFO][5159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Sep 12 17:11:38.212435 containerd[2128]: 2025-09-12 17:11:38.168 [INFO][5177] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" HandleID="k8s-pod-network.f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" Sep 12 17:11:38.212435 containerd[2128]: 2025-09-12 17:11:38.168 [INFO][5177] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:38.212435 containerd[2128]: 2025-09-12 17:11:38.168 [INFO][5177] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:38.212435 containerd[2128]: 2025-09-12 17:11:38.186 [WARNING][5177] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" HandleID="k8s-pod-network.f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" Sep 12 17:11:38.212435 containerd[2128]: 2025-09-12 17:11:38.186 [INFO][5177] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" HandleID="k8s-pod-network.f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" Sep 12 17:11:38.212435 containerd[2128]: 2025-09-12 17:11:38.190 [INFO][5177] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:38.212435 containerd[2128]: 2025-09-12 17:11:38.194 [INFO][5159] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Sep 12 17:11:38.212435 containerd[2128]: time="2025-09-12T17:11:38.209777639Z" level=info msg="TearDown network for sandbox \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\" successfully" Sep 12 17:11:38.212435 containerd[2128]: time="2025-09-12T17:11:38.209823527Z" level=info msg="StopPodSandbox for \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\" returns successfully" Sep 12 17:11:38.215099 systemd[1]: run-netns-cni\x2dd9c8b561\x2d03b2\x2d7d46\x2d9412\x2ddb859e65468a.mount: Deactivated successfully. Sep 12 17:11:38.227242 containerd[2128]: time="2025-09-12T17:11:38.219131243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h298k,Uid:3783ba1d-f77e-47c0-89fd-9efbe6435e26,Namespace:kube-system,Attempt:1,}" Sep 12 17:11:38.594926 containerd[2128]: 2025-09-12 17:11:38.268 [INFO][5166] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Sep 12 17:11:38.594926 containerd[2128]: 2025-09-12 17:11:38.268 [INFO][5166] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" iface="eth0" netns="/var/run/netns/cni-1f360068-2919-429e-ee5b-f1a4f971e661" Sep 12 17:11:38.594926 containerd[2128]: 2025-09-12 17:11:38.298 [INFO][5166] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" iface="eth0" netns="/var/run/netns/cni-1f360068-2919-429e-ee5b-f1a4f971e661" Sep 12 17:11:38.594926 containerd[2128]: 2025-09-12 17:11:38.315 [INFO][5166] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" iface="eth0" netns="/var/run/netns/cni-1f360068-2919-429e-ee5b-f1a4f971e661" Sep 12 17:11:38.594926 containerd[2128]: 2025-09-12 17:11:38.320 [INFO][5166] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Sep 12 17:11:38.594926 containerd[2128]: 2025-09-12 17:11:38.320 [INFO][5166] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Sep 12 17:11:38.594926 containerd[2128]: 2025-09-12 17:11:38.526 [INFO][5212] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" HandleID="k8s-pod-network.ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Workload="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" Sep 12 17:11:38.594926 containerd[2128]: 2025-09-12 17:11:38.530 [INFO][5212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:38.594926 containerd[2128]: 2025-09-12 17:11:38.530 [INFO][5212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:38.594926 containerd[2128]: 2025-09-12 17:11:38.547 [WARNING][5212] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" HandleID="k8s-pod-network.ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Workload="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" Sep 12 17:11:38.594926 containerd[2128]: 2025-09-12 17:11:38.547 [INFO][5212] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" HandleID="k8s-pod-network.ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Workload="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" Sep 12 17:11:38.594926 containerd[2128]: 2025-09-12 17:11:38.550 [INFO][5212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:38.594926 containerd[2128]: 2025-09-12 17:11:38.578 [INFO][5166] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Sep 12 17:11:38.602390 containerd[2128]: time="2025-09-12T17:11:38.600856908Z" level=info msg="TearDown network for sandbox \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\" successfully" Sep 12 17:11:38.602390 containerd[2128]: time="2025-09-12T17:11:38.600907608Z" level=info msg="StopPodSandbox for \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\" returns successfully" Sep 12 17:11:38.602390 containerd[2128]: time="2025-09-12T17:11:38.601892988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zhfpt,Uid:8ec98a9d-a44d-4dce-aecc-5307cd4bde54,Namespace:calico-system,Attempt:1,}" Sep 12 17:11:38.614709 systemd[1]: run-netns-cni\x2d1f360068\x2d2919\x2d429e\x2dee5b\x2df1a4f971e661.mount: Deactivated successfully. Sep 12 17:11:39.148855 systemd-networkd[1689]: calid2956ff9de2: Link UP Sep 12 17:11:39.165441 systemd-networkd[1689]: calid2956ff9de2: Gained carrier Sep 12 17:11:39.187111 (udev-worker)[5281]: Network interface NamePolicy= disabled on kernel command line. 
Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:38.728 [INFO][5211] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0 coredns-7c65d6cfc9- kube-system 3783ba1d-f77e-47c0-89fd-9efbe6435e26 971 0 2025-09-12 17:10:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-180 coredns-7c65d6cfc9-h298k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid2956ff9de2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h298k" WorkloadEndpoint="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-" Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:38.728 [INFO][5211] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h298k" WorkloadEndpoint="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:38.980 [INFO][5247] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" HandleID="k8s-pod-network.561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:38.982 [INFO][5247] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" HandleID="k8s-pod-network.561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003141d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-180", "pod":"coredns-7c65d6cfc9-h298k", "timestamp":"2025-09-12 17:11:38.98052713 +0000 UTC"}, Hostname:"ip-172-31-22-180", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:38.982 [INFO][5247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:38.982 [INFO][5247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:38.983 [INFO][5247] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-180' Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:39.016 [INFO][5247] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" host="ip-172-31-22-180" Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:39.032 [INFO][5247] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-180" Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:39.050 [INFO][5247] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:39.057 [INFO][5247] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:39.068 [INFO][5247] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:39.068 [INFO][5247] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" host="ip-172-31-22-180" Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:39.076 [INFO][5247] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239 Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:39.093 [INFO][5247] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" host="ip-172-31-22-180" Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:39.122 [INFO][5247] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.130/26] block=192.168.2.128/26 handle="k8s-pod-network.561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" host="ip-172-31-22-180" Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:39.122 [INFO][5247] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.130/26] handle="k8s-pod-network.561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" host="ip-172-31-22-180" Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:39.122 [INFO][5247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:11:39.246765 containerd[2128]: 2025-09-12 17:11:39.122 [INFO][5247] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.130/26] IPv6=[] ContainerID="561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" HandleID="k8s-pod-network.561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" Sep 12 17:11:39.251438 containerd[2128]: 2025-09-12 17:11:39.132 [INFO][5211] cni-plugin/k8s.go 418: Populated endpoint ContainerID="561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h298k" WorkloadEndpoint="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3783ba1d-f77e-47c0-89fd-9efbe6435e26", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"", Pod:"coredns-7c65d6cfc9-h298k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2956ff9de2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:39.251438 containerd[2128]: 2025-09-12 17:11:39.133 [INFO][5211] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.130/32] ContainerID="561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h298k" WorkloadEndpoint="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" Sep 12 17:11:39.251438 containerd[2128]: 2025-09-12 17:11:39.133 [INFO][5211] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2956ff9de2 ContainerID="561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h298k" WorkloadEndpoint="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" Sep 12 17:11:39.251438 containerd[2128]: 2025-09-12 17:11:39.171 [INFO][5211] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h298k" 
WorkloadEndpoint="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" Sep 12 17:11:39.251438 containerd[2128]: 2025-09-12 17:11:39.180 [INFO][5211] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h298k" WorkloadEndpoint="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3783ba1d-f77e-47c0-89fd-9efbe6435e26", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239", Pod:"coredns-7c65d6cfc9-h298k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2956ff9de2", MAC:"be:11:69:f3:83:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:39.251438 containerd[2128]: 2025-09-12 17:11:39.208 [INFO][5211] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h298k" WorkloadEndpoint="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" Sep 12 17:11:39.474854 containerd[2128]: time="2025-09-12T17:11:39.466039081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:39.474854 containerd[2128]: time="2025-09-12T17:11:39.466125565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:39.474854 containerd[2128]: time="2025-09-12T17:11:39.466167457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:39.474854 containerd[2128]: time="2025-09-12T17:11:39.466338457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:39.584448 (udev-worker)[5284]: Network interface NamePolicy= disabled on kernel command line. Sep 12 17:11:39.595463 systemd-networkd[1689]: calicae9bbab4a1: Link UP Sep 12 17:11:39.600026 systemd-networkd[1689]: calicae9bbab4a1: Gained carrier Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.013 [INFO][5236] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0 goldmane-7988f88666- calico-system 8ec98a9d-a44d-4dce-aecc-5307cd4bde54 972 0 2025-09-12 17:11:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-22-180 goldmane-7988f88666-zhfpt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicae9bbab4a1 [] [] }} ContainerID="8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" Namespace="calico-system" Pod="goldmane-7988f88666-zhfpt" WorkloadEndpoint="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-" Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.013 [INFO][5236] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" Namespace="calico-system" Pod="goldmane-7988f88666-zhfpt" WorkloadEndpoint="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.335 [INFO][5274] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" HandleID="k8s-pod-network.8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" Workload="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.336 [INFO][5274] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" HandleID="k8s-pod-network.8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" Workload="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d8b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-180", "pod":"goldmane-7988f88666-zhfpt", "timestamp":"2025-09-12 17:11:39.332919816 +0000 UTC"}, Hostname:"ip-172-31-22-180", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.338 [INFO][5274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.339 [INFO][5274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.340 [INFO][5274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-180' Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.366 [INFO][5274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" host="ip-172-31-22-180" Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.392 [INFO][5274] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-180" Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.405 [INFO][5274] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.411 [INFO][5274] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.417 [INFO][5274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.418 [INFO][5274] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" host="ip-172-31-22-180" Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.421 [INFO][5274] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4 Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.443 [INFO][5274] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" host="ip-172-31-22-180" Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.512 [INFO][5274] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.131/26] block=192.168.2.128/26 handle="k8s-pod-network.8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" host="ip-172-31-22-180" Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.512 [INFO][5274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.131/26] handle="k8s-pod-network.8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" host="ip-172-31-22-180" Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.512 [INFO][5274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:11:39.661748 containerd[2128]: 2025-09-12 17:11:39.512 [INFO][5274] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.131/26] IPv6=[] ContainerID="8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" HandleID="k8s-pod-network.8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" Workload="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" Sep 12 17:11:39.664869 containerd[2128]: 2025-09-12 17:11:39.535 [INFO][5236] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" Namespace="calico-system" Pod="goldmane-7988f88666-zhfpt" WorkloadEndpoint="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"8ec98a9d-a44d-4dce-aecc-5307cd4bde54", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"", Pod:"goldmane-7988f88666-zhfpt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicae9bbab4a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:39.664869 containerd[2128]: 2025-09-12 17:11:39.535 [INFO][5236] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.131/32] ContainerID="8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" Namespace="calico-system" Pod="goldmane-7988f88666-zhfpt" WorkloadEndpoint="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" Sep 12 17:11:39.664869 containerd[2128]: 2025-09-12 17:11:39.535 [INFO][5236] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicae9bbab4a1 ContainerID="8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" Namespace="calico-system" Pod="goldmane-7988f88666-zhfpt" WorkloadEndpoint="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" Sep 12 17:11:39.664869 containerd[2128]: 2025-09-12 17:11:39.603 [INFO][5236] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" Namespace="calico-system" Pod="goldmane-7988f88666-zhfpt" WorkloadEndpoint="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" Sep 12 17:11:39.664869 containerd[2128]: 2025-09-12 17:11:39.606 [INFO][5236] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" Namespace="calico-system" Pod="goldmane-7988f88666-zhfpt" 
WorkloadEndpoint="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"8ec98a9d-a44d-4dce-aecc-5307cd4bde54", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4", Pod:"goldmane-7988f88666-zhfpt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicae9bbab4a1", MAC:"0a:bb:5b:c5:0f:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:39.664869 containerd[2128]: 2025-09-12 17:11:39.633 [INFO][5236] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4" Namespace="calico-system" Pod="goldmane-7988f88666-zhfpt" WorkloadEndpoint="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" Sep 12 17:11:39.907243 containerd[2128]: time="2025-09-12T17:11:39.907060407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h298k,Uid:3783ba1d-f77e-47c0-89fd-9efbe6435e26,Namespace:kube-system,Attempt:1,} returns sandbox id \"561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239\"" Sep 12 17:11:39.919048 containerd[2128]: time="2025-09-12T17:11:39.911089035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:39.919048 containerd[2128]: time="2025-09-12T17:11:39.911195103Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:39.919048 containerd[2128]: time="2025-09-12T17:11:39.911232243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:39.919048 containerd[2128]: time="2025-09-12T17:11:39.911396979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:39.936737 containerd[2128]: time="2025-09-12T17:11:39.935878851Z" level=info msg="StopPodSandbox for \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\"" Sep 12 17:11:39.939836 containerd[2128]: time="2025-09-12T17:11:39.938836227Z" level=info msg="StopPodSandbox for \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\"" Sep 12 17:11:39.951748 containerd[2128]: time="2025-09-12T17:11:39.947189031Z" level=info msg="StopPodSandbox for \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\"" Sep 12 17:11:39.952724 containerd[2128]: time="2025-09-12T17:11:39.940326651Z" level=info msg="StopPodSandbox for \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\"" Sep 12 17:11:39.967759 containerd[2128]: time="2025-09-12T17:11:39.967065459Z" level=info msg="StopPodSandbox for \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\"" Sep 12 17:11:40.049733 containerd[2128]: time="2025-09-12T17:11:40.048989232Z" level=info msg="CreateContainer within sandbox \"561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:11:40.293787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1488692984.mount: Deactivated successfully. Sep 12 17:11:40.341856 systemd-networkd[1689]: vxlan.calico: Link UP Sep 12 17:11:40.341870 systemd-networkd[1689]: vxlan.calico: Gained carrier Sep 12 17:11:40.397715 containerd[2128]: time="2025-09-12T17:11:40.393200989Z" level=info msg="CreateContainer within sandbox \"561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"694732355b76ef324167c99c5634580ba42bbcd7c059299833ee96ec9936f2a6\"" Sep 12 17:11:40.405493 systemd-networkd[1689]: calid2956ff9de2: Gained IPv6LL Sep 12 17:11:40.415864 containerd[2128]: time="2025-09-12T17:11:40.414421345Z" level=info msg="StartContainer for \"694732355b76ef324167c99c5634580ba42bbcd7c059299833ee96ec9936f2a6\"" Sep 12 17:11:40.516464 systemd[1]: Started sshd@7-172.31.22.180:22-147.75.109.163:53640.service - OpenSSH per-connection server daemon (147.75.109.163:53640). Sep 12 17:11:40.788995 systemd-networkd[1689]: calicae9bbab4a1: Gained IPv6LL Sep 12 17:11:40.842235 sshd[5469]: Accepted publickey for core from 147.75.109.163 port 53640 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:11:40.845748 sshd[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:40.865883 systemd-logind[2103]: New session 8 of user core. Sep 12 17:11:40.877451 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:11:41.038408 containerd[2128]: time="2025-09-12T17:11:41.034509973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-zhfpt,Uid:8ec98a9d-a44d-4dce-aecc-5307cd4bde54,Namespace:calico-system,Attempt:1,} returns sandbox id \"8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4\"" Sep 12 17:11:41.538263 sshd[5469]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:41.559406 systemd[1]: sshd@7-172.31.22.180:22-147.75.109.163:53640.service: Deactivated successfully. Sep 12 17:11:41.578612 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:11:41.587197 systemd-logind[2103]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:11:41.597946 systemd-logind[2103]: Removed session 8. 
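The "Gained IPv6LL" entries, and the ntpd listener on fe80::ecee:eeff:feee:eeee that appears a few seconds after each cali* interface comes up, are plain EUI-64 autoconfiguration: Calico pins the host side of every veth to the fixed MAC ee:ee:ee:ee:ee:ee, and the RFC 4291 mapping of that MAC is exactly the link-local address in the log. A small Go check:

```go
package main

import (
	"fmt"
	"net"
)

// linkLocalEUI64 derives the fe80:: address an interface autoconfigures from
// its MAC: flip the universal/local bit of the first octet and splice ff:fe
// into the middle (RFC 4291, Appendix A).
func linkLocalEUI64(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02 // flip the U/L bit: 0xee -> 0xec
	ip[9], ip[10] = mac[1], mac[2]
	ip[11], ip[12] = 0xff, 0xfe
	ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
	return ip
}

func main() {
	// Calico fixes the host side of every veth at ee:ee:ee:ee:ee:ee, which is
	// why each cali* interface lands on the same link-local address.
	mac, _ := net.ParseMAC("ee:ee:ee:ee:ee:ee")
	fmt.Println(linkLocalEUI64(mac)) // fe80::ecee:eeff:feee:eeee
}
```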
Sep 12 17:11:41.606056 containerd[2128]: time="2025-09-12T17:11:41.605264043Z" level=info msg="StartContainer for \"694732355b76ef324167c99c5634580ba42bbcd7c059299833ee96ec9936f2a6\" returns successfully" Sep 12 17:11:41.623093 systemd-networkd[1689]: vxlan.calico: Gained IPv6LL Sep 12 17:11:41.664867 containerd[2128]: 2025-09-12 17:11:40.781 [INFO][5424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Sep 12 17:11:41.664867 containerd[2128]: 2025-09-12 17:11:40.781 [INFO][5424] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" iface="eth0" netns="/var/run/netns/cni-27eb71c1-93fa-3b15-a8b9-482aedada0ff" Sep 12 17:11:41.664867 containerd[2128]: 2025-09-12 17:11:40.782 [INFO][5424] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" iface="eth0" netns="/var/run/netns/cni-27eb71c1-93fa-3b15-a8b9-482aedada0ff" Sep 12 17:11:41.664867 containerd[2128]: 2025-09-12 17:11:40.782 [INFO][5424] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" iface="eth0" netns="/var/run/netns/cni-27eb71c1-93fa-3b15-a8b9-482aedada0ff" Sep 12 17:11:41.664867 containerd[2128]: 2025-09-12 17:11:40.782 [INFO][5424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Sep 12 17:11:41.664867 containerd[2128]: 2025-09-12 17:11:40.782 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Sep 12 17:11:41.664867 containerd[2128]: 2025-09-12 17:11:41.442 [INFO][5507] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" HandleID="k8s-pod-network.9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:11:41.664867 containerd[2128]: 2025-09-12 17:11:41.442 [INFO][5507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:41.664867 containerd[2128]: 2025-09-12 17:11:41.443 [INFO][5507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:41.664867 containerd[2128]: 2025-09-12 17:11:41.542 [WARNING][5507] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" HandleID="k8s-pod-network.9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:11:41.664867 containerd[2128]: 2025-09-12 17:11:41.543 [INFO][5507] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" HandleID="k8s-pod-network.9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:11:41.664867 containerd[2128]: 2025-09-12 17:11:41.560 [INFO][5507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:41.664867 containerd[2128]: 2025-09-12 17:11:41.617 [INFO][5424] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Sep 12 17:11:41.673994 containerd[2128]: time="2025-09-12T17:11:41.668562964Z" level=info msg="TearDown network for sandbox \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\" successfully" Sep 12 17:11:41.673994 containerd[2128]: time="2025-09-12T17:11:41.670910020Z" level=info msg="StopPodSandbox for \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\" returns successfully" Sep 12 17:11:41.678666 systemd[1]: run-netns-cni\x2d27eb71c1\x2d93fa\x2d3b15\x2da8b9\x2d482aedada0ff.mount: Deactivated successfully. Sep 12 17:11:41.681842 containerd[2128]: time="2025-09-12T17:11:41.680201428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7964ddc67d-2fn9n,Uid:831c8b21-3a30-4e09-bfba-cb39dd0935d8,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:11:41.851316 containerd[2128]: 2025-09-12 17:11:40.861 [INFO][5423] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Sep 12 17:11:41.851316 containerd[2128]: 2025-09-12 17:11:40.861 [INFO][5423] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" iface="eth0" netns="/var/run/netns/cni-3e66f85f-b5b1-fd8b-d6ed-2fa1215dd750" Sep 12 17:11:41.851316 containerd[2128]: 2025-09-12 17:11:40.863 [INFO][5423] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" iface="eth0" netns="/var/run/netns/cni-3e66f85f-b5b1-fd8b-d6ed-2fa1215dd750" Sep 12 17:11:41.851316 containerd[2128]: 2025-09-12 17:11:40.901 [INFO][5423] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" iface="eth0" netns="/var/run/netns/cni-3e66f85f-b5b1-fd8b-d6ed-2fa1215dd750" Sep 12 17:11:41.851316 containerd[2128]: 2025-09-12 17:11:40.911 [INFO][5423] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Sep 12 17:11:41.851316 containerd[2128]: 2025-09-12 17:11:40.911 [INFO][5423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Sep 12 17:11:41.851316 containerd[2128]: 2025-09-12 17:11:41.688 [INFO][5516] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" HandleID="k8s-pod-network.8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:11:41.851316 containerd[2128]: 2025-09-12 17:11:41.692 [INFO][5516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:41.851316 containerd[2128]: 2025-09-12 17:11:41.693 [INFO][5516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:41.851316 containerd[2128]: 2025-09-12 17:11:41.740 [WARNING][5516] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" HandleID="k8s-pod-network.8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:11:41.851316 containerd[2128]: 2025-09-12 17:11:41.740 [INFO][5516] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" HandleID="k8s-pod-network.8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:11:41.851316 containerd[2128]: 2025-09-12 17:11:41.745 [INFO][5516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:41.851316 containerd[2128]: 2025-09-12 17:11:41.807 [INFO][5423] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Sep 12 17:11:41.859038 containerd[2128]: time="2025-09-12T17:11:41.851202977Z" level=info msg="TearDown network for sandbox \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\" successfully" Sep 12 17:11:41.859038 containerd[2128]: time="2025-09-12T17:11:41.856232741Z" level=info msg="StopPodSandbox for \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\" returns successfully" Sep 12 17:11:41.871720 systemd[1]: run-netns-cni\x2d3e66f85f\x2db5b1\x2dfd8b\x2dd6ed\x2d2fa1215dd750.mount: Deactivated successfully. Sep 12 17:11:41.894451 containerd[2128]: time="2025-09-12T17:11:41.891988601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7964ddc67d-v8tpq,Uid:bc9ee936-954c-4af7-aedd-76c2de2ef89a,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:11:41.926746 containerd[2128]: time="2025-09-12T17:11:41.925258757Z" level=info msg="StopPodSandbox for \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\"" Sep 12 17:11:41.953775 containerd[2128]: 2025-09-12 17:11:40.937 [INFO][5417] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Sep 12 17:11:41.953775 containerd[2128]: 2025-09-12 17:11:40.937 [INFO][5417] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" iface="eth0" netns="/var/run/netns/cni-9aff41bf-2764-2213-2624-b0f877dbb4c1" Sep 12 17:11:41.953775 containerd[2128]: 2025-09-12 17:11:40.938 [INFO][5417] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" iface="eth0" netns="/var/run/netns/cni-9aff41bf-2764-2213-2624-b0f877dbb4c1" Sep 12 17:11:41.953775 containerd[2128]: 2025-09-12 17:11:40.944 [INFO][5417] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" iface="eth0" netns="/var/run/netns/cni-9aff41bf-2764-2213-2624-b0f877dbb4c1" Sep 12 17:11:41.953775 containerd[2128]: 2025-09-12 17:11:40.945 [INFO][5417] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Sep 12 17:11:41.953775 containerd[2128]: 2025-09-12 17:11:40.945 [INFO][5417] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Sep 12 17:11:41.953775 containerd[2128]: 2025-09-12 17:11:41.691 [INFO][5521] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" HandleID="k8s-pod-network.6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Workload="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" Sep 12 17:11:41.953775 containerd[2128]: 2025-09-12 17:11:41.695 [INFO][5521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:41.953775 containerd[2128]: 2025-09-12 17:11:41.745 [INFO][5521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:41.953775 containerd[2128]: 2025-09-12 17:11:41.879 [WARNING][5521] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" HandleID="k8s-pod-network.6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Workload="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" Sep 12 17:11:41.953775 containerd[2128]: 2025-09-12 17:11:41.882 [INFO][5521] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" HandleID="k8s-pod-network.6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Workload="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" Sep 12 17:11:41.953775 containerd[2128]: 2025-09-12 17:11:41.895 [INFO][5521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:41.953775 containerd[2128]: 2025-09-12 17:11:41.941 [INFO][5417] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Sep 12 17:11:41.959133 containerd[2128]: time="2025-09-12T17:11:41.958968017Z" level=info msg="TearDown network for sandbox \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\" successfully" Sep 12 17:11:41.960046 containerd[2128]: time="2025-09-12T17:11:41.959980193Z" level=info msg="StopPodSandbox for \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\" returns successfully" Sep 12 17:11:41.961282 containerd[2128]: time="2025-09-12T17:11:41.960971165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcf8cdb5c-s49l2,Uid:e2fad767-5bdb-4cdd-95b1-9b4c25a4c939,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:11:41.982191 systemd[1]: run-netns-cni\x2d9aff41bf\x2d2764\x2d2213\x2d2624\x2db0f877dbb4c1.mount: Deactivated successfully. Sep 12 17:11:42.031060 containerd[2128]: 2025-09-12 17:11:41.175 [INFO][5425] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Sep 12 17:11:42.031060 containerd[2128]: 2025-09-12 17:11:41.221 [INFO][5425] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" iface="eth0" netns="/var/run/netns/cni-0acbbecb-fdc7-0b16-d765-39bca78c70ac" Sep 12 17:11:42.031060 containerd[2128]: 2025-09-12 17:11:41.222 [INFO][5425] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" iface="eth0" netns="/var/run/netns/cni-0acbbecb-fdc7-0b16-d765-39bca78c70ac" Sep 12 17:11:42.031060 containerd[2128]: 2025-09-12 17:11:41.229 [INFO][5425] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" iface="eth0" netns="/var/run/netns/cni-0acbbecb-fdc7-0b16-d765-39bca78c70ac" Sep 12 17:11:42.031060 containerd[2128]: 2025-09-12 17:11:41.229 [INFO][5425] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Sep 12 17:11:42.031060 containerd[2128]: 2025-09-12 17:11:41.229 [INFO][5425] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Sep 12 17:11:42.031060 containerd[2128]: 2025-09-12 17:11:41.832 [INFO][5548] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" HandleID="k8s-pod-network.a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Workload="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" Sep 12 17:11:42.031060 containerd[2128]: 2025-09-12 17:11:41.833 [INFO][5548] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:42.031060 containerd[2128]: 2025-09-12 17:11:41.895 [INFO][5548] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:42.031060 containerd[2128]: 2025-09-12 17:11:41.963 [WARNING][5548] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" HandleID="k8s-pod-network.a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Workload="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" Sep 12 17:11:42.031060 containerd[2128]: 2025-09-12 17:11:41.963 [INFO][5548] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" HandleID="k8s-pod-network.a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Workload="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" Sep 12 17:11:42.031060 containerd[2128]: 2025-09-12 17:11:41.982 [INFO][5548] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:42.031060 containerd[2128]: 2025-09-12 17:11:42.010 [INFO][5425] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Sep 12 17:11:42.042959 containerd[2128]: time="2025-09-12T17:11:42.041583590Z" level=info msg="TearDown network for sandbox \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\" successfully" Sep 12 17:11:42.042959 containerd[2128]: time="2025-09-12T17:11:42.041651402Z" level=info msg="StopPodSandbox for \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\" returns successfully" Sep 12 17:11:42.042959 containerd[2128]: time="2025-09-12T17:11:42.042577970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2fhht,Uid:99059a22-90fb-418d-a2c0-7e943cbdb29d,Namespace:calico-system,Attempt:1,}" Sep 12 17:11:42.084899 containerd[2128]: 2025-09-12 17:11:41.207 [INFO][5422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Sep 12 17:11:42.084899 containerd[2128]: 2025-09-12 17:11:41.216 [INFO][5422] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" iface="eth0" netns="/var/run/netns/cni-3707e899-e50b-3f79-25b4-1a8de86b212d" Sep 12 17:11:42.084899 containerd[2128]: 2025-09-12 17:11:41.218 [INFO][5422] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" iface="eth0" netns="/var/run/netns/cni-3707e899-e50b-3f79-25b4-1a8de86b212d" Sep 12 17:11:42.084899 containerd[2128]: 2025-09-12 17:11:41.220 [INFO][5422] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" iface="eth0" netns="/var/run/netns/cni-3707e899-e50b-3f79-25b4-1a8de86b212d" Sep 12 17:11:42.084899 containerd[2128]: 2025-09-12 17:11:41.220 [INFO][5422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Sep 12 17:11:42.084899 containerd[2128]: 2025-09-12 17:11:41.220 [INFO][5422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Sep 12 17:11:42.084899 containerd[2128]: 2025-09-12 17:11:41.951 [INFO][5546] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" HandleID="k8s-pod-network.da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" Sep 12 17:11:42.084899 containerd[2128]: 2025-09-12 17:11:41.952 [INFO][5546] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:42.084899 containerd[2128]: 2025-09-12 17:11:41.982 [INFO][5546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:42.084899 containerd[2128]: 2025-09-12 17:11:42.029 [WARNING][5546] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" HandleID="k8s-pod-network.da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" Sep 12 17:11:42.084899 containerd[2128]: 2025-09-12 17:11:42.029 [INFO][5546] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" HandleID="k8s-pod-network.da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" Sep 12 17:11:42.084899 containerd[2128]: 2025-09-12 17:11:42.037 [INFO][5546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:42.084899 containerd[2128]: 2025-09-12 17:11:42.065 [INFO][5422] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Sep 12 17:11:42.090017 containerd[2128]: time="2025-09-12T17:11:42.089147762Z" level=info msg="TearDown network for sandbox \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\" successfully" Sep 12 17:11:42.090017 containerd[2128]: time="2025-09-12T17:11:42.089207942Z" level=info msg="StopPodSandbox for \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\" returns successfully" Sep 12 17:11:42.092740 containerd[2128]: time="2025-09-12T17:11:42.092073782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h2wq2,Uid:92e449f8-4616-42f4-87f1-0de4ba32c288,Namespace:kube-system,Attempt:1,}" Sep 12 17:11:42.588293 kubelet[3602]: I0912 17:11:42.588189 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-h298k" podStartSLOduration=49.588165484 podStartE2EDuration="49.588165484s" podCreationTimestamp="2025-09-12 17:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:11:42.588090496 +0000 UTC m=+53.896829056" watchObservedRunningTime="2025-09-12 17:11:42.588165484 +0000 UTC m=+53.896904020" Sep 12 17:11:42.713611 systemd[1]: run-netns-cni\x2d0acbbecb\x2dfdc7\x2d0b16\x2dd765\x2d39bca78c70ac.mount: Deactivated successfully. Sep 12 17:11:42.715257 systemd[1]: run-netns-cni\x2d3707e899\x2de50b\x2d3f79\x2d25b4\x2d1a8de86b212d.mount: Deactivated successfully. 
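Every teardown traced above follows the same defensive pattern: the CNI DEL finds the workload's veth already gone, releases the IP first by handle ID (ipam_plugin.go 412) and then by workload ID (440), and logs "Asked to release address but it doesn't exist. Ignoring" (429) instead of failing, so repeated deletes stay idempotent; the leftover network namespace is then torn down by a transient systemd mount unit (the run-netns-cni\x2d… lines). A minimal Go sketch of that release shape follows; ipamStore, release, and the demo IDs are illustrative names, not Calico's actual API.

package main

import (
	"fmt"
	"sync"
)

// ipamStore stands in for the IPAM datastore; the mutex plays the role
// of the "host-wide IPAM lock" acquired and released around every call.
type ipamStore struct {
	mu       sync.Mutex
	byHandle map[string]string // allocation handle -> assigned IP
}

// release mirrors the visible sequence: try the handle ID first, fall
// back to the workload ID, and treat "doesn't exist" as success so a
// repeated CNI DEL cannot fail.
func (s *ipamStore) release(handleID, workloadID string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	for _, id := range []string{handleID, workloadID} {
		if ip, ok := s.byHandle[id]; ok {
			delete(s.byHandle, id)
			fmt.Printf("released %s (handle %s)\n", ip, id)
			return
		}
	}
	fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
}

func main() {
	s := &ipamStore{byHandle: map[string]string{}}
	// Address already gone: warns, still succeeds, as in the traces above.
	s.release("demo-handle", "demo-workload")
}

Treating not-found as success is what lets kubelet retry sandbox teardown without wedging on a half-cleaned pod.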
Sep 12 17:11:42.918424 systemd-networkd[1689]: cali76836b15a77: Link UP Sep 12 17:11:42.920606 systemd-networkd[1689]: cali76836b15a77: Gained carrier Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.316 [INFO][5605] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0 calico-apiserver-7964ddc67d- calico-apiserver 831c8b21-3a30-4e09-bfba-cb39dd0935d8 1024 0 2025-09-12 17:11:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7964ddc67d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-180 calico-apiserver-7964ddc67d-2fn9n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali76836b15a77 [] [] }} ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Namespace="calico-apiserver" Pod="calico-apiserver-7964ddc67d-2fn9n" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-" Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.316 [INFO][5605] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Namespace="calico-apiserver" Pod="calico-apiserver-7964ddc67d-2fn9n" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.771 [INFO][5691] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" HandleID="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.773 [INFO][5691] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" HandleID="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002b9f70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-180", "pod":"calico-apiserver-7964ddc67d-2fn9n", "timestamp":"2025-09-12 17:11:42.771508001 +0000 UTC"}, Hostname:"ip-172-31-22-180", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.773 [INFO][5691] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.773 [INFO][5691] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.773 [INFO][5691] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-180' Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.822 [INFO][5691] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" host="ip-172-31-22-180" Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.839 [INFO][5691] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-180" Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.852 [INFO][5691] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.857 [INFO][5691] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.864 [INFO][5691] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.864 [INFO][5691] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" host="ip-172-31-22-180" Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.868 [INFO][5691] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.880 [INFO][5691] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" host="ip-172-31-22-180" Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.898 [INFO][5691] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.132/26] block=192.168.2.128/26 handle="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" host="ip-172-31-22-180" Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.899 [INFO][5691] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.132/26] handle="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" host="ip-172-31-22-180" Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.899 [INFO][5691] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
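The assignment trace just above shows the allocator's full path: acquire the host-wide IPAM lock, look up this host's block affinities, confirm the affine block 192.168.2.128/26, claim the next free address (192.168.2.132 here, with .133, .134, and .135 following for the other pods in this section), write the block back to persist the claim, and release the lock. The sketch below reproduces that flow under stated assumptions: the simplified block structure, the pre-seeded .128–.131 allocations, and the function names are stand-ins, not Calico's ipam.go.

package main

import (
	"fmt"
	"net"
	"sync"
)

type block struct {
	cidr *net.IPNet
	used map[string]bool // addresses already handed out from this block
}

var (
	hostLock sync.Mutex // the "host-wide IPAM lock"
	affine   *block     // block with affinity to this host
)

// autoAssign claims the first free address in the affine block.
func autoAssign(handle string) (net.IP, error) {
	hostLock.Lock()
	defer hostLock.Unlock() // released once the claim is durable
	for ip := affine.cidr.IP; affine.cidr.Contains(ip); ip = next(ip) {
		if !affine.used[ip.String()] {
			affine.used[ip.String()] = true // "Writing block in order to claim IPs"
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted for handle %s", affine.cidr, handle)
}

// next returns ip + 1, carrying across octets.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.2.128/26")
	// Assumed prior allocations; the log only shows .132 onward being claimed.
	affine = &block{cidr: cidr, used: map[string]bool{
		"192.168.2.128": true, "192.168.2.129": true,
		"192.168.2.130": true, "192.168.2.131": true,
	}}
	ip, _ := autoAssign("demo-handle")
	fmt.Println(ip) // 192.168.2.132, matching the claim traced above
}

Serializing every claim behind one host-wide lock trades parallelism for a simple invariant: the persisted block always matches what has been handed out.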
Sep 12 17:11:42.988305 containerd[2128]: 2025-09-12 17:11:42.899 [INFO][5691] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.132/26] IPv6=[] ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" HandleID="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:11:42.992804 containerd[2128]: 2025-09-12 17:11:42.910 [INFO][5605] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Namespace="calico-apiserver" Pod="calico-apiserver-7964ddc67d-2fn9n" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0", GenerateName:"calico-apiserver-7964ddc67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"831c8b21-3a30-4e09-bfba-cb39dd0935d8", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7964ddc67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"", Pod:"calico-apiserver-7964ddc67d-2fn9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali76836b15a77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:42.992804 containerd[2128]: 2025-09-12 17:11:42.912 [INFO][5605] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.132/32] ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Namespace="calico-apiserver" Pod="calico-apiserver-7964ddc67d-2fn9n" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:11:42.992804 containerd[2128]: 2025-09-12 17:11:42.912 [INFO][5605] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76836b15a77 ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Namespace="calico-apiserver" Pod="calico-apiserver-7964ddc67d-2fn9n" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:11:42.992804 containerd[2128]: 2025-09-12 17:11:42.926 [INFO][5605] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Namespace="calico-apiserver" Pod="calico-apiserver-7964ddc67d-2fn9n" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:11:42.992804 containerd[2128]: 2025-09-12 17:11:42.933 [INFO][5605] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Namespace="calico-apiserver" Pod="calico-apiserver-7964ddc67d-2fn9n" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0", GenerateName:"calico-apiserver-7964ddc67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"831c8b21-3a30-4e09-bfba-cb39dd0935d8", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7964ddc67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc", Pod:"calico-apiserver-7964ddc67d-2fn9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali76836b15a77", MAC:"4e:b1:8e:09:b8:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:42.992804 containerd[2128]: 2025-09-12 17:11:42.967 [INFO][5605] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Namespace="calico-apiserver" Pod="calico-apiserver-7964ddc67d-2fn9n" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:11:43.119407 systemd-networkd[1689]: calic522652ab8d: Link UP Sep 12 17:11:43.128949 systemd-networkd[1689]: calic522652ab8d: Gained carrier Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:42.469 [INFO][5648] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0 calico-apiserver-5dcf8cdb5c- calico-apiserver e2fad767-5bdb-4cdd-95b1-9b4c25a4c939 1027 0 2025-09-12 17:11:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5dcf8cdb5c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-180 calico-apiserver-5dcf8cdb5c-s49l2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic522652ab8d [] [] }} ContainerID="4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" Namespace="calico-apiserver" Pod="calico-apiserver-5dcf8cdb5c-s49l2" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-" Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:42.471 [INFO][5648] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" Namespace="calico-apiserver" Pod="calico-apiserver-5dcf8cdb5c-s49l2" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:42.785 [INFO][5710] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" HandleID="k8s-pod-network.4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" Workload="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:42.785 [INFO][5710] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" HandleID="k8s-pod-network.4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" Workload="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319510), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-180", "pod":"calico-apiserver-5dcf8cdb5c-s49l2", "timestamp":"2025-09-12 17:11:42.785024621 +0000 UTC"}, Hostname:"ip-172-31-22-180", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:42.785 [INFO][5710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:42.899 [INFO][5710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:42.899 [INFO][5710] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-180' Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:42.932 [INFO][5710] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" host="ip-172-31-22-180" Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:42.950 [INFO][5710] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-180" Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:42.971 [INFO][5710] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:42.980 [INFO][5710] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:42.993 [INFO][5710] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:42.993 [INFO][5710] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" host="ip-172-31-22-180" Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:43.005 [INFO][5710] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411 Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:43.016 [INFO][5710] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 
handle="k8s-pod-network.4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" host="ip-172-31-22-180" Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:43.042 [INFO][5710] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.133/26] block=192.168.2.128/26 handle="k8s-pod-network.4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" host="ip-172-31-22-180" Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:43.045 [INFO][5710] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.133/26] handle="k8s-pod-network.4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" host="ip-172-31-22-180" Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:43.047 [INFO][5710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:43.229166 containerd[2128]: 2025-09-12 17:11:43.048 [INFO][5710] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.133/26] IPv6=[] ContainerID="4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" HandleID="k8s-pod-network.4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" Workload="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" Sep 12 17:11:43.230329 containerd[2128]: 2025-09-12 17:11:43.077 [INFO][5648] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" Namespace="calico-apiserver" Pod="calico-apiserver-5dcf8cdb5c-s49l2" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0", GenerateName:"calico-apiserver-5dcf8cdb5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2fad767-5bdb-4cdd-95b1-9b4c25a4c939", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcf8cdb5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"", Pod:"calico-apiserver-5dcf8cdb5c-s49l2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic522652ab8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:43.230329 containerd[2128]: 2025-09-12 17:11:43.077 [INFO][5648] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.133/32] ContainerID="4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" Namespace="calico-apiserver" Pod="calico-apiserver-5dcf8cdb5c-s49l2" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" Sep 12 17:11:43.230329 containerd[2128]: 2025-09-12 17:11:43.078 [INFO][5648] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic522652ab8d ContainerID="4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" Namespace="calico-apiserver" Pod="calico-apiserver-5dcf8cdb5c-s49l2" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" Sep 12 17:11:43.230329 containerd[2128]: 2025-09-12 17:11:43.145 [INFO][5648] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" Namespace="calico-apiserver" Pod="calico-apiserver-5dcf8cdb5c-s49l2" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" Sep 12 17:11:43.230329 containerd[2128]: 2025-09-12 17:11:43.159 [INFO][5648] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" Namespace="calico-apiserver" Pod="calico-apiserver-5dcf8cdb5c-s49l2" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0", GenerateName:"calico-apiserver-5dcf8cdb5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2fad767-5bdb-4cdd-95b1-9b4c25a4c939", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcf8cdb5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411", Pod:"calico-apiserver-5dcf8cdb5c-s49l2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic522652ab8d", MAC:"9a:d3:83:27:c2:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:43.230329 containerd[2128]: 2025-09-12 17:11:43.215 [INFO][5648] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411" Namespace="calico-apiserver" Pod="calico-apiserver-5dcf8cdb5c-s49l2" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" Sep 12 17:11:43.240814 containerd[2128]: time="2025-09-12T17:11:43.237367756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:43.240814 containerd[2128]: time="2025-09-12T17:11:43.237579196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:43.240814 containerd[2128]: time="2025-09-12T17:11:43.237606652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:43.240814 containerd[2128]: time="2025-09-12T17:11:43.238035676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:43.362074 systemd-networkd[1689]: cali662660f5d8e: Link UP Sep 12 17:11:43.365873 systemd-networkd[1689]: cali662660f5d8e: Gained carrier Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:42.328 [INFO][5636] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0 calico-apiserver-7964ddc67d- calico-apiserver bc9ee936-954c-4af7-aedd-76c2de2ef89a 1025 0 2025-09-12 17:11:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7964ddc67d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-180 calico-apiserver-7964ddc67d-v8tpq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali662660f5d8e [] [] }} ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Namespace="calico-apiserver" Pod="calico-apiserver-7964ddc67d-v8tpq" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-" Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:42.328 [INFO][5636] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Namespace="calico-apiserver" Pod="calico-apiserver-7964ddc67d-v8tpq" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:42.817 [INFO][5697] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" HandleID="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:42.820 [INFO][5697] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" HandleID="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000183ed0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-180", "pod":"calico-apiserver-7964ddc67d-v8tpq", "timestamp":"2025-09-12 17:11:42.817275089 +0000 UTC"}, Hostname:"ip-172-31-22-180", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:42.820 [INFO][5697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:43.047 [INFO][5697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:43.050 [INFO][5697] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-180' Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:43.122 [INFO][5697] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" host="ip-172-31-22-180" Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:43.169 [INFO][5697] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-180" Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:43.212 [INFO][5697] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:43.222 [INFO][5697] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:43.249 [INFO][5697] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:43.249 [INFO][5697] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" host="ip-172-31-22-180" Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:43.275 [INFO][5697] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:43.307 [INFO][5697] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" host="ip-172-31-22-180" Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:43.330 [INFO][5697] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.134/26] block=192.168.2.128/26 handle="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" host="ip-172-31-22-180" Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:43.332 [INFO][5697] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.134/26] handle="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" host="ip-172-31-22-180" Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:43.332 [INFO][5697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
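Each sandbox in this section produces two WorkloadEndpoint dumps: "Populated endpoint" with ContainerID and MAC still empty, then "Added Mac, interface name, and active container ID" once the veth pair actually exists, followed by "Wrote updated endpoint to datastore". The trimmed struct below keeps only the fields that change between the two phases, filled with the values from the calico-apiserver-7964ddc67d-v8tpq record that follows; it is a sketch of the visible sequence, not the projectcalico.org/v3 client API.

package main

import "fmt"

// workloadEndpointSpec is a trimmed view of v3.WorkloadEndpointSpec as
// dumped in the log; only the fields relevant to the two phases are kept.
type workloadEndpointSpec struct {
	Node          string
	ContainerID   string // empty in "Populated endpoint", set in "Added Mac..."
	Pod           string
	Endpoint      string // always eth0 here
	IPNetworks    []string
	InterfaceName string // host-side veth, e.g. cali662660f5d8e
	MAC           string // empty until the veth exists
}

func main() {
	// Phase 1: "Populated endpoint" — identity, IP, and veth name are known.
	ep := workloadEndpointSpec{
		Node:          "ip-172-31-22-180",
		Pod:           "calico-apiserver-7964ddc67d-v8tpq",
		Endpoint:      "eth0",
		IPNetworks:    []string{"192.168.2.134/32"},
		InterfaceName: "cali662660f5d8e",
	}
	// Phase 2: "Added Mac, interface name, and active container ID" —
	// filled in only after the veth pair has been created in the dataplane.
	ep.MAC = "66:03:fe:da:95:e7"
	ep.ContainerID = "9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e"
	fmt.Printf("%+v\n", ep) // then "Wrote updated endpoint to datastore"
}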
Sep 12 17:11:43.470620 containerd[2128]: 2025-09-12 17:11:43.335 [INFO][5697] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.134/26] IPv6=[] ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" HandleID="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:11:43.482451 containerd[2128]: 2025-09-12 17:11:43.343 [INFO][5636] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Namespace="calico-apiserver" Pod="calico-apiserver-7964ddc67d-v8tpq" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0", GenerateName:"calico-apiserver-7964ddc67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc9ee936-954c-4af7-aedd-76c2de2ef89a", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7964ddc67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"", Pod:"calico-apiserver-7964ddc67d-v8tpq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali662660f5d8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:43.482451 containerd[2128]: 2025-09-12 17:11:43.343 [INFO][5636] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.134/32] ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Namespace="calico-apiserver" Pod="calico-apiserver-7964ddc67d-v8tpq" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:11:43.482451 containerd[2128]: 2025-09-12 17:11:43.343 [INFO][5636] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali662660f5d8e ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Namespace="calico-apiserver" Pod="calico-apiserver-7964ddc67d-v8tpq" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:11:43.482451 containerd[2128]: 2025-09-12 17:11:43.355 [INFO][5636] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Namespace="calico-apiserver" Pod="calico-apiserver-7964ddc67d-v8tpq" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:11:43.482451 containerd[2128]: 2025-09-12 17:11:43.356 [INFO][5636] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Namespace="calico-apiserver" Pod="calico-apiserver-7964ddc67d-v8tpq" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0", GenerateName:"calico-apiserver-7964ddc67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc9ee936-954c-4af7-aedd-76c2de2ef89a", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7964ddc67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e", Pod:"calico-apiserver-7964ddc67d-v8tpq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali662660f5d8e", MAC:"66:03:fe:da:95:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:43.482451 containerd[2128]: 2025-09-12 17:11:43.401 [INFO][5636] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Namespace="calico-apiserver" Pod="calico-apiserver-7964ddc67d-v8tpq" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:11:43.534587 containerd[2128]: 2025-09-12 17:11:42.498 [INFO][5634] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Sep 12 17:11:43.534587 containerd[2128]: 2025-09-12 17:11:42.502 [INFO][5634] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" iface="eth0" netns="/var/run/netns/cni-f84d9caf-1e68-899a-b25f-da6ea3f46c92" Sep 12 17:11:43.534587 containerd[2128]: 2025-09-12 17:11:42.505 [INFO][5634] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" iface="eth0" netns="/var/run/netns/cni-f84d9caf-1e68-899a-b25f-da6ea3f46c92" Sep 12 17:11:43.534587 containerd[2128]: 2025-09-12 17:11:42.506 [INFO][5634] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" iface="eth0" netns="/var/run/netns/cni-f84d9caf-1e68-899a-b25f-da6ea3f46c92" Sep 12 17:11:43.534587 containerd[2128]: 2025-09-12 17:11:42.507 [INFO][5634] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Sep 12 17:11:43.534587 containerd[2128]: 2025-09-12 17:11:42.507 [INFO][5634] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Sep 12 17:11:43.534587 containerd[2128]: 2025-09-12 17:11:42.846 [INFO][5711] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" HandleID="k8s-pod-network.43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Workload="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" Sep 12 17:11:43.534587 containerd[2128]: 2025-09-12 17:11:42.847 [INFO][5711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:43.534587 containerd[2128]: 2025-09-12 17:11:43.332 [INFO][5711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:43.534587 containerd[2128]: 2025-09-12 17:11:43.459 [WARNING][5711] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" HandleID="k8s-pod-network.43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Workload="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" Sep 12 17:11:43.534587 containerd[2128]: 2025-09-12 17:11:43.459 [INFO][5711] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" HandleID="k8s-pod-network.43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Workload="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" Sep 12 17:11:43.534587 containerd[2128]: 2025-09-12 17:11:43.462 [INFO][5711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:43.534587 containerd[2128]: 2025-09-12 17:11:43.512 [INFO][5634] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Sep 12 17:11:43.559362 containerd[2128]: time="2025-09-12T17:11:43.554242013Z" level=info msg="TearDown network for sandbox \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\" successfully" Sep 12 17:11:43.559362 containerd[2128]: time="2025-09-12T17:11:43.554294261Z" level=info msg="StopPodSandbox for \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\" returns successfully" Sep 12 17:11:43.559362 containerd[2128]: time="2025-09-12T17:11:43.552431009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:43.559362 containerd[2128]: time="2025-09-12T17:11:43.552976013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:43.565395 containerd[2128]: time="2025-09-12T17:11:43.560157209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f8cc79964-sfrvv,Uid:248d5f01-15dc-4b45-9fde-eec5e30019c2,Namespace:calico-system,Attempt:1,}" Sep 12 17:11:43.561278 systemd[1]: run-netns-cni\x2df84d9caf\x2d1e68\x2d899a\x2db25f\x2dda6ea3f46c92.mount: Deactivated successfully. Sep 12 17:11:43.565623 containerd[2128]: time="2025-09-12T17:11:43.561328973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:43.565623 containerd[2128]: time="2025-09-12T17:11:43.561585809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:43.700901 systemd[1]: run-containerd-runc-k8s.io-4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411-runc.Iab4L0.mount: Deactivated successfully. Sep 12 17:11:43.785298 systemd-networkd[1689]: cali513ec739653: Link UP Sep 12 17:11:43.788019 containerd[2128]: time="2025-09-12T17:11:43.786934026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7964ddc67d-2fn9n,Uid:831c8b21-3a30-4e09-bfba-cb39dd0935d8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc\"" Sep 12 17:11:43.806803 systemd-networkd[1689]: cali513ec739653: Gained carrier Sep 12 17:11:43.839787 containerd[2128]: time="2025-09-12T17:11:43.835330974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:43.839787 containerd[2128]: time="2025-09-12T17:11:43.835439046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:43.839787 containerd[2128]: time="2025-09-12T17:11:43.835468266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:43.843120 containerd[2128]: time="2025-09-12T17:11:43.835684038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:42.430 [INFO][5660] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0 csi-node-driver- calico-system 99059a22-90fb-418d-a2c0-7e943cbdb29d 1028 0 2025-09-12 17:11:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-22-180 csi-node-driver-2fhht eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali513ec739653 [] [] }} ContainerID="d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" Namespace="calico-system" Pod="csi-node-driver-2fhht" WorkloadEndpoint="ip--172--31--22--180-k8s-csi--node--driver--2fhht-" Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:42.430 [INFO][5660] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" Namespace="calico-system" Pod="csi-node-driver-2fhht" WorkloadEndpoint="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:42.844 [INFO][5701] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" HandleID="k8s-pod-network.d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" Workload="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:42.847 [INFO][5701] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" HandleID="k8s-pod-network.d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" Workload="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000334490), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-180", "pod":"csi-node-driver-2fhht", "timestamp":"2025-09-12 17:11:42.84425115 +0000 UTC"}, Hostname:"ip-172-31-22-180", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:42.847 [INFO][5701] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:43.462 [INFO][5701] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:43.462 [INFO][5701] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-180' Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:43.520 [INFO][5701] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" host="ip-172-31-22-180" Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:43.580 [INFO][5701] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-180" Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:43.605 [INFO][5701] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:43.612 [INFO][5701] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:43.634 [INFO][5701] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:43.634 [INFO][5701] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" host="ip-172-31-22-180" Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:43.652 [INFO][5701] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48 Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:43.689 [INFO][5701] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" host="ip-172-31-22-180" Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:43.725 [INFO][5701] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.135/26] block=192.168.2.128/26 handle="k8s-pod-network.d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" host="ip-172-31-22-180" Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:43.726 [INFO][5701] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.135/26] handle="k8s-pod-network.d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" host="ip-172-31-22-180" Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:43.726 [INFO][5701] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
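All four host-side interfaces brought up in this section (cali76836b15a77, calic522652ab8d, cali662660f5d8e, cali513ec739653) share the same shape: a "cali" prefix plus eleven hex characters, which suggests a name derived deterministically from the endpoint identity so that retries land on the same device. The sketch below is an assumption about that scheme; the hash function, its input string, and vethName are hypothetical, not Calico's real naming code.

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName derives a stable host-side interface name from an endpoint
// identity. Linux caps interface names at 15 bytes (IFNAMSIZ - 1), so a
// 4-byte prefix leaves room for 11 characters of hash.
func vethName(prefix, workloadEndpointID string) string {
	sum := sha1.Sum([]byte(workloadEndpointID)) // assumed hash choice
	return prefix + hex.EncodeToString(sum[:])[:11]
}

func main() {
	// Hypothetical identity string for the csi-node-driver endpoint below.
	fmt.Println(vethName("cali", "calico-system/csi-node-driver-2fhht/eth0"))
}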
Sep 12 17:11:43.902360 containerd[2128]: 2025-09-12 17:11:43.726 [INFO][5701] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.135/26] IPv6=[] ContainerID="d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" HandleID="k8s-pod-network.d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" Workload="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" Sep 12 17:11:43.907932 containerd[2128]: 2025-09-12 17:11:43.756 [INFO][5660] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" Namespace="calico-system" Pod="csi-node-driver-2fhht" WorkloadEndpoint="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99059a22-90fb-418d-a2c0-7e943cbdb29d", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"", Pod:"csi-node-driver-2fhht", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali513ec739653", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:43.907932 containerd[2128]: 2025-09-12 17:11:43.756 [INFO][5660] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.135/32] ContainerID="d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" Namespace="calico-system" Pod="csi-node-driver-2fhht" WorkloadEndpoint="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" Sep 12 17:11:43.907932 containerd[2128]: 2025-09-12 17:11:43.756 [INFO][5660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali513ec739653 ContainerID="d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" Namespace="calico-system" Pod="csi-node-driver-2fhht" WorkloadEndpoint="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" Sep 12 17:11:43.907932 containerd[2128]: 2025-09-12 17:11:43.813 [INFO][5660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" Namespace="calico-system" Pod="csi-node-driver-2fhht" WorkloadEndpoint="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" Sep 12 17:11:43.907932 containerd[2128]: 2025-09-12 17:11:43.814 [INFO][5660] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" 
Namespace="calico-system" Pod="csi-node-driver-2fhht" WorkloadEndpoint="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99059a22-90fb-418d-a2c0-7e943cbdb29d", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48", Pod:"csi-node-driver-2fhht", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali513ec739653", MAC:"fe:ab:a7:44:79:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:43.907932 containerd[2128]: 2025-09-12 17:11:43.856 [INFO][5660] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48" Namespace="calico-system" Pod="csi-node-driver-2fhht" WorkloadEndpoint="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" Sep 12 17:11:43.924799 systemd-resolved[2028]: Under memory pressure, flushing caches. Sep 12 17:11:43.924871 systemd-resolved[2028]: Flushed all caches. Sep 12 17:11:43.928723 systemd-journald[1602]: Under memory pressure, flushing caches. 
Sep 12 17:11:44.044838 systemd-networkd[1689]: cali6b6628fdf1f: Link UP Sep 12 17:11:44.080318 systemd-networkd[1689]: cali6b6628fdf1f: Gained carrier Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:42.523 [INFO][5674] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0 coredns-7c65d6cfc9- kube-system 92e449f8-4616-42f4-87f1-0de4ba32c288 1029 0 2025-09-12 17:10:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-180 coredns-7c65d6cfc9-h2wq2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6b6628fdf1f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h2wq2" WorkloadEndpoint="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-" Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:42.523 [INFO][5674] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h2wq2" WorkloadEndpoint="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:42.856 [INFO][5719] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" HandleID="k8s-pod-network.9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:42.856 [INFO][5719] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" HandleID="k8s-pod-network.9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039a4d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-180", "pod":"coredns-7c65d6cfc9-h2wq2", "timestamp":"2025-09-12 17:11:42.856034082 +0000 UTC"}, Hostname:"ip-172-31-22-180", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:42.856 [INFO][5719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:43.726 [INFO][5719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:43.727 [INFO][5719] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-180' Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:43.757 [INFO][5719] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" host="ip-172-31-22-180" Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:43.789 [INFO][5719] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-180" Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:43.809 [INFO][5719] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:43.814 [INFO][5719] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:43.841 [INFO][5719] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:43.845 [INFO][5719] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" host="ip-172-31-22-180" Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:43.855 [INFO][5719] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:43.893 [INFO][5719] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" host="ip-172-31-22-180" Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:43.916 [INFO][5719] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.136/26] block=192.168.2.128/26 handle="k8s-pod-network.9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" host="ip-172-31-22-180" Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:43.916 [INFO][5719] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.136/26] handle="k8s-pod-network.9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" host="ip-172-31-22-180" Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:43.916 [INFO][5719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:11:44.137173 containerd[2128]: 2025-09-12 17:11:43.916 [INFO][5719] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.136/26] IPv6=[] ContainerID="9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" HandleID="k8s-pod-network.9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" Sep 12 17:11:44.139152 containerd[2128]: 2025-09-12 17:11:43.963 [INFO][5674] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h2wq2" WorkloadEndpoint="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"92e449f8-4616-42f4-87f1-0de4ba32c288", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"", Pod:"coredns-7c65d6cfc9-h2wq2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b6628fdf1f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:44.139152 containerd[2128]: 2025-09-12 17:11:43.982 [INFO][5674] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.136/32] ContainerID="9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h2wq2" WorkloadEndpoint="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" Sep 12 17:11:44.139152 containerd[2128]: 2025-09-12 17:11:43.982 [INFO][5674] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b6628fdf1f ContainerID="9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h2wq2" WorkloadEndpoint="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" Sep 12 17:11:44.139152 containerd[2128]: 2025-09-12 17:11:44.081 [INFO][5674] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h2wq2" 
WorkloadEndpoint="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" Sep 12 17:11:44.139152 containerd[2128]: 2025-09-12 17:11:44.086 [INFO][5674] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h2wq2" WorkloadEndpoint="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"92e449f8-4616-42f4-87f1-0de4ba32c288", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b", Pod:"coredns-7c65d6cfc9-h2wq2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b6628fdf1f", MAC:"d2:39:5f:dd:cb:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:44.139152 containerd[2128]: 2025-09-12 17:11:44.115 [INFO][5674] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h2wq2" WorkloadEndpoint="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" Sep 12 17:11:44.175829 containerd[2128]: time="2025-09-12T17:11:44.173418952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:44.175829 containerd[2128]: time="2025-09-12T17:11:44.173515156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:44.175829 containerd[2128]: time="2025-09-12T17:11:44.173552764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:44.191347 containerd[2128]: time="2025-09-12T17:11:44.189027856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:44.203895 containerd[2128]: time="2025-09-12T17:11:44.203682436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7964ddc67d-v8tpq,Uid:bc9ee936-954c-4af7-aedd-76c2de2ef89a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e\"" Sep 12 17:11:44.252382 containerd[2128]: time="2025-09-12T17:11:44.252326933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5dcf8cdb5c-s49l2,Uid:e2fad767-5bdb-4cdd-95b1-9b4c25a4c939,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411\"" Sep 12 17:11:44.276011 containerd[2128]: time="2025-09-12T17:11:44.275456465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:44.276011 containerd[2128]: time="2025-09-12T17:11:44.275548181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:44.276011 containerd[2128]: time="2025-09-12T17:11:44.275586065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:44.276011 containerd[2128]: time="2025-09-12T17:11:44.275795501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:44.447761 containerd[2128]: time="2025-09-12T17:11:44.447656958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2fhht,Uid:99059a22-90fb-418d-a2c0-7e943cbdb29d,Namespace:calico-system,Attempt:1,} returns sandbox id \"d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48\"" Sep 12 17:11:44.502580 systemd-networkd[1689]: cali662660f5d8e: Gained IPv6LL Sep 12 17:11:44.505088 systemd-networkd[1689]: cali76836b15a77: Gained IPv6LL Sep 12 17:11:44.536923 containerd[2128]: time="2025-09-12T17:11:44.536744910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h2wq2,Uid:92e449f8-4616-42f4-87f1-0de4ba32c288,Namespace:kube-system,Attempt:1,} returns sandbox id \"9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b\"" Sep 12 17:11:44.555491 containerd[2128]: time="2025-09-12T17:11:44.555385122Z" level=info msg="CreateContainer within sandbox \"9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:11:44.603238 systemd-networkd[1689]: cali11300661d05: Link UP Sep 12 17:11:44.605022 systemd-networkd[1689]: cali11300661d05: Gained carrier Sep 12 17:11:44.629920 systemd-networkd[1689]: calic522652ab8d: Gained IPv6LL Sep 12 17:11:44.719289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount268274993.mount: Deactivated successfully. 
Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.101 [INFO][5831] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0 calico-kube-controllers-5f8cc79964- calico-system 248d5f01-15dc-4b45-9fde-eec5e30019c2 1041 0 2025-09-12 17:11:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f8cc79964 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-22-180 calico-kube-controllers-5f8cc79964-sfrvv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali11300661d05 [] [] }} ContainerID="076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" Namespace="calico-system" Pod="calico-kube-controllers-5f8cc79964-sfrvv" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-" Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.113 [INFO][5831] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" Namespace="calico-system" Pod="calico-kube-controllers-5f8cc79964-sfrvv" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.380 [INFO][5926] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" HandleID="k8s-pod-network.076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" Workload="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.380 [INFO][5926] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" HandleID="k8s-pod-network.076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" Workload="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003691d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-180", "pod":"calico-kube-controllers-5f8cc79964-sfrvv", "timestamp":"2025-09-12 17:11:44.380313677 +0000 UTC"}, Hostname:"ip-172-31-22-180", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.381 [INFO][5926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.381 [INFO][5926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.381 [INFO][5926] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-180' Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.428 [INFO][5926] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" host="ip-172-31-22-180" Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.444 [INFO][5926] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-22-180" Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.471 [INFO][5926] ipam/ipam.go 511: Trying affinity for 192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.477 [INFO][5926] ipam/ipam.go 158: Attempting to load block cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.484 [INFO][5926] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.2.128/26 host="ip-172-31-22-180" Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.484 [INFO][5926] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.2.128/26 handle="k8s-pod-network.076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" host="ip-172-31-22-180" Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.488 [INFO][5926] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.508 [INFO][5926] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.2.128/26 handle="k8s-pod-network.076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" host="ip-172-31-22-180" Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.536 [INFO][5926] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.2.137/26] block=192.168.2.128/26 handle="k8s-pod-network.076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" host="ip-172-31-22-180" Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.538 [INFO][5926] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.2.137/26] handle="k8s-pod-network.076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" host="ip-172-31-22-180" Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.539 [INFO][5926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:11:44.728541 containerd[2128]: 2025-09-12 17:11:44.540 [INFO][5926] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.2.137/26] IPv6=[] ContainerID="076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" HandleID="k8s-pod-network.076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" Workload="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" Sep 12 17:11:44.730483 containerd[2128]: 2025-09-12 17:11:44.565 [INFO][5831] cni-plugin/k8s.go 418: Populated endpoint ContainerID="076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" Namespace="calico-system" Pod="calico-kube-controllers-5f8cc79964-sfrvv" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0", GenerateName:"calico-kube-controllers-5f8cc79964-", Namespace:"calico-system", SelfLink:"", UID:"248d5f01-15dc-4b45-9fde-eec5e30019c2", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f8cc79964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"", Pod:"calico-kube-controllers-5f8cc79964-sfrvv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11300661d05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:44.730483 containerd[2128]: 2025-09-12 17:11:44.578 [INFO][5831] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.137/32] ContainerID="076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" Namespace="calico-system" Pod="calico-kube-controllers-5f8cc79964-sfrvv" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" Sep 12 17:11:44.730483 containerd[2128]: 2025-09-12 17:11:44.578 [INFO][5831] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11300661d05 ContainerID="076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" Namespace="calico-system" Pod="calico-kube-controllers-5f8cc79964-sfrvv" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" Sep 12 17:11:44.730483 containerd[2128]: 2025-09-12 17:11:44.606 [INFO][5831] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" Namespace="calico-system" Pod="calico-kube-controllers-5f8cc79964-sfrvv" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" Sep 12 17:11:44.730483 containerd[2128]: 
2025-09-12 17:11:44.609 [INFO][5831] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" Namespace="calico-system" Pod="calico-kube-controllers-5f8cc79964-sfrvv" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0", GenerateName:"calico-kube-controllers-5f8cc79964-", Namespace:"calico-system", SelfLink:"", UID:"248d5f01-15dc-4b45-9fde-eec5e30019c2", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f8cc79964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a", Pod:"calico-kube-controllers-5f8cc79964-sfrvv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11300661d05", MAC:"0a:dc:30:ee:cc:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:44.730483 containerd[2128]: 2025-09-12 17:11:44.655 [INFO][5831] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a" Namespace="calico-system" Pod="calico-kube-controllers-5f8cc79964-sfrvv" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" Sep 12 17:11:44.740955 containerd[2128]: time="2025-09-12T17:11:44.740426035Z" level=info msg="CreateContainer within sandbox \"9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"15ae3dddab6108fb5d6107d909b996d24a5e6bfee80bbda0840dbd58667a8096\"" Sep 12 17:11:44.744985 containerd[2128]: time="2025-09-12T17:11:44.743801923Z" level=info msg="StartContainer for \"15ae3dddab6108fb5d6107d909b996d24a5e6bfee80bbda0840dbd58667a8096\"" Sep 12 17:11:44.885594 systemd-networkd[1689]: cali513ec739653: Gained IPv6LL Sep 12 17:11:44.901173 containerd[2128]: time="2025-09-12T17:11:44.895277768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:11:44.901173 containerd[2128]: time="2025-09-12T17:11:44.895391504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:11:44.901173 containerd[2128]: time="2025-09-12T17:11:44.895421000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:44.901173 containerd[2128]: time="2025-09-12T17:11:44.895599548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:11:45.127865 containerd[2128]: time="2025-09-12T17:11:45.126273953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 12 17:11:45.139743 containerd[2128]: time="2025-09-12T17:11:45.126487877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:45.160823 containerd[2128]: time="2025-09-12T17:11:45.160533713Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:45.170725 containerd[2128]: time="2025-09-12T17:11:45.168421985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:45.182050 containerd[2128]: time="2025-09-12T17:11:45.181734317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 8.664278851s" Sep 12 17:11:45.182212 containerd[2128]: time="2025-09-12T17:11:45.181946249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 12 17:11:45.193355 containerd[2128]: time="2025-09-12T17:11:45.191993093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:11:45.200718 containerd[2128]: time="2025-09-12T17:11:45.198770213Z" level=info msg="CreateContainer within sandbox \"d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:11:45.225414 containerd[2128]: time="2025-09-12T17:11:45.225128357Z" level=info msg="StartContainer for \"15ae3dddab6108fb5d6107d909b996d24a5e6bfee80bbda0840dbd58667a8096\" returns successfully" Sep 12 17:11:45.267836 containerd[2128]: time="2025-09-12T17:11:45.267720570Z" level=info msg="CreateContainer within sandbox \"d1a4c471ec008e78ffa557ed01a2932cdbbe02a888c8943f3b1a0c28c691cd2f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6ee0d194018bca6c5c5f346fd2f5198728c3e9be5bb6929a2a7dadbf7b8f8dbb\"" Sep 12 17:11:45.274156 containerd[2128]: time="2025-09-12T17:11:45.273504378Z" level=info msg="StartContainer for \"6ee0d194018bca6c5c5f346fd2f5198728c3e9be5bb6929a2a7dadbf7b8f8dbb\"" Sep 12 17:11:45.295376 containerd[2128]: time="2025-09-12T17:11:45.295294998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f8cc79964-sfrvv,Uid:248d5f01-15dc-4b45-9fde-eec5e30019c2,Namespace:calico-system,Attempt:1,} returns sandbox id \"076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a\"" Sep 12 17:11:45.456088 containerd[2128]: time="2025-09-12T17:11:45.455590807Z" level=info 
msg="StartContainer for \"6ee0d194018bca6c5c5f346fd2f5198728c3e9be5bb6929a2a7dadbf7b8f8dbb\" returns successfully" Sep 12 17:11:45.678578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4115686398.mount: Deactivated successfully. Sep 12 17:11:45.741815 kubelet[3602]: I0912 17:11:45.740059 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-h2wq2" podStartSLOduration=52.740037356 podStartE2EDuration="52.740037356s" podCreationTimestamp="2025-09-12 17:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:11:45.739980452 +0000 UTC m=+57.048719000" watchObservedRunningTime="2025-09-12 17:11:45.740037356 +0000 UTC m=+57.048775880" Sep 12 17:11:45.802167 kubelet[3602]: I0912 17:11:45.802069 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-64dfb5d9df-6gnlm" podStartSLOduration=2.052374763 podStartE2EDuration="12.802041284s" podCreationTimestamp="2025-09-12 17:11:33 +0000 UTC" firstStartedPulling="2025-09-12 17:11:34.438898376 +0000 UTC m=+45.747636912" lastFinishedPulling="2025-09-12 17:11:45.188564897 +0000 UTC m=+56.497303433" observedRunningTime="2025-09-12 17:11:45.800629508 +0000 UTC m=+57.109368080" watchObservedRunningTime="2025-09-12 17:11:45.802041284 +0000 UTC m=+57.110779820" Sep 12 17:11:46.101774 systemd-networkd[1689]: cali6b6628fdf1f: Gained IPv6LL Sep 12 17:11:46.572594 systemd[1]: Started sshd@8-172.31.22.180:22-147.75.109.163:53656.service - OpenSSH per-connection server daemon (147.75.109.163:53656). Sep 12 17:11:46.613613 systemd-networkd[1689]: cali11300661d05: Gained IPv6LL Sep 12 17:11:46.788234 sshd[6153]: Accepted publickey for core from 147.75.109.163 port 53656 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:11:46.794980 sshd[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:46.804933 systemd-logind[2103]: New session 9 of user core. Sep 12 17:11:46.810226 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:11:47.209093 sshd[6153]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:47.231306 systemd[1]: sshd@8-172.31.22.180:22-147.75.109.163:53656.service: Deactivated successfully. Sep 12 17:11:47.231361 systemd-logind[2103]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:11:47.245264 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:11:47.251202 systemd-logind[2103]: Removed session 9. Sep 12 17:11:48.334595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount671199671.mount: Deactivated successfully. 
Sep 12 17:11:48.951585 containerd[2128]: time="2025-09-12T17:11:48.951430212Z" level=info msg="StopPodSandbox for \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\"" Sep 12 17:11:49.094074 containerd[2128]: time="2025-09-12T17:11:49.093546477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:49.095077 containerd[2128]: time="2025-09-12T17:11:49.095020437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 12 17:11:49.098255 containerd[2128]: time="2025-09-12T17:11:49.098100729Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:49.106479 containerd[2128]: time="2025-09-12T17:11:49.106372773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:49.122661 containerd[2128]: time="2025-09-12T17:11:49.121887429Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 3.929820392s" Sep 12 17:11:49.122661 containerd[2128]: time="2025-09-12T17:11:49.121962285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 12 17:11:49.129538 containerd[2128]: time="2025-09-12T17:11:49.128747181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:11:49.138881 containerd[2128]: time="2025-09-12T17:11:49.138825657Z" level=info msg="CreateContainer within sandbox \"8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:11:49.179991 containerd[2128]: time="2025-09-12T17:11:49.179933721Z" level=info msg="CreateContainer within sandbox \"8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0155632fdd28bf0765e990f735d58eff0487fa01681efc07a77c6fd28daec1ed\"" Sep 12 17:11:49.183120 containerd[2128]: time="2025-09-12T17:11:49.182836857Z" level=info msg="StartContainer for \"0155632fdd28bf0765e990f735d58eff0487fa01681efc07a77c6fd28daec1ed\"" Sep 12 17:11:49.231266 containerd[2128]: 2025-09-12 17:11:49.078 [WARNING][6192] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0", GenerateName:"calico-apiserver-7964ddc67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc9ee936-954c-4af7-aedd-76c2de2ef89a", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7964ddc67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e", Pod:"calico-apiserver-7964ddc67d-v8tpq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali662660f5d8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:49.231266 containerd[2128]: 2025-09-12 17:11:49.078 [INFO][6192] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Sep 12 17:11:49.231266 containerd[2128]: 2025-09-12 17:11:49.078 [INFO][6192] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" iface="eth0" netns="" Sep 12 17:11:49.231266 containerd[2128]: 2025-09-12 17:11:49.078 [INFO][6192] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Sep 12 17:11:49.231266 containerd[2128]: 2025-09-12 17:11:49.078 [INFO][6192] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Sep 12 17:11:49.231266 containerd[2128]: 2025-09-12 17:11:49.194 [INFO][6203] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" HandleID="k8s-pod-network.8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:11:49.231266 containerd[2128]: 2025-09-12 17:11:49.194 [INFO][6203] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:49.231266 containerd[2128]: 2025-09-12 17:11:49.194 [INFO][6203] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:49.231266 containerd[2128]: 2025-09-12 17:11:49.213 [WARNING][6203] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" HandleID="k8s-pod-network.8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:11:49.231266 containerd[2128]: 2025-09-12 17:11:49.214 [INFO][6203] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" HandleID="k8s-pod-network.8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:11:49.231266 containerd[2128]: 2025-09-12 17:11:49.219 [INFO][6203] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:49.231266 containerd[2128]: 2025-09-12 17:11:49.224 [INFO][6192] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Sep 12 17:11:49.231266 containerd[2128]: time="2025-09-12T17:11:49.230603325Z" level=info msg="TearDown network for sandbox \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\" successfully" Sep 12 17:11:49.231266 containerd[2128]: time="2025-09-12T17:11:49.230667033Z" level=info msg="StopPodSandbox for \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\" returns successfully" Sep 12 17:11:49.234371 containerd[2128]: time="2025-09-12T17:11:49.233929269Z" level=info msg="RemovePodSandbox for \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\"" Sep 12 17:11:49.234371 containerd[2128]: time="2025-09-12T17:11:49.234079641Z" level=info msg="Forcibly stopping sandbox \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\"" Sep 12 17:11:49.319948 systemd[1]: run-containerd-runc-k8s.io-0155632fdd28bf0765e990f735d58eff0487fa01681efc07a77c6fd28daec1ed-runc.IP20xA.mount: Deactivated successfully. Sep 12 17:11:49.442388 containerd[2128]: time="2025-09-12T17:11:49.441679450Z" level=info msg="StartContainer for \"0155632fdd28bf0765e990f735d58eff0487fa01681efc07a77c6fd28daec1ed\" returns successfully" Sep 12 17:11:49.457062 containerd[2128]: 2025-09-12 17:11:49.363 [WARNING][6222] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0", GenerateName:"calico-apiserver-7964ddc67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc9ee936-954c-4af7-aedd-76c2de2ef89a", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7964ddc67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e", Pod:"calico-apiserver-7964ddc67d-v8tpq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali662660f5d8e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:49.457062 containerd[2128]: 2025-09-12 17:11:49.364 [INFO][6222] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Sep 12 17:11:49.457062 containerd[2128]: 2025-09-12 17:11:49.364 [INFO][6222] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" iface="eth0" netns="" Sep 12 17:11:49.457062 containerd[2128]: 2025-09-12 17:11:49.365 [INFO][6222] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Sep 12 17:11:49.457062 containerd[2128]: 2025-09-12 17:11:49.366 [INFO][6222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Sep 12 17:11:49.457062 containerd[2128]: 2025-09-12 17:11:49.427 [INFO][6248] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" HandleID="k8s-pod-network.8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:11:49.457062 containerd[2128]: 2025-09-12 17:11:49.427 [INFO][6248] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:49.457062 containerd[2128]: 2025-09-12 17:11:49.427 [INFO][6248] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:49.457062 containerd[2128]: 2025-09-12 17:11:49.445 [WARNING][6248] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" HandleID="k8s-pod-network.8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:11:49.457062 containerd[2128]: 2025-09-12 17:11:49.445 [INFO][6248] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" HandleID="k8s-pod-network.8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:11:49.457062 containerd[2128]: 2025-09-12 17:11:49.449 [INFO][6248] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:49.457062 containerd[2128]: 2025-09-12 17:11:49.452 [INFO][6222] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e" Sep 12 17:11:49.459336 containerd[2128]: time="2025-09-12T17:11:49.457095634Z" level=info msg="TearDown network for sandbox \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\" successfully" Sep 12 17:11:49.468635 containerd[2128]: time="2025-09-12T17:11:49.468313378Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:11:49.468635 containerd[2128]: time="2025-09-12T17:11:49.468420598Z" level=info msg="RemovePodSandbox \"8b5ae5d35793b8f91454198b186f2547b783137119017d2d42351f7263fab41e\" returns successfully" Sep 12 17:11:49.470969 containerd[2128]: time="2025-09-12T17:11:49.470900374Z" level=info msg="StopPodSandbox for \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\"" Sep 12 17:11:49.603278 ntpd[2090]: Listen normally on 7 vxlan.calico 192.168.2.128:123 Sep 12 17:11:49.609577 ntpd[2090]: 12 Sep 17:11:49 ntpd[2090]: Listen normally on 7 vxlan.calico 192.168.2.128:123 Sep 12 17:11:49.609577 ntpd[2090]: 12 Sep 17:11:49 ntpd[2090]: Listen normally on 8 calid2956ff9de2 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 12 17:11:49.609577 ntpd[2090]: 12 Sep 17:11:49 ntpd[2090]: Listen normally on 9 calicae9bbab4a1 [fe80::ecee:eeff:feee:eeee%6]:123 Sep 12 17:11:49.609577 ntpd[2090]: 12 Sep 17:11:49 ntpd[2090]: Listen normally on 10 vxlan.calico [fe80::6483:27ff:fe1e:5f1f%7]:123 Sep 12 17:11:49.609577 ntpd[2090]: 12 Sep 17:11:49 ntpd[2090]: Listen normally on 11 cali76836b15a77 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 17:11:49.609577 ntpd[2090]: 12 Sep 17:11:49 ntpd[2090]: Listen normally on 12 calic522652ab8d [fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 17:11:49.609577 ntpd[2090]: 12 Sep 17:11:49 ntpd[2090]: Listen normally on 13 cali662660f5d8e [fe80::ecee:eeff:feee:eeee%12]:123 Sep 12 17:11:49.609577 ntpd[2090]: 12 Sep 17:11:49 ntpd[2090]: Listen normally on 14 cali513ec739653 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 12 17:11:49.609577 ntpd[2090]: 12 Sep 17:11:49 ntpd[2090]: Listen normally on 15 cali6b6628fdf1f [fe80::ecee:eeff:feee:eeee%14]:123 Sep 12 17:11:49.609577 ntpd[2090]: 12 Sep 17:11:49 ntpd[2090]: Listen normally on 16 cali11300661d05 [fe80::ecee:eeff:feee:eeee%15]:123 Sep 12 17:11:49.605831 ntpd[2090]: Listen normally on 8 calid2956ff9de2 [fe80::ecee:eeff:feee:eeee%5]:123 Sep 12 17:11:49.605917 ntpd[2090]: Listen normally on 9 calicae9bbab4a1 [fe80::ecee:eeff:feee:eeee%6]:123 Sep 12 
17:11:49.605988 ntpd[2090]: Listen normally on 10 vxlan.calico [fe80::6483:27ff:fe1e:5f1f%7]:123 Sep 12 17:11:49.606064 ntpd[2090]: Listen normally on 11 cali76836b15a77 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 17:11:49.606139 ntpd[2090]: Listen normally on 12 calic522652ab8d [fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 17:11:49.606205 ntpd[2090]: Listen normally on 13 cali662660f5d8e [fe80::ecee:eeff:feee:eeee%12]:123 Sep 12 17:11:49.606271 ntpd[2090]: Listen normally on 14 cali513ec739653 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 12 17:11:49.606337 ntpd[2090]: Listen normally on 15 cali6b6628fdf1f [fe80::ecee:eeff:feee:eeee%14]:123 Sep 12 17:11:49.606406 ntpd[2090]: Listen normally on 16 cali11300661d05 [fe80::ecee:eeff:feee:eeee%15]:123 Sep 12 17:11:49.673155 containerd[2128]: 2025-09-12 17:11:49.568 [WARNING][6271] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"92e449f8-4616-42f4-87f1-0de4ba32c288", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b", Pod:"coredns-7c65d6cfc9-h2wq2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b6628fdf1f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:49.673155 containerd[2128]: 2025-09-12 17:11:49.568 [INFO][6271] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Sep 12 17:11:49.673155 containerd[2128]: 2025-09-12 17:11:49.568 [INFO][6271] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" iface="eth0" netns="" Sep 12 17:11:49.673155 containerd[2128]: 2025-09-12 17:11:49.568 [INFO][6271] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Sep 12 17:11:49.673155 containerd[2128]: 2025-09-12 17:11:49.568 [INFO][6271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Sep 12 17:11:49.673155 containerd[2128]: 2025-09-12 17:11:49.651 [INFO][6279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" HandleID="k8s-pod-network.da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" Sep 12 17:11:49.673155 containerd[2128]: 2025-09-12 17:11:49.651 [INFO][6279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:49.673155 containerd[2128]: 2025-09-12 17:11:49.652 [INFO][6279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:49.673155 containerd[2128]: 2025-09-12 17:11:49.664 [WARNING][6279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" HandleID="k8s-pod-network.da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" Sep 12 17:11:49.673155 containerd[2128]: 2025-09-12 17:11:49.664 [INFO][6279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" HandleID="k8s-pod-network.da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" Sep 12 17:11:49.673155 containerd[2128]: 2025-09-12 17:11:49.667 [INFO][6279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:49.673155 containerd[2128]: 2025-09-12 17:11:49.670 [INFO][6271] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Sep 12 17:11:49.673983 containerd[2128]: time="2025-09-12T17:11:49.673857731Z" level=info msg="TearDown network for sandbox \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\" successfully" Sep 12 17:11:49.673983 containerd[2128]: time="2025-09-12T17:11:49.673918835Z" level=info msg="StopPodSandbox for \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\" returns successfully" Sep 12 17:11:49.675365 containerd[2128]: time="2025-09-12T17:11:49.674894567Z" level=info msg="RemovePodSandbox for \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\"" Sep 12 17:11:49.675365 containerd[2128]: time="2025-09-12T17:11:49.674945399Z" level=info msg="Forcibly stopping sandbox \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\"" Sep 12 17:11:49.801625 kubelet[3602]: I0912 17:11:49.797665 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-zhfpt" podStartSLOduration=26.862358428 podStartE2EDuration="34.79763424s" podCreationTimestamp="2025-09-12 17:11:15 +0000 UTC" firstStartedPulling="2025-09-12 17:11:41.191742793 +0000 UTC m=+52.500481329" lastFinishedPulling="2025-09-12 17:11:49.127018617 +0000 UTC m=+60.435757141" observedRunningTime="2025-09-12 17:11:49.79693206 +0000 UTC m=+61.105670608" watchObservedRunningTime="2025-09-12 17:11:49.79763424 +0000 UTC m=+61.106372776" Sep 12 17:11:49.853383 containerd[2128]: 2025-09-12 17:11:49.739 [WARNING][6295] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"92e449f8-4616-42f4-87f1-0de4ba32c288", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"9c1c7224f8547492a186b93bde1c4d501472f88ce1a9d17c7c63a8f770aacd3b", Pod:"coredns-7c65d6cfc9-h2wq2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6b6628fdf1f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:49.853383 containerd[2128]: 2025-09-12 17:11:49.740 [INFO][6295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Sep 12 17:11:49.853383 containerd[2128]: 2025-09-12 17:11:49.740 [INFO][6295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" iface="eth0" netns="" Sep 12 17:11:49.853383 containerd[2128]: 2025-09-12 17:11:49.740 [INFO][6295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Sep 12 17:11:49.853383 containerd[2128]: 2025-09-12 17:11:49.740 [INFO][6295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Sep 12 17:11:49.853383 containerd[2128]: 2025-09-12 17:11:49.825 [INFO][6302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" HandleID="k8s-pod-network.da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" Sep 12 17:11:49.853383 containerd[2128]: 2025-09-12 17:11:49.825 [INFO][6302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:49.853383 containerd[2128]: 2025-09-12 17:11:49.825 [INFO][6302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:49.853383 containerd[2128]: 2025-09-12 17:11:49.843 [WARNING][6302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" HandleID="k8s-pod-network.da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" Sep 12 17:11:49.853383 containerd[2128]: 2025-09-12 17:11:49.843 [INFO][6302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" HandleID="k8s-pod-network.da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h2wq2-eth0" Sep 12 17:11:49.853383 containerd[2128]: 2025-09-12 17:11:49.847 [INFO][6302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:49.853383 containerd[2128]: 2025-09-12 17:11:49.850 [INFO][6295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b" Sep 12 17:11:49.854480 containerd[2128]: time="2025-09-12T17:11:49.853420536Z" level=info msg="TearDown network for sandbox \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\" successfully" Sep 12 17:11:49.878088 containerd[2128]: time="2025-09-12T17:11:49.877830708Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
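
The kubelet pod_startup_latency_tracker line above reports two durations for goldmane-7988f88666-zhfpt: podStartE2EDuration, which here equals the span from podCreationTimestamp to watchObservedRunningTime, and podStartSLOduration, the same span with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted, since pull time does not count against the startup SLO. A minimal Go sketch of the arithmetic, using the timestamps from the log line itself (the helper name mustParse is ours, not kubelet's):

package main

import (
	"fmt"
	"time"
)

// mustParse reads timestamps in the default Go time.Time format used by the
// kubelet log line, e.g. "2025-09-12 17:11:15 +0000 UTC".
func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-12 17:11:15 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2025-09-12 17:11:41.191742793 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-09-12 17:11:49.127018617 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2025-09-12 17:11:49.79763424 +0000 UTC")    // watchObservedRunningTime

	e2e := running.Sub(created) // 34.79763424s, matching podStartE2EDuration
	// ~26.8623584s; the tracker reads monotonic clocks (the m=+... offsets),
	// which accounts for the last-digit difference from the logged
	// podStartSLOduration=26.862358428.
	slo := e2e - lastPull.Sub(firstPull)

	fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
}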
Sep 12 17:11:49.878088 containerd[2128]: time="2025-09-12T17:11:49.877933452Z" level=info msg="RemovePodSandbox \"da73d394ff1eaa323bf2a7f3f1255e937768f3dbdf70e3076349d88185f8d85b\" returns successfully" Sep 12 17:11:49.880488 containerd[2128]: time="2025-09-12T17:11:49.880007724Z" level=info msg="StopPodSandbox for \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\"" Sep 12 17:11:50.135534 containerd[2128]: 2025-09-12 17:11:50.056 [WARNING][6337] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0", GenerateName:"calico-apiserver-5dcf8cdb5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2fad767-5bdb-4cdd-95b1-9b4c25a4c939", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcf8cdb5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411", Pod:"calico-apiserver-5dcf8cdb5c-s49l2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic522652ab8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:50.135534 containerd[2128]: 2025-09-12 17:11:50.058 [INFO][6337] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Sep 12 17:11:50.135534 containerd[2128]: 2025-09-12 17:11:50.058 [INFO][6337] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" iface="eth0" netns="" Sep 12 17:11:50.135534 containerd[2128]: 2025-09-12 17:11:50.058 [INFO][6337] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Sep 12 17:11:50.135534 containerd[2128]: 2025-09-12 17:11:50.058 [INFO][6337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Sep 12 17:11:50.135534 containerd[2128]: 2025-09-12 17:11:50.112 [INFO][6347] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" HandleID="k8s-pod-network.6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Workload="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" Sep 12 17:11:50.135534 containerd[2128]: 2025-09-12 17:11:50.112 [INFO][6347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:50.135534 containerd[2128]: 2025-09-12 17:11:50.113 [INFO][6347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:50.135534 containerd[2128]: 2025-09-12 17:11:50.126 [WARNING][6347] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" HandleID="k8s-pod-network.6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Workload="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" Sep 12 17:11:50.135534 containerd[2128]: 2025-09-12 17:11:50.126 [INFO][6347] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" HandleID="k8s-pod-network.6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Workload="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" Sep 12 17:11:50.135534 containerd[2128]: 2025-09-12 17:11:50.130 [INFO][6347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:50.135534 containerd[2128]: 2025-09-12 17:11:50.132 [INFO][6337] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Sep 12 17:11:50.137555 containerd[2128]: time="2025-09-12T17:11:50.136787446Z" level=info msg="TearDown network for sandbox \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\" successfully" Sep 12 17:11:50.137555 containerd[2128]: time="2025-09-12T17:11:50.136831438Z" level=info msg="StopPodSandbox for \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\" returns successfully" Sep 12 17:11:50.139090 containerd[2128]: time="2025-09-12T17:11:50.138553354Z" level=info msg="RemovePodSandbox for \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\"" Sep 12 17:11:50.139090 containerd[2128]: time="2025-09-12T17:11:50.138604282Z" level=info msg="Forcibly stopping sandbox \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\"" Sep 12 17:11:50.293745 containerd[2128]: 2025-09-12 17:11:50.220 [WARNING][6361] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0", GenerateName:"calico-apiserver-5dcf8cdb5c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2fad767-5bdb-4cdd-95b1-9b4c25a4c939", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5dcf8cdb5c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411", Pod:"calico-apiserver-5dcf8cdb5c-s49l2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic522652ab8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:50.293745 containerd[2128]: 2025-09-12 17:11:50.221 [INFO][6361] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Sep 12 17:11:50.293745 containerd[2128]: 2025-09-12 17:11:50.221 [INFO][6361] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" iface="eth0" netns="" Sep 12 17:11:50.293745 containerd[2128]: 2025-09-12 17:11:50.221 [INFO][6361] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Sep 12 17:11:50.293745 containerd[2128]: 2025-09-12 17:11:50.221 [INFO][6361] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Sep 12 17:11:50.293745 containerd[2128]: 2025-09-12 17:11:50.267 [INFO][6368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" HandleID="k8s-pod-network.6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Workload="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" Sep 12 17:11:50.293745 containerd[2128]: 2025-09-12 17:11:50.267 [INFO][6368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:50.293745 containerd[2128]: 2025-09-12 17:11:50.267 [INFO][6368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:50.293745 containerd[2128]: 2025-09-12 17:11:50.284 [WARNING][6368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" HandleID="k8s-pod-network.6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Workload="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" Sep 12 17:11:50.293745 containerd[2128]: 2025-09-12 17:11:50.284 [INFO][6368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" HandleID="k8s-pod-network.6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Workload="ip--172--31--22--180-k8s-calico--apiserver--5dcf8cdb5c--s49l2-eth0" Sep 12 17:11:50.293745 containerd[2128]: 2025-09-12 17:11:50.287 [INFO][6368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:50.293745 containerd[2128]: 2025-09-12 17:11:50.290 [INFO][6361] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32" Sep 12 17:11:50.293745 containerd[2128]: time="2025-09-12T17:11:50.293331791Z" level=info msg="TearDown network for sandbox \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\" successfully" Sep 12 17:11:50.300208 containerd[2128]: time="2025-09-12T17:11:50.300138383Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:11:50.300359 containerd[2128]: time="2025-09-12T17:11:50.300242699Z" level=info msg="RemovePodSandbox \"6df470d1237473d8dabed8b0f40d68647feff62dcce7e29bdc3b2b57485c9d32\" returns successfully" Sep 12 17:11:50.300916 containerd[2128]: time="2025-09-12T17:11:50.300873359Z" level=info msg="StopPodSandbox for \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\"" Sep 12 17:11:50.460431 containerd[2128]: 2025-09-12 17:11:50.371 [WARNING][6382] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"8ec98a9d-a44d-4dce-aecc-5307cd4bde54", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4", Pod:"goldmane-7988f88666-zhfpt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicae9bbab4a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:50.460431 containerd[2128]: 2025-09-12 17:11:50.372 [INFO][6382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Sep 12 17:11:50.460431 containerd[2128]: 2025-09-12 17:11:50.372 [INFO][6382] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" iface="eth0" netns="" Sep 12 17:11:50.460431 containerd[2128]: 2025-09-12 17:11:50.372 [INFO][6382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Sep 12 17:11:50.460431 containerd[2128]: 2025-09-12 17:11:50.372 [INFO][6382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Sep 12 17:11:50.460431 containerd[2128]: 2025-09-12 17:11:50.418 [INFO][6389] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" HandleID="k8s-pod-network.ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Workload="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" Sep 12 17:11:50.460431 containerd[2128]: 2025-09-12 17:11:50.418 [INFO][6389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:50.460431 containerd[2128]: 2025-09-12 17:11:50.418 [INFO][6389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:50.460431 containerd[2128]: 2025-09-12 17:11:50.442 [WARNING][6389] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" HandleID="k8s-pod-network.ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Workload="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" Sep 12 17:11:50.460431 containerd[2128]: 2025-09-12 17:11:50.442 [INFO][6389] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" HandleID="k8s-pod-network.ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Workload="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" Sep 12 17:11:50.460431 containerd[2128]: 2025-09-12 17:11:50.447 [INFO][6389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:50.460431 containerd[2128]: 2025-09-12 17:11:50.456 [INFO][6382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Sep 12 17:11:50.463655 containerd[2128]: time="2025-09-12T17:11:50.460498463Z" level=info msg="TearDown network for sandbox \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\" successfully" Sep 12 17:11:50.463655 containerd[2128]: time="2025-09-12T17:11:50.460656095Z" level=info msg="StopPodSandbox for \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\" returns successfully" Sep 12 17:11:50.463655 containerd[2128]: time="2025-09-12T17:11:50.462928139Z" level=info msg="RemovePodSandbox for \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\"" Sep 12 17:11:50.463655 containerd[2128]: time="2025-09-12T17:11:50.462979211Z" level=info msg="Forcibly stopping sandbox \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\"" Sep 12 17:11:50.710398 containerd[2128]: 2025-09-12 17:11:50.571 [WARNING][6403] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"8ec98a9d-a44d-4dce-aecc-5307cd4bde54", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"8630903ab5dda7add8754621f62b74408f4a3c1fecc39d20db7807a64eb389b4", Pod:"goldmane-7988f88666-zhfpt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicae9bbab4a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:50.710398 containerd[2128]: 2025-09-12 17:11:50.571 [INFO][6403] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Sep 12 17:11:50.710398 containerd[2128]: 2025-09-12 17:11:50.571 [INFO][6403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" iface="eth0" netns="" Sep 12 17:11:50.710398 containerd[2128]: 2025-09-12 17:11:50.571 [INFO][6403] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Sep 12 17:11:50.710398 containerd[2128]: 2025-09-12 17:11:50.571 [INFO][6403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Sep 12 17:11:50.710398 containerd[2128]: 2025-09-12 17:11:50.657 [INFO][6410] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" HandleID="k8s-pod-network.ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Workload="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" Sep 12 17:11:50.710398 containerd[2128]: 2025-09-12 17:11:50.661 [INFO][6410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:50.710398 containerd[2128]: 2025-09-12 17:11:50.661 [INFO][6410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:50.710398 containerd[2128]: 2025-09-12 17:11:50.691 [WARNING][6410] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" HandleID="k8s-pod-network.ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Workload="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" Sep 12 17:11:50.710398 containerd[2128]: 2025-09-12 17:11:50.692 [INFO][6410] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" HandleID="k8s-pod-network.ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Workload="ip--172--31--22--180-k8s-goldmane--7988f88666--zhfpt-eth0" Sep 12 17:11:50.710398 containerd[2128]: 2025-09-12 17:11:50.697 [INFO][6410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:50.710398 containerd[2128]: 2025-09-12 17:11:50.706 [INFO][6403] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c" Sep 12 17:11:50.714148 containerd[2128]: time="2025-09-12T17:11:50.710446093Z" level=info msg="TearDown network for sandbox \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\" successfully" Sep 12 17:11:50.728921 containerd[2128]: time="2025-09-12T17:11:50.728865025Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:11:50.729341 containerd[2128]: time="2025-09-12T17:11:50.729300337Z" level=info msg="RemovePodSandbox \"ea4637a83b83f74849e2466bb75c1b13530cd41e92dbb6e475fd4254a35a530c\" returns successfully" Sep 12 17:11:50.731036 containerd[2128]: time="2025-09-12T17:11:50.730932241Z" level=info msg="StopPodSandbox for \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\"" Sep 12 17:11:51.092121 containerd[2128]: 2025-09-12 17:11:50.902 [WARNING][6424] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0", GenerateName:"calico-kube-controllers-5f8cc79964-", Namespace:"calico-system", SelfLink:"", UID:"248d5f01-15dc-4b45-9fde-eec5e30019c2", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f8cc79964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a", Pod:"calico-kube-controllers-5f8cc79964-sfrvv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11300661d05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:51.092121 containerd[2128]: 2025-09-12 17:11:50.903 [INFO][6424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Sep 12 17:11:51.092121 containerd[2128]: 2025-09-12 17:11:50.903 [INFO][6424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" iface="eth0" netns="" Sep 12 17:11:51.092121 containerd[2128]: 2025-09-12 17:11:50.903 [INFO][6424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Sep 12 17:11:51.092121 containerd[2128]: 2025-09-12 17:11:50.903 [INFO][6424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Sep 12 17:11:51.092121 containerd[2128]: 2025-09-12 17:11:51.038 [INFO][6451] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" HandleID="k8s-pod-network.43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Workload="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" Sep 12 17:11:51.092121 containerd[2128]: 2025-09-12 17:11:51.038 [INFO][6451] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:51.092121 containerd[2128]: 2025-09-12 17:11:51.038 [INFO][6451] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:51.092121 containerd[2128]: 2025-09-12 17:11:51.073 [WARNING][6451] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" HandleID="k8s-pod-network.43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Workload="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" Sep 12 17:11:51.092121 containerd[2128]: 2025-09-12 17:11:51.073 [INFO][6451] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" HandleID="k8s-pod-network.43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Workload="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" Sep 12 17:11:51.092121 containerd[2128]: 2025-09-12 17:11:51.077 [INFO][6451] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:51.092121 containerd[2128]: 2025-09-12 17:11:51.084 [INFO][6424] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Sep 12 17:11:51.094961 containerd[2128]: time="2025-09-12T17:11:51.094880255Z" level=info msg="TearDown network for sandbox \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\" successfully" Sep 12 17:11:51.094961 containerd[2128]: time="2025-09-12T17:11:51.094955159Z" level=info msg="StopPodSandbox for \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\" returns successfully" Sep 12 17:11:51.095865 containerd[2128]: time="2025-09-12T17:11:51.095778719Z" level=info msg="RemovePodSandbox for \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\"" Sep 12 17:11:51.095865 containerd[2128]: time="2025-09-12T17:11:51.095841167Z" level=info msg="Forcibly stopping sandbox \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\"" Sep 12 17:11:51.298625 containerd[2128]: 2025-09-12 17:11:51.201 [WARNING][6469] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0", GenerateName:"calico-kube-controllers-5f8cc79964-", Namespace:"calico-system", SelfLink:"", UID:"248d5f01-15dc-4b45-9fde-eec5e30019c2", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f8cc79964", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a", Pod:"calico-kube-controllers-5f8cc79964-sfrvv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali11300661d05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:51.298625 containerd[2128]: 2025-09-12 17:11:51.202 [INFO][6469] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Sep 12 17:11:51.298625 containerd[2128]: 2025-09-12 17:11:51.202 [INFO][6469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" iface="eth0" netns="" Sep 12 17:11:51.298625 containerd[2128]: 2025-09-12 17:11:51.202 [INFO][6469] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Sep 12 17:11:51.298625 containerd[2128]: 2025-09-12 17:11:51.202 [INFO][6469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Sep 12 17:11:51.298625 containerd[2128]: 2025-09-12 17:11:51.265 [INFO][6476] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" HandleID="k8s-pod-network.43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Workload="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" Sep 12 17:11:51.298625 containerd[2128]: 2025-09-12 17:11:51.266 [INFO][6476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:51.298625 containerd[2128]: 2025-09-12 17:11:51.266 [INFO][6476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:51.298625 containerd[2128]: 2025-09-12 17:11:51.288 [WARNING][6476] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" HandleID="k8s-pod-network.43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Workload="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" Sep 12 17:11:51.298625 containerd[2128]: 2025-09-12 17:11:51.288 [INFO][6476] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" HandleID="k8s-pod-network.43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Workload="ip--172--31--22--180-k8s-calico--kube--controllers--5f8cc79964--sfrvv-eth0" Sep 12 17:11:51.298625 containerd[2128]: 2025-09-12 17:11:51.292 [INFO][6476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:51.298625 containerd[2128]: 2025-09-12 17:11:51.295 [INFO][6469] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc" Sep 12 17:11:51.300315 containerd[2128]: time="2025-09-12T17:11:51.298680480Z" level=info msg="TearDown network for sandbox \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\" successfully" Sep 12 17:11:51.309328 containerd[2128]: time="2025-09-12T17:11:51.309262740Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:11:51.309934 containerd[2128]: time="2025-09-12T17:11:51.309367812Z" level=info msg="RemovePodSandbox \"43c40de9d454620545d4f7b0e7e0b74b875b96ccd911764fd27193e3d2ce84cc\" returns successfully" Sep 12 17:11:51.310887 containerd[2128]: time="2025-09-12T17:11:51.310570140Z" level=info msg="StopPodSandbox for \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\"" Sep 12 17:11:51.547312 containerd[2128]: 2025-09-12 17:11:51.437 [WARNING][6490] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0", GenerateName:"calico-apiserver-7964ddc67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"831c8b21-3a30-4e09-bfba-cb39dd0935d8", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7964ddc67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc", Pod:"calico-apiserver-7964ddc67d-2fn9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali76836b15a77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:51.547312 containerd[2128]: 2025-09-12 17:11:51.437 [INFO][6490] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Sep 12 17:11:51.547312 containerd[2128]: 2025-09-12 17:11:51.437 [INFO][6490] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" iface="eth0" netns="" Sep 12 17:11:51.547312 containerd[2128]: 2025-09-12 17:11:51.437 [INFO][6490] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Sep 12 17:11:51.547312 containerd[2128]: 2025-09-12 17:11:51.437 [INFO][6490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Sep 12 17:11:51.547312 containerd[2128]: 2025-09-12 17:11:51.513 [INFO][6498] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" HandleID="k8s-pod-network.9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:11:51.547312 containerd[2128]: 2025-09-12 17:11:51.513 [INFO][6498] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:51.547312 containerd[2128]: 2025-09-12 17:11:51.513 [INFO][6498] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:51.547312 containerd[2128]: 2025-09-12 17:11:51.532 [WARNING][6498] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" HandleID="k8s-pod-network.9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:11:51.547312 containerd[2128]: 2025-09-12 17:11:51.532 [INFO][6498] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" HandleID="k8s-pod-network.9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:11:51.547312 containerd[2128]: 2025-09-12 17:11:51.536 [INFO][6498] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:51.547312 containerd[2128]: 2025-09-12 17:11:51.541 [INFO][6490] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Sep 12 17:11:51.549458 containerd[2128]: time="2025-09-12T17:11:51.547378237Z" level=info msg="TearDown network for sandbox \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\" successfully" Sep 12 17:11:51.549458 containerd[2128]: time="2025-09-12T17:11:51.547440289Z" level=info msg="StopPodSandbox for \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\" returns successfully" Sep 12 17:11:51.549458 containerd[2128]: time="2025-09-12T17:11:51.548303185Z" level=info msg="RemovePodSandbox for \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\"" Sep 12 17:11:51.549458 containerd[2128]: time="2025-09-12T17:11:51.548366353Z" level=info msg="Forcibly stopping sandbox \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\"" Sep 12 17:11:51.765738 containerd[2128]: 2025-09-12 17:11:51.644 [WARNING][6512] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0", GenerateName:"calico-apiserver-7964ddc67d-", Namespace:"calico-apiserver", SelfLink:"", UID:"831c8b21-3a30-4e09-bfba-cb39dd0935d8", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7964ddc67d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc", Pod:"calico-apiserver-7964ddc67d-2fn9n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali76836b15a77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:51.765738 containerd[2128]: 2025-09-12 17:11:51.644 [INFO][6512] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Sep 12 17:11:51.765738 containerd[2128]: 2025-09-12 17:11:51.644 [INFO][6512] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" iface="eth0" netns="" Sep 12 17:11:51.765738 containerd[2128]: 2025-09-12 17:11:51.644 [INFO][6512] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Sep 12 17:11:51.765738 containerd[2128]: 2025-09-12 17:11:51.644 [INFO][6512] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Sep 12 17:11:51.765738 containerd[2128]: 2025-09-12 17:11:51.727 [INFO][6519] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" HandleID="k8s-pod-network.9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:11:51.765738 containerd[2128]: 2025-09-12 17:11:51.727 [INFO][6519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:51.765738 containerd[2128]: 2025-09-12 17:11:51.728 [INFO][6519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:51.765738 containerd[2128]: 2025-09-12 17:11:51.745 [WARNING][6519] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" HandleID="k8s-pod-network.9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:11:51.765738 containerd[2128]: 2025-09-12 17:11:51.745 [INFO][6519] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" HandleID="k8s-pod-network.9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:11:51.765738 containerd[2128]: 2025-09-12 17:11:51.748 [INFO][6519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:51.765738 containerd[2128]: 2025-09-12 17:11:51.755 [INFO][6512] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e" Sep 12 17:11:51.765738 containerd[2128]: time="2025-09-12T17:11:51.763528418Z" level=info msg="TearDown network for sandbox \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\" successfully" Sep 12 17:11:51.775235 containerd[2128]: time="2025-09-12T17:11:51.775072814Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:11:51.775235 containerd[2128]: time="2025-09-12T17:11:51.775173002Z" level=info msg="RemovePodSandbox \"9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e\" returns successfully" Sep 12 17:11:51.780570 containerd[2128]: time="2025-09-12T17:11:51.780500834Z" level=info msg="StopPodSandbox for \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\"" Sep 12 17:11:52.218310 containerd[2128]: 2025-09-12 17:11:51.939 [WARNING][6539] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3783ba1d-f77e-47c0-89fd-9efbe6435e26", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239", Pod:"coredns-7c65d6cfc9-h298k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2956ff9de2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:52.218310 containerd[2128]: 2025-09-12 17:11:51.943 [INFO][6539] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Sep 12 17:11:52.218310 containerd[2128]: 2025-09-12 17:11:51.943 [INFO][6539] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" iface="eth0" netns="" Sep 12 17:11:52.218310 containerd[2128]: 2025-09-12 17:11:51.943 [INFO][6539] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Sep 12 17:11:52.218310 containerd[2128]: 2025-09-12 17:11:51.943 [INFO][6539] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Sep 12 17:11:52.218310 containerd[2128]: 2025-09-12 17:11:52.071 [INFO][6547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" HandleID="k8s-pod-network.f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" Sep 12 17:11:52.218310 containerd[2128]: 2025-09-12 17:11:52.080 [INFO][6547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:52.218310 containerd[2128]: 2025-09-12 17:11:52.080 [INFO][6547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
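
Every teardown in this log brackets its address release between "About to acquire host-wide IPAM lock." / "Acquired host-wide IPAM lock." and "Released host-wide IPAM lock.": releases for different sandboxes on the node are serialized so they cannot race on shared allocation state, and releasing a handle that no longer exists is a warning to ignore, not a failure. A minimal in-process Go sketch of that contract only (the real plugin must coordinate across separate CNI invocations; hostIPAM and its fields are hypothetical):

package main

import (
	"fmt"
	"sync"
)

// hostIPAM keeps all per-node allocation state behind one mutex, mirroring
// the acquire/release messages from ipam/ipam_plugin.go above.
type hostIPAM struct {
	mu        sync.Mutex
	allocated map[string]string // handleID -> IP (hypothetical layout)
}

// Release frees the address recorded for a handle. An unknown handle is
// ignored with a warning, exactly like "Asked to release address but it
// doesn't exist. Ignoring" in the log.
func (h *hostIPAM) Release(handleID string) {
	h.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."

	if _, ok := h.allocated[handleID]; !ok {
		fmt.Printf("WARNING: asked to release %q but it doesn't exist, ignoring\n", handleID)
		return
	}
	delete(h.allocated, handleID)
	fmt.Printf("INFO: released address for %q\n", handleID)
}

func main() {
	// One handle seeded as present and one absent, to show both outcomes.
	ipam := &hostIPAM{allocated: map[string]string{
		"k8s-pod-network.9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e": "192.168.2.132",
	}}
	var wg sync.WaitGroup
	for _, id := range []string{
		"k8s-pod-network.9449029fe5f017c8ef051c993831ddff67410d18eaf21aa58108012d23711a5e",
		"k8s-pod-network.f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5",
	} {
		wg.Add(1)
		go func(id string) {
			defer wg.Done()
			ipam.Release(id) // concurrent DELs serialize on the host-wide lock
		}(id)
	}
	wg.Wait()
}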
Sep 12 17:11:52.218310 containerd[2128]: 2025-09-12 17:11:52.162 [WARNING][6547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" HandleID="k8s-pod-network.f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" Sep 12 17:11:52.218310 containerd[2128]: 2025-09-12 17:11:52.162 [INFO][6547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" HandleID="k8s-pod-network.f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" Sep 12 17:11:52.218310 containerd[2128]: 2025-09-12 17:11:52.179 [INFO][6547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:52.218310 containerd[2128]: 2025-09-12 17:11:52.202 [INFO][6539] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Sep 12 17:11:52.218310 containerd[2128]: time="2025-09-12T17:11:52.218247720Z" level=info msg="TearDown network for sandbox \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\" successfully" Sep 12 17:11:52.218310 containerd[2128]: time="2025-09-12T17:11:52.218288280Z" level=info msg="StopPodSandbox for \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\" returns successfully" Sep 12 17:11:52.221672 containerd[2128]: time="2025-09-12T17:11:52.219236760Z" level=info msg="RemovePodSandbox for \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\"" Sep 12 17:11:52.221672 containerd[2128]: time="2025-09-12T17:11:52.219288048Z" level=info msg="Forcibly stopping sandbox \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\"" Sep 12 17:11:52.292338 systemd[1]: Started sshd@9-172.31.22.180:22-147.75.109.163:44062.service - OpenSSH per-connection server daemon (147.75.109.163:44062). Sep 12 17:11:52.501863 sshd[6558]: Accepted publickey for core from 147.75.109.163 port 44062 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:11:52.508527 sshd[6558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:52.530999 systemd-logind[2103]: New session 10 of user core. Sep 12 17:11:52.538591 containerd[2128]: 2025-09-12 17:11:52.393 [WARNING][6562] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3783ba1d-f77e-47c0-89fd-9efbe6435e26", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"561eea2150579a5d023485117307c83a352178b40c63565b0ae0fb29e9665239", Pod:"coredns-7c65d6cfc9-h298k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2956ff9de2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:52.538591 containerd[2128]: 2025-09-12 17:11:52.395 [INFO][6562] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Sep 12 17:11:52.538591 containerd[2128]: 2025-09-12 17:11:52.395 [INFO][6562] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" iface="eth0" netns="" Sep 12 17:11:52.538591 containerd[2128]: 2025-09-12 17:11:52.395 [INFO][6562] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Sep 12 17:11:52.538591 containerd[2128]: 2025-09-12 17:11:52.395 [INFO][6562] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Sep 12 17:11:52.538591 containerd[2128]: 2025-09-12 17:11:52.491 [INFO][6570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" HandleID="k8s-pod-network.f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" Sep 12 17:11:52.538591 containerd[2128]: 2025-09-12 17:11:52.491 [INFO][6570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:52.538591 containerd[2128]: 2025-09-12 17:11:52.491 [INFO][6570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:11:52.538591 containerd[2128]: 2025-09-12 17:11:52.513 [WARNING][6570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" HandleID="k8s-pod-network.f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" Sep 12 17:11:52.538591 containerd[2128]: 2025-09-12 17:11:52.514 [INFO][6570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" HandleID="k8s-pod-network.f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Workload="ip--172--31--22--180-k8s-coredns--7c65d6cfc9--h298k-eth0" Sep 12 17:11:52.538591 containerd[2128]: 2025-09-12 17:11:52.518 [INFO][6570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:52.538591 containerd[2128]: 2025-09-12 17:11:52.526 [INFO][6562] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5" Sep 12 17:11:52.538591 containerd[2128]: time="2025-09-12T17:11:52.538472966Z" level=info msg="TearDown network for sandbox \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\" successfully" Sep 12 17:11:52.539319 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:11:52.557777 containerd[2128]: time="2025-09-12T17:11:52.554408198Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:11:52.557777 containerd[2128]: time="2025-09-12T17:11:52.555392630Z" level=info msg="RemovePodSandbox \"f9b4dcd8cb2bfd0d8ff314f33f4d476ea8a3d0efefefa0637a0e187f2f1a6fc5\" returns successfully" Sep 12 17:11:52.559221 containerd[2128]: time="2025-09-12T17:11:52.559178018Z" level=info msg="StopPodSandbox for \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\"" Sep 12 17:11:52.892164 containerd[2128]: 2025-09-12 17:11:52.672 [WARNING][6588] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99059a22-90fb-418d-a2c0-7e943cbdb29d", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48", Pod:"csi-node-driver-2fhht", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali513ec739653", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:52.892164 containerd[2128]: 2025-09-12 17:11:52.674 [INFO][6588] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Sep 12 17:11:52.892164 containerd[2128]: 2025-09-12 17:11:52.675 [INFO][6588] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" iface="eth0" netns="" Sep 12 17:11:52.892164 containerd[2128]: 2025-09-12 17:11:52.676 [INFO][6588] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Sep 12 17:11:52.892164 containerd[2128]: 2025-09-12 17:11:52.676 [INFO][6588] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Sep 12 17:11:52.892164 containerd[2128]: 2025-09-12 17:11:52.840 [INFO][6603] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" HandleID="k8s-pod-network.a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Workload="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" Sep 12 17:11:52.892164 containerd[2128]: 2025-09-12 17:11:52.840 [INFO][6603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:52.892164 containerd[2128]: 2025-09-12 17:11:52.840 [INFO][6603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:52.892164 containerd[2128]: 2025-09-12 17:11:52.874 [WARNING][6603] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" HandleID="k8s-pod-network.a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Workload="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" Sep 12 17:11:52.892164 containerd[2128]: 2025-09-12 17:11:52.874 [INFO][6603] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" HandleID="k8s-pod-network.a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Workload="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" Sep 12 17:11:52.892164 containerd[2128]: 2025-09-12 17:11:52.881 [INFO][6603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:52.892164 containerd[2128]: 2025-09-12 17:11:52.885 [INFO][6588] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Sep 12 17:11:52.894277 containerd[2128]: time="2025-09-12T17:11:52.893276775Z" level=info msg="TearDown network for sandbox \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\" successfully" Sep 12 17:11:52.894277 containerd[2128]: time="2025-09-12T17:11:52.893321403Z" level=info msg="StopPodSandbox for \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\" returns successfully" Sep 12 17:11:52.898741 containerd[2128]: time="2025-09-12T17:11:52.896737707Z" level=info msg="RemovePodSandbox for \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\"" Sep 12 17:11:52.898741 containerd[2128]: time="2025-09-12T17:11:52.896790435Z" level=info msg="Forcibly stopping sandbox \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\"" Sep 12 17:11:52.943256 sshd[6558]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:52.951266 systemd[1]: sshd@9-172.31.22.180:22-147.75.109.163:44062.service: Deactivated successfully. Sep 12 17:11:52.967376 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:11:52.971638 systemd-logind[2103]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:11:52.979785 systemd[1]: Started sshd@10-172.31.22.180:22-147.75.109.163:44074.service - OpenSSH per-connection server daemon (147.75.109.163:44074). Sep 12 17:11:52.983137 systemd-logind[2103]: Removed session 10. Sep 12 17:11:53.198219 sshd[6626]: Accepted publickey for core from 147.75.109.163 port 44074 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:11:53.206546 sshd[6626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:53.214112 containerd[2128]: 2025-09-12 17:11:53.064 [WARNING][6619] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"99059a22-90fb-418d-a2c0-7e943cbdb29d", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-180", ContainerID:"d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48", Pod:"csi-node-driver-2fhht", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali513ec739653", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:11:53.214112 containerd[2128]: 2025-09-12 17:11:53.065 [INFO][6619] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Sep 12 17:11:53.214112 containerd[2128]: 2025-09-12 17:11:53.065 [INFO][6619] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" iface="eth0" netns="" Sep 12 17:11:53.214112 containerd[2128]: 2025-09-12 17:11:53.065 [INFO][6619] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Sep 12 17:11:53.214112 containerd[2128]: 2025-09-12 17:11:53.066 [INFO][6619] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Sep 12 17:11:53.214112 containerd[2128]: 2025-09-12 17:11:53.137 [INFO][6632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" HandleID="k8s-pod-network.a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Workload="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" Sep 12 17:11:53.214112 containerd[2128]: 2025-09-12 17:11:53.138 [INFO][6632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:53.214112 containerd[2128]: 2025-09-12 17:11:53.138 [INFO][6632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:53.214112 containerd[2128]: 2025-09-12 17:11:53.175 [WARNING][6632] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" HandleID="k8s-pod-network.a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Workload="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" Sep 12 17:11:53.214112 containerd[2128]: 2025-09-12 17:11:53.180 [INFO][6632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" HandleID="k8s-pod-network.a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Workload="ip--172--31--22--180-k8s-csi--node--driver--2fhht-eth0" Sep 12 17:11:53.214112 containerd[2128]: 2025-09-12 17:11:53.184 [INFO][6632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:53.214112 containerd[2128]: 2025-09-12 17:11:53.193 [INFO][6619] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283" Sep 12 17:11:53.216575 containerd[2128]: time="2025-09-12T17:11:53.214160293Z" level=info msg="TearDown network for sandbox \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\" successfully" Sep 12 17:11:53.225373 systemd-logind[2103]: New session 11 of user core. Sep 12 17:11:53.229953 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:11:53.247339 containerd[2128]: time="2025-09-12T17:11:53.246921169Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:11:53.247339 containerd[2128]: time="2025-09-12T17:11:53.247265965Z" level=info msg="RemovePodSandbox \"a93af661e74378e5106f13225bb88de3caae43d491589296ecfae221db63c283\" returns successfully" Sep 12 17:11:53.250468 containerd[2128]: time="2025-09-12T17:11:53.250242073Z" level=info msg="StopPodSandbox for \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\"" Sep 12 17:11:53.357261 containerd[2128]: time="2025-09-12T17:11:53.354830978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:53.363852 containerd[2128]: time="2025-09-12T17:11:53.363778106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 12 17:11:53.367500 containerd[2128]: time="2025-09-12T17:11:53.367435970Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:53.400135 containerd[2128]: time="2025-09-12T17:11:53.398728346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:53.404820 containerd[2128]: time="2025-09-12T17:11:53.403999490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 4.275184461s" Sep 12 17:11:53.404820 containerd[2128]: time="2025-09-12T17:11:53.404070530Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:11:53.413891 containerd[2128]: time="2025-09-12T17:11:53.413562506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:11:53.418603 containerd[2128]: time="2025-09-12T17:11:53.418187258Z" level=info msg="CreateContainer within sandbox \"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:11:53.453515 containerd[2128]: time="2025-09-12T17:11:53.452948978Z" level=info msg="CreateContainer within sandbox \"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1fb8f314ef0b2d4d4cac4024a2af26a8c6e35f6561cf5ab6660b357f5408f540\"" Sep 12 17:11:53.462010 containerd[2128]: time="2025-09-12T17:11:53.455996714Z" level=info msg="StartContainer for \"1fb8f314ef0b2d4d4cac4024a2af26a8c6e35f6561cf5ab6660b357f5408f540\"" Sep 12 17:11:53.574510 containerd[2128]: 2025-09-12 17:11:53.378 [WARNING][6648] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" WorkloadEndpoint="ip--172--31--22--180-k8s-whisker--558b875cc4--928jw-eth0" Sep 12 17:11:53.574510 containerd[2128]: 2025-09-12 17:11:53.378 [INFO][6648] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Sep 12 17:11:53.574510 containerd[2128]: 2025-09-12 17:11:53.378 [INFO][6648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" iface="eth0" netns="" Sep 12 17:11:53.574510 containerd[2128]: 2025-09-12 17:11:53.378 [INFO][6648] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Sep 12 17:11:53.574510 containerd[2128]: 2025-09-12 17:11:53.378 [INFO][6648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Sep 12 17:11:53.574510 containerd[2128]: 2025-09-12 17:11:53.515 [INFO][6663] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" HandleID="k8s-pod-network.7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Workload="ip--172--31--22--180-k8s-whisker--558b875cc4--928jw-eth0" Sep 12 17:11:53.574510 containerd[2128]: 2025-09-12 17:11:53.516 [INFO][6663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:11:53.574510 containerd[2128]: 2025-09-12 17:11:53.516 [INFO][6663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:53.574510 containerd[2128]: 2025-09-12 17:11:53.552 [WARNING][6663] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" HandleID="k8s-pod-network.7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Workload="ip--172--31--22--180-k8s-whisker--558b875cc4--928jw-eth0" Sep 12 17:11:53.574510 containerd[2128]: 2025-09-12 17:11:53.552 [INFO][6663] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" HandleID="k8s-pod-network.7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Workload="ip--172--31--22--180-k8s-whisker--558b875cc4--928jw-eth0" Sep 12 17:11:53.574510 containerd[2128]: 2025-09-12 17:11:53.557 [INFO][6663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:53.574510 containerd[2128]: 2025-09-12 17:11:53.566 [INFO][6648] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Sep 12 17:11:53.574510 containerd[2128]: time="2025-09-12T17:11:53.574354791Z" level=info msg="TearDown network for sandbox \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\" successfully" Sep 12 17:11:53.574510 containerd[2128]: time="2025-09-12T17:11:53.574393995Z" level=info msg="StopPodSandbox for \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\" returns successfully" Sep 12 17:11:53.579880 containerd[2128]: time="2025-09-12T17:11:53.579793251Z" level=info msg="RemovePodSandbox for \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\"" Sep 12 17:11:53.579880 containerd[2128]: time="2025-09-12T17:11:53.579868047Z" level=info msg="Forcibly stopping sandbox \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\"" Sep 12 17:11:53.696255 containerd[2128]: time="2025-09-12T17:11:53.696095859Z" level=info msg="StartContainer for \"1fb8f314ef0b2d4d4cac4024a2af26a8c6e35f6561cf5ab6660b357f5408f540\" returns successfully" Sep 12 17:11:53.823811 containerd[2128]: time="2025-09-12T17:11:53.822234544Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:53.834404 sshd[6626]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:53.840086 containerd[2128]: time="2025-09-12T17:11:53.839843212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:11:53.892777 containerd[2128]: time="2025-09-12T17:11:53.892614196Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 478.99169ms" Sep 12 17:11:53.893463 containerd[2128]: time="2025-09-12T17:11:53.893319772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:11:53.905344 containerd[2128]: time="2025-09-12T17:11:53.905286268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:11:53.908581 containerd[2128]: time="2025-09-12T17:11:53.908513285Z" level=info msg="CreateContainer within sandbox \"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:11:53.911339 systemd[1]: Started sshd@11-172.31.22.180:22-147.75.109.163:44082.service - OpenSSH per-connection server daemon (147.75.109.163:44082). Sep 12 17:11:53.912461 systemd[1]: sshd@10-172.31.22.180:22-147.75.109.163:44074.service: Deactivated successfully. Sep 12 17:11:53.923357 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:11:53.938077 systemd-logind[2103]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:11:53.949547 systemd-logind[2103]: Removed session 11. Sep 12 17:11:53.960111 kubelet[3602]: I0912 17:11:53.958880 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7964ddc67d-2fn9n" podStartSLOduration=37.352032449 podStartE2EDuration="46.958858505s" podCreationTimestamp="2025-09-12 17:11:07 +0000 UTC" firstStartedPulling="2025-09-12 17:11:43.801042066 +0000 UTC m=+55.109780602" lastFinishedPulling="2025-09-12 17:11:53.407868038 +0000 UTC m=+64.716606658" observedRunningTime="2025-09-12 17:11:53.956975789 +0000 UTC m=+65.265714349" watchObservedRunningTime="2025-09-12 17:11:53.958858505 +0000 UTC m=+65.267597041" Sep 12 17:11:54.020210 containerd[2128]: time="2025-09-12T17:11:54.020140717Z" level=info msg="CreateContainer within sandbox \"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3945be13df64a20060207fc9e090b1eb19547609c227dad0bad904c18078ec20\"" Sep 12 17:11:54.027502 containerd[2128]: time="2025-09-12T17:11:54.026995405Z" level=info msg="StartContainer for \"3945be13df64a20060207fc9e090b1eb19547609c227dad0bad904c18078ec20\"" Sep 12 17:11:54.037078 containerd[2128]: 2025-09-12 17:11:53.753 [WARNING][6702] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" WorkloadEndpoint="ip--172--31--22--180-k8s-whisker--558b875cc4--928jw-eth0" Sep 12 17:11:54.037078 containerd[2128]: 2025-09-12 17:11:53.753 [INFO][6702] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Sep 12 17:11:54.037078 containerd[2128]: 2025-09-12 17:11:53.753 [INFO][6702] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" iface="eth0" netns="" Sep 12 17:11:54.037078 containerd[2128]: 2025-09-12 17:11:53.753 [INFO][6702] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Sep 12 17:11:54.037078 containerd[2128]: 2025-09-12 17:11:53.753 [INFO][6702] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Sep 12 17:11:54.037078 containerd[2128]: 2025-09-12 17:11:53.979 [INFO][6720] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" HandleID="k8s-pod-network.7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Workload="ip--172--31--22--180-k8s-whisker--558b875cc4--928jw-eth0" Sep 12 17:11:54.037078 containerd[2128]: 2025-09-12 17:11:53.985 [INFO][6720] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:11:54.037078 containerd[2128]: 2025-09-12 17:11:53.986 [INFO][6720] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:11:54.037078 containerd[2128]: 2025-09-12 17:11:54.019 [WARNING][6720] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" HandleID="k8s-pod-network.7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Workload="ip--172--31--22--180-k8s-whisker--558b875cc4--928jw-eth0" Sep 12 17:11:54.037078 containerd[2128]: 2025-09-12 17:11:54.019 [INFO][6720] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" HandleID="k8s-pod-network.7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Workload="ip--172--31--22--180-k8s-whisker--558b875cc4--928jw-eth0" Sep 12 17:11:54.037078 containerd[2128]: 2025-09-12 17:11:54.022 [INFO][6720] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:11:54.037078 containerd[2128]: 2025-09-12 17:11:54.030 [INFO][6702] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8" Sep 12 17:11:54.039969 containerd[2128]: time="2025-09-12T17:11:54.038135341Z" level=info msg="TearDown network for sandbox \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\" successfully" Sep 12 17:11:54.064301 containerd[2128]: time="2025-09-12T17:11:54.064039237Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:11:54.064301 containerd[2128]: time="2025-09-12T17:11:54.064139881Z" level=info msg="RemovePodSandbox \"7936daf80990a1eae4f0ade157c300d2186ce9154720b5679d2710936e53a6f8\" returns successfully" Sep 12 17:11:54.160639 sshd[6726]: Accepted publickey for core from 147.75.109.163 port 44082 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:11:54.164045 sshd[6726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:11:54.185581 systemd-logind[2103]: New session 12 of user core. Sep 12 17:11:54.191354 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 12 17:11:54.240251 containerd[2128]: time="2025-09-12T17:11:54.240185690Z" level=info msg="StartContainer for \"3945be13df64a20060207fc9e090b1eb19547609c227dad0bad904c18078ec20\" returns successfully" Sep 12 17:11:54.337013 containerd[2128]: time="2025-09-12T17:11:54.333223299Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:54.337533 containerd[2128]: time="2025-09-12T17:11:54.337467039Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:11:54.377722 containerd[2128]: time="2025-09-12T17:11:54.375571455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 468.619203ms" Sep 12 17:11:54.379200 containerd[2128]: time="2025-09-12T17:11:54.378564483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 17:11:54.385811 containerd[2128]: time="2025-09-12T17:11:54.384977979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:11:54.390727 containerd[2128]: time="2025-09-12T17:11:54.389749311Z" level=info msg="CreateContainer within sandbox \"4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:11:54.429980 containerd[2128]: time="2025-09-12T17:11:54.428354187Z" level=info msg="CreateContainer within sandbox \"4fe86673ea59398a2ac0500de93d03e460478b530648a9ece0ae81e6084eb411\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a81b22cb15d70e772374ca6730a99df01d5a89dacdf673564f7b04b5d790ffec\"" Sep 12 17:11:54.432987 containerd[2128]: time="2025-09-12T17:11:54.430761015Z" level=info msg="StartContainer for \"a81b22cb15d70e772374ca6730a99df01d5a89dacdf673564f7b04b5d790ffec\"" Sep 12 17:11:54.788809 containerd[2128]: time="2025-09-12T17:11:54.786581417Z" level=info msg="StartContainer for \"a81b22cb15d70e772374ca6730a99df01d5a89dacdf673564f7b04b5d790ffec\" returns successfully" Sep 12 17:11:54.819170 sshd[6726]: pam_unix(sshd:session): session closed for user core Sep 12 17:11:54.836573 systemd[1]: sshd@11-172.31.22.180:22-147.75.109.163:44082.service: Deactivated successfully. Sep 12 17:11:54.853513 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:11:54.854167 systemd-logind[2103]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:11:54.858570 systemd-logind[2103]: Removed session 12. 
Sep 12 17:11:55.009724 kubelet[3602]: I0912 17:11:55.006655 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7964ddc67d-v8tpq" podStartSLOduration=38.314221302 podStartE2EDuration="48.006631298s" podCreationTimestamp="2025-09-12 17:11:07 +0000 UTC" firstStartedPulling="2025-09-12 17:11:44.209597596 +0000 UTC m=+55.518336132" lastFinishedPulling="2025-09-12 17:11:53.902007604 +0000 UTC m=+65.210746128" observedRunningTime="2025-09-12 17:11:55.006091598 +0000 UTC m=+66.314830122" watchObservedRunningTime="2025-09-12 17:11:55.006631298 +0000 UTC m=+66.315369822" Sep 12 17:11:55.016854 kubelet[3602]: I0912 17:11:55.013908 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5dcf8cdb5c-s49l2" podStartSLOduration=34.891635756 podStartE2EDuration="45.01388369s" podCreationTimestamp="2025-09-12 17:11:10 +0000 UTC" firstStartedPulling="2025-09-12 17:11:44.258926153 +0000 UTC m=+55.567664689" lastFinishedPulling="2025-09-12 17:11:54.381174099 +0000 UTC m=+65.689912623" observedRunningTime="2025-09-12 17:11:54.973219902 +0000 UTC m=+66.281958450" watchObservedRunningTime="2025-09-12 17:11:55.01388369 +0000 UTC m=+66.322622226" Sep 12 17:11:55.042352 systemd[1]: run-containerd-runc-k8s.io-0155632fdd28bf0765e990f735d58eff0487fa01681efc07a77c6fd28daec1ed-runc.7E75TX.mount: Deactivated successfully. Sep 12 17:11:56.324227 containerd[2128]: time="2025-09-12T17:11:56.324144089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:56.329245 containerd[2128]: time="2025-09-12T17:11:56.328213085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 12 17:11:56.331004 containerd[2128]: time="2025-09-12T17:11:56.330942293Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:56.341734 containerd[2128]: time="2025-09-12T17:11:56.341645741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:11:56.348713 containerd[2128]: time="2025-09-12T17:11:56.346465745Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.961205286s" Sep 12 17:11:56.348713 containerd[2128]: time="2025-09-12T17:11:56.347767001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 12 17:11:56.356870 containerd[2128]: time="2025-09-12T17:11:56.353763713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:11:56.360777 containerd[2128]: time="2025-09-12T17:11:56.359160761Z" level=info msg="CreateContainer within sandbox \"d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:11:56.467467 containerd[2128]: time="2025-09-12T17:11:56.462151061Z" 
level=info msg="CreateContainer within sandbox \"d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2cff52da918114eebafb0fadae960e6384fe7b5b786ac445df5543be8a10c94d\"" Sep 12 17:11:56.464249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount933452695.mount: Deactivated successfully. Sep 12 17:11:56.480187 containerd[2128]: time="2025-09-12T17:11:56.479643857Z" level=info msg="StartContainer for \"2cff52da918114eebafb0fadae960e6384fe7b5b786ac445df5543be8a10c94d\"" Sep 12 17:11:56.821419 containerd[2128]: time="2025-09-12T17:11:56.821326207Z" level=info msg="StartContainer for \"2cff52da918114eebafb0fadae960e6384fe7b5b786ac445df5543be8a10c94d\" returns successfully" Sep 12 17:11:59.426733 kubelet[3602]: I0912 17:11:59.425331 3602 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:11:59.432554 containerd[2128]: time="2025-09-12T17:11:59.430040276Z" level=info msg="StopContainer for \"3945be13df64a20060207fc9e090b1eb19547609c227dad0bad904c18078ec20\" with timeout 30 (s)" Sep 12 17:11:59.433198 containerd[2128]: time="2025-09-12T17:11:59.432860804Z" level=info msg="Stop container \"3945be13df64a20060207fc9e090b1eb19547609c227dad0bad904c18078ec20\" with signal terminated" Sep 12 17:11:59.770282 containerd[2128]: time="2025-09-12T17:11:59.765122362Z" level=info msg="shim disconnected" id=3945be13df64a20060207fc9e090b1eb19547609c227dad0bad904c18078ec20 namespace=k8s.io Sep 12 17:11:59.770282 containerd[2128]: time="2025-09-12T17:11:59.766121218Z" level=warning msg="cleaning up after shim disconnected" id=3945be13df64a20060207fc9e090b1eb19547609c227dad0bad904c18078ec20 namespace=k8s.io Sep 12 17:11:59.770282 containerd[2128]: time="2025-09-12T17:11:59.766211902Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:11:59.780290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3945be13df64a20060207fc9e090b1eb19547609c227dad0bad904c18078ec20-rootfs.mount: Deactivated successfully. Sep 12 17:11:59.855783 systemd[1]: Started sshd@12-172.31.22.180:22-147.75.109.163:44088.service - OpenSSH per-connection server daemon (147.75.109.163:44088). Sep 12 17:11:59.863336 containerd[2128]: time="2025-09-12T17:11:59.863279278Z" level=info msg="StopContainer for \"3945be13df64a20060207fc9e090b1eb19547609c227dad0bad904c18078ec20\" returns successfully" Sep 12 17:11:59.869293 containerd[2128]: time="2025-09-12T17:11:59.869203714Z" level=info msg="StopPodSandbox for \"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e\"" Sep 12 17:11:59.870683 containerd[2128]: time="2025-09-12T17:11:59.870550762Z" level=info msg="Container to stop \"3945be13df64a20060207fc9e090b1eb19547609c227dad0bad904c18078ec20\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:11:59.894727 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e-shm.mount: Deactivated successfully. 
Sep 12 17:12:00.046036 containerd[2128]: time="2025-09-12T17:12:00.045818431Z" level=info msg="shim disconnected" id=9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e namespace=k8s.io Sep 12 17:12:00.046736 containerd[2128]: time="2025-09-12T17:12:00.046378507Z" level=warning msg="cleaning up after shim disconnected" id=9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e namespace=k8s.io Sep 12 17:12:00.046736 containerd[2128]: time="2025-09-12T17:12:00.046420555Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:12:00.062774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e-rootfs.mount: Deactivated successfully. Sep 12 17:12:00.122928 sshd[6940]: Accepted publickey for core from 147.75.109.163 port 44088 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:00.128316 sshd[6940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:00.144991 systemd-logind[2103]: New session 13 of user core. Sep 12 17:12:00.151228 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:12:00.494034 systemd-networkd[1689]: cali662660f5d8e: Link DOWN Sep 12 17:12:00.494055 systemd-networkd[1689]: cali662660f5d8e: Lost carrier Sep 12 17:12:00.625508 sshd[6940]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:00.638803 systemd[1]: sshd@12-172.31.22.180:22-147.75.109.163:44088.service: Deactivated successfully. Sep 12 17:12:00.648632 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:12:00.650916 systemd-logind[2103]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:12:00.653981 systemd-logind[2103]: Removed session 13. Sep 12 17:12:00.759511 containerd[2128]: 2025-09-12 17:12:00.481 [INFO][6988] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Sep 12 17:12:00.759511 containerd[2128]: 2025-09-12 17:12:00.487 [INFO][6988] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" iface="eth0" netns="/var/run/netns/cni-7d45fef0-7689-5af0-8e78-84c23cb4a9aa" Sep 12 17:12:00.759511 containerd[2128]: 2025-09-12 17:12:00.491 [INFO][6988] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" iface="eth0" netns="/var/run/netns/cni-7d45fef0-7689-5af0-8e78-84c23cb4a9aa" Sep 12 17:12:00.759511 containerd[2128]: 2025-09-12 17:12:00.517 [INFO][6988] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" after=30.091848ms iface="eth0" netns="/var/run/netns/cni-7d45fef0-7689-5af0-8e78-84c23cb4a9aa" Sep 12 17:12:00.759511 containerd[2128]: 2025-09-12 17:12:00.518 [INFO][6988] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Sep 12 17:12:00.759511 containerd[2128]: 2025-09-12 17:12:00.518 [INFO][6988] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Sep 12 17:12:00.759511 containerd[2128]: 2025-09-12 17:12:00.661 [INFO][7009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" HandleID="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:12:00.759511 containerd[2128]: 2025-09-12 17:12:00.662 [INFO][7009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:12:00.759511 containerd[2128]: 2025-09-12 17:12:00.662 [INFO][7009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:12:00.759511 containerd[2128]: 2025-09-12 17:12:00.744 [INFO][7009] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" HandleID="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:12:00.759511 containerd[2128]: 2025-09-12 17:12:00.744 [INFO][7009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" HandleID="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:12:00.759511 containerd[2128]: 2025-09-12 17:12:00.748 [INFO][7009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:12:00.759511 containerd[2128]: 2025-09-12 17:12:00.753 [INFO][6988] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Sep 12 17:12:00.764928 containerd[2128]: time="2025-09-12T17:12:00.764865059Z" level=info msg="TearDown network for sandbox \"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e\" successfully" Sep 12 17:12:00.765084 containerd[2128]: time="2025-09-12T17:12:00.764943743Z" level=info msg="StopPodSandbox for \"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e\" returns successfully" Sep 12 17:12:00.776857 systemd[1]: run-netns-cni\x2d7d45fef0\x2d7689\x2d5af0\x2d8e78\x2d84c23cb4a9aa.mount: Deactivated successfully. 
Sep 12 17:12:00.874752 kubelet[3602]: I0912 17:12:00.874312 3602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7wq6\" (UniqueName: \"kubernetes.io/projected/bc9ee936-954c-4af7-aedd-76c2de2ef89a-kube-api-access-t7wq6\") pod \"bc9ee936-954c-4af7-aedd-76c2de2ef89a\" (UID: \"bc9ee936-954c-4af7-aedd-76c2de2ef89a\") " Sep 12 17:12:00.874752 kubelet[3602]: I0912 17:12:00.874384 3602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bc9ee936-954c-4af7-aedd-76c2de2ef89a-calico-apiserver-certs\") pod \"bc9ee936-954c-4af7-aedd-76c2de2ef89a\" (UID: \"bc9ee936-954c-4af7-aedd-76c2de2ef89a\") " Sep 12 17:12:00.897105 kubelet[3602]: I0912 17:12:00.897027 3602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9ee936-954c-4af7-aedd-76c2de2ef89a-kube-api-access-t7wq6" (OuterVolumeSpecName: "kube-api-access-t7wq6") pod "bc9ee936-954c-4af7-aedd-76c2de2ef89a" (UID: "bc9ee936-954c-4af7-aedd-76c2de2ef89a"). InnerVolumeSpecName "kube-api-access-t7wq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:12:00.899746 kubelet[3602]: I0912 17:12:00.899172 3602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9ee936-954c-4af7-aedd-76c2de2ef89a-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "bc9ee936-954c-4af7-aedd-76c2de2ef89a" (UID: "bc9ee936-954c-4af7-aedd-76c2de2ef89a"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:12:00.901454 systemd[1]: var-lib-kubelet-pods-bc9ee936\x2d954c\x2d4af7\x2daedd\x2d76c2de2ef89a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt7wq6.mount: Deactivated successfully. Sep 12 17:12:00.901829 systemd[1]: var-lib-kubelet-pods-bc9ee936\x2d954c\x2d4af7\x2daedd\x2d76c2de2ef89a-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Sep 12 17:12:00.975204 kubelet[3602]: I0912 17:12:00.974861 3602 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7wq6\" (UniqueName: \"kubernetes.io/projected/bc9ee936-954c-4af7-aedd-76c2de2ef89a-kube-api-access-t7wq6\") on node \"ip-172-31-22-180\" DevicePath \"\"" Sep 12 17:12:00.975204 kubelet[3602]: I0912 17:12:00.974911 3602 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bc9ee936-954c-4af7-aedd-76c2de2ef89a-calico-apiserver-certs\") on node \"ip-172-31-22-180\" DevicePath \"\"" Sep 12 17:12:00.998622 kubelet[3602]: I0912 17:12:00.998361 3602 scope.go:117] "RemoveContainer" containerID="3945be13df64a20060207fc9e090b1eb19547609c227dad0bad904c18078ec20" Sep 12 17:12:01.006197 containerd[2128]: time="2025-09-12T17:12:01.005052668Z" level=info msg="RemoveContainer for \"3945be13df64a20060207fc9e090b1eb19547609c227dad0bad904c18078ec20\"" Sep 12 17:12:01.017800 containerd[2128]: time="2025-09-12T17:12:01.017205800Z" level=info msg="RemoveContainer for \"3945be13df64a20060207fc9e090b1eb19547609c227dad0bad904c18078ec20\" returns successfully" Sep 12 17:12:01.163289 containerd[2128]: time="2025-09-12T17:12:01.162782337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:01.168000 containerd[2128]: time="2025-09-12T17:12:01.167940429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 12 17:12:01.175016 containerd[2128]: time="2025-09-12T17:12:01.174929889Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:01.183255 containerd[2128]: time="2025-09-12T17:12:01.183018561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:01.185657 containerd[2128]: time="2025-09-12T17:12:01.185480169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 4.831642032s" Sep 12 17:12:01.185657 containerd[2128]: time="2025-09-12T17:12:01.185535093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 12 17:12:01.188418 containerd[2128]: time="2025-09-12T17:12:01.188129481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:12:01.225772 containerd[2128]: time="2025-09-12T17:12:01.225556689Z" level=info msg="CreateContainer within sandbox \"076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:12:01.250364 containerd[2128]: time="2025-09-12T17:12:01.250283697Z" level=info msg="CreateContainer within sandbox \"076cd40656f3d7c9c0a5b1eba7ea977198c9ce2c595ffd0d3e6bf1e315c9938a\" for 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1076a3607cba3d94614900b58d10513d5dbc58c8d5fbab026267db63312ec989\"" Sep 12 17:12:01.252769 containerd[2128]: time="2025-09-12T17:12:01.251386989Z" level=info msg="StartContainer for \"1076a3607cba3d94614900b58d10513d5dbc58c8d5fbab026267db63312ec989\"" Sep 12 17:12:01.407903 containerd[2128]: time="2025-09-12T17:12:01.407803918Z" level=info msg="StartContainer for \"1076a3607cba3d94614900b58d10513d5dbc58c8d5fbab026267db63312ec989\" returns successfully" Sep 12 17:12:02.041583 kubelet[3602]: I0912 17:12:02.041466 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f8cc79964-sfrvv" podStartSLOduration=31.153734682 podStartE2EDuration="47.041442165s" podCreationTimestamp="2025-09-12 17:11:15 +0000 UTC" firstStartedPulling="2025-09-12 17:11:45.299132454 +0000 UTC m=+56.607870978" lastFinishedPulling="2025-09-12 17:12:01.186839937 +0000 UTC m=+72.495578461" observedRunningTime="2025-09-12 17:12:02.038430489 +0000 UTC m=+73.347169049" watchObservedRunningTime="2025-09-12 17:12:02.041442165 +0000 UTC m=+73.350180713" Sep 12 17:12:02.600484 ntpd[2090]: Deleting interface #13 cali662660f5d8e, fe80::ecee:eeff:feee:eeee%12#123, interface stats: received=0, sent=0, dropped=0, active_time=13 secs Sep 12 17:12:02.601230 ntpd[2090]: 12 Sep 17:12:02 ntpd[2090]: Deleting interface #13 cali662660f5d8e, fe80::ecee:eeff:feee:eeee%12#123, interface stats: received=0, sent=0, dropped=0, active_time=13 secs Sep 12 17:12:02.942283 kubelet[3602]: I0912 17:12:02.942199 3602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9ee936-954c-4af7-aedd-76c2de2ef89a" path="/var/lib/kubelet/pods/bc9ee936-954c-4af7-aedd-76c2de2ef89a/volumes" Sep 12 17:12:03.259799 containerd[2128]: time="2025-09-12T17:12:03.257578703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:03.261221 containerd[2128]: time="2025-09-12T17:12:03.261093011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 12 17:12:03.263971 containerd[2128]: time="2025-09-12T17:12:03.263811515Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:03.279399 containerd[2128]: time="2025-09-12T17:12:03.278586347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:12:03.285922 containerd[2128]: time="2025-09-12T17:12:03.285855251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 2.09765311s" Sep 12 17:12:03.286660 containerd[2128]: time="2025-09-12T17:12:03.286123967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" 
Sep 12 17:12:03.291826 containerd[2128]: time="2025-09-12T17:12:03.291320267Z" level=info msg="CreateContainer within sandbox \"d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:12:03.331740 containerd[2128]: time="2025-09-12T17:12:03.331160207Z" level=info msg="CreateContainer within sandbox \"d7b00122675f7914d9176a180b8cb18ab386d4c267bf636e09c05cd89035bd48\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"683a847ae54866d9c8521140c5eeb6928a4f2cd419e86b18eba68004129496f9\"" Sep 12 17:12:03.336457 containerd[2128]: time="2025-09-12T17:12:03.333776495Z" level=info msg="StartContainer for \"683a847ae54866d9c8521140c5eeb6928a4f2cd419e86b18eba68004129496f9\"" Sep 12 17:12:03.495257 containerd[2128]: time="2025-09-12T17:12:03.495144408Z" level=info msg="StartContainer for \"683a847ae54866d9c8521140c5eeb6928a4f2cd419e86b18eba68004129496f9\" returns successfully" Sep 12 17:12:04.057521 kubelet[3602]: I0912 17:12:04.057103 3602 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2fhht" podStartSLOduration=30.228227814 podStartE2EDuration="49.057077027s" podCreationTimestamp="2025-09-12 17:11:15 +0000 UTC" firstStartedPulling="2025-09-12 17:11:44.458805846 +0000 UTC m=+55.767544382" lastFinishedPulling="2025-09-12 17:12:03.287655071 +0000 UTC m=+74.596393595" observedRunningTime="2025-09-12 17:12:04.053541467 +0000 UTC m=+75.362280027" watchObservedRunningTime="2025-09-12 17:12:04.057077027 +0000 UTC m=+75.365815563" Sep 12 17:12:04.178276 kubelet[3602]: I0912 17:12:04.177860 3602 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:12:04.178276 kubelet[3602]: I0912 17:12:04.177924 3602 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:12:05.658719 systemd[1]: Started sshd@13-172.31.22.180:22-147.75.109.163:51196.service - OpenSSH per-connection server daemon (147.75.109.163:51196). Sep 12 17:12:05.858163 sshd[7139]: Accepted publickey for core from 147.75.109.163 port 51196 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:05.862767 sshd[7139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:05.876902 systemd-logind[2103]: New session 14 of user core. Sep 12 17:12:05.885243 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:12:06.196825 sshd[7139]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:06.207412 systemd[1]: sshd@13-172.31.22.180:22-147.75.109.163:51196.service: Deactivated successfully. Sep 12 17:12:06.216901 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:12:06.220353 systemd-logind[2103]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:12:06.224446 systemd-logind[2103]: Removed session 14. Sep 12 17:12:11.230251 systemd[1]: Started sshd@14-172.31.22.180:22-147.75.109.163:37432.service - OpenSSH per-connection server daemon (147.75.109.163:37432). 
Sep 12 17:12:11.460731 sshd[7178]: Accepted publickey for core from 147.75.109.163 port 37432 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:11.469396 sshd[7178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:11.489028 systemd-logind[2103]: New session 15 of user core. Sep 12 17:12:11.561390 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:12:11.907084 sshd[7178]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:11.931948 systemd-logind[2103]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:12:11.935114 systemd[1]: sshd@14-172.31.22.180:22-147.75.109.163:37432.service: Deactivated successfully. Sep 12 17:12:11.947431 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:12:11.955310 systemd-logind[2103]: Removed session 15. Sep 12 17:12:16.952195 systemd[1]: Started sshd@15-172.31.22.180:22-147.75.109.163:37438.service - OpenSSH per-connection server daemon (147.75.109.163:37438). Sep 12 17:12:17.156949 sshd[7192]: Accepted publickey for core from 147.75.109.163 port 37438 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:17.163428 sshd[7192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:17.187063 systemd-logind[2103]: New session 16 of user core. Sep 12 17:12:17.196863 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:12:17.609441 sshd[7192]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:17.625660 systemd[1]: sshd@15-172.31.22.180:22-147.75.109.163:37438.service: Deactivated successfully. Sep 12 17:12:17.636186 systemd-logind[2103]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:12:17.647901 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:12:17.661969 systemd-logind[2103]: Removed session 16. Sep 12 17:12:17.710127 systemd[1]: Started sshd@16-172.31.22.180:22-147.75.109.163:37448.service - OpenSSH per-connection server daemon (147.75.109.163:37448). Sep 12 17:12:17.899148 sshd[7225]: Accepted publickey for core from 147.75.109.163 port 37448 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:17.907158 sshd[7225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:17.927278 systemd-logind[2103]: New session 17 of user core. Sep 12 17:12:17.936335 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:12:18.642337 sshd[7225]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:18.650755 systemd[1]: sshd@16-172.31.22.180:22-147.75.109.163:37448.service: Deactivated successfully. Sep 12 17:12:18.663118 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:12:18.668323 systemd-logind[2103]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:12:18.684176 systemd[1]: Started sshd@17-172.31.22.180:22-147.75.109.163:37462.service - OpenSSH per-connection server daemon (147.75.109.163:37462). Sep 12 17:12:18.692818 systemd-logind[2103]: Removed session 17. Sep 12 17:12:18.927836 sshd[7237]: Accepted publickey for core from 147.75.109.163 port 37462 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:18.929712 sshd[7237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:18.943646 systemd-logind[2103]: New session 18 of user core. Sep 12 17:12:18.949268 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 12 17:12:20.673534 systemd[1]: run-containerd-runc-k8s.io-0155632fdd28bf0765e990f735d58eff0487fa01681efc07a77c6fd28daec1ed-runc.By01c9.mount: Deactivated successfully. Sep 12 17:12:23.412070 sshd[7237]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:23.427383 systemd[1]: sshd@17-172.31.22.180:22-147.75.109.163:37462.service: Deactivated successfully. Sep 12 17:12:23.442544 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:12:23.448972 systemd-logind[2103]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:12:23.521632 systemd[1]: Started sshd@18-172.31.22.180:22-147.75.109.163:52822.service - OpenSSH per-connection server daemon (147.75.109.163:52822). Sep 12 17:12:23.526790 systemd-logind[2103]: Removed session 18. Sep 12 17:12:23.737818 sshd[7283]: Accepted publickey for core from 147.75.109.163 port 52822 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:23.743304 sshd[7283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:23.767719 systemd-logind[2103]: New session 19 of user core. Sep 12 17:12:23.782508 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:12:24.640000 sshd[7283]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:24.653941 systemd[1]: sshd@18-172.31.22.180:22-147.75.109.163:52822.service: Deactivated successfully. Sep 12 17:12:24.671194 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:12:24.676055 systemd-logind[2103]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:12:24.693634 systemd[1]: Started sshd@19-172.31.22.180:22-147.75.109.163:52834.service - OpenSSH per-connection server daemon (147.75.109.163:52834). Sep 12 17:12:24.695049 systemd-logind[2103]: Removed session 19. Sep 12 17:12:24.894843 sshd[7302]: Accepted publickey for core from 147.75.109.163 port 52834 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:24.904679 sshd[7302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:24.947718 systemd[1]: run-containerd-runc-k8s.io-1076a3607cba3d94614900b58d10513d5dbc58c8d5fbab026267db63312ec989-runc.lcebIe.mount: Deactivated successfully. Sep 12 17:12:24.969472 systemd-logind[2103]: New session 20 of user core. Sep 12 17:12:24.972084 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:12:25.371334 sshd[7302]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:25.379164 systemd[1]: sshd@19-172.31.22.180:22-147.75.109.163:52834.service: Deactivated successfully. Sep 12 17:12:25.387122 systemd-logind[2103]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:12:25.387546 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:12:25.390946 systemd-logind[2103]: Removed session 20. Sep 12 17:12:30.406550 systemd[1]: Started sshd@20-172.31.22.180:22-147.75.109.163:51756.service - OpenSSH per-connection server daemon (147.75.109.163:51756). Sep 12 17:12:30.611861 sshd[7360]: Accepted publickey for core from 147.75.109.163 port 51756 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:30.616078 sshd[7360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:30.639110 systemd-logind[2103]: New session 21 of user core. Sep 12 17:12:30.707467 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 12 17:12:31.092049 sshd[7360]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:31.101599 systemd-logind[2103]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:12:31.105236 systemd[1]: sshd@20-172.31.22.180:22-147.75.109.163:51756.service: Deactivated successfully. Sep 12 17:12:31.112056 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:12:31.117904 systemd-logind[2103]: Removed session 21. Sep 12 17:12:31.417046 containerd[2128]: time="2025-09-12T17:12:31.416570487Z" level=info msg="StopContainer for \"1fb8f314ef0b2d4d4cac4024a2af26a8c6e35f6561cf5ab6660b357f5408f540\" with timeout 30 (s)" Sep 12 17:12:31.418092 containerd[2128]: time="2025-09-12T17:12:31.417847527Z" level=info msg="Stop container \"1fb8f314ef0b2d4d4cac4024a2af26a8c6e35f6561cf5ab6660b357f5408f540\" with signal terminated" Sep 12 17:12:31.597455 containerd[2128]: time="2025-09-12T17:12:31.597319480Z" level=info msg="shim disconnected" id=1fb8f314ef0b2d4d4cac4024a2af26a8c6e35f6561cf5ab6660b357f5408f540 namespace=k8s.io Sep 12 17:12:31.597622 containerd[2128]: time="2025-09-12T17:12:31.597434488Z" level=warning msg="cleaning up after shim disconnected" id=1fb8f314ef0b2d4d4cac4024a2af26a8c6e35f6561cf5ab6660b357f5408f540 namespace=k8s.io Sep 12 17:12:31.597622 containerd[2128]: time="2025-09-12T17:12:31.597482536Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:12:31.609718 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fb8f314ef0b2d4d4cac4024a2af26a8c6e35f6561cf5ab6660b357f5408f540-rootfs.mount: Deactivated successfully. Sep 12 17:12:31.671042 containerd[2128]: time="2025-09-12T17:12:31.669980380Z" level=info msg="StopContainer for \"1fb8f314ef0b2d4d4cac4024a2af26a8c6e35f6561cf5ab6660b357f5408f540\" returns successfully" Sep 12 17:12:31.673561 containerd[2128]: time="2025-09-12T17:12:31.673406092Z" level=info msg="StopPodSandbox for \"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc\"" Sep 12 17:12:31.673737 containerd[2128]: time="2025-09-12T17:12:31.673608688Z" level=info msg="Container to stop \"1fb8f314ef0b2d4d4cac4024a2af26a8c6e35f6561cf5ab6660b357f5408f540\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:12:31.690355 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc-shm.mount: Deactivated successfully. Sep 12 17:12:31.752944 containerd[2128]: time="2025-09-12T17:12:31.752843956Z" level=info msg="shim disconnected" id=d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc namespace=k8s.io Sep 12 17:12:31.752944 containerd[2128]: time="2025-09-12T17:12:31.752925976Z" level=warning msg="cleaning up after shim disconnected" id=d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc namespace=k8s.io Sep 12 17:12:31.753150 containerd[2128]: time="2025-09-12T17:12:31.752948056Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:12:31.757425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc-rootfs.mount: Deactivated successfully. 
Sep 12 17:12:31.912445 systemd-networkd[1689]: cali76836b15a77: Link DOWN Sep 12 17:12:31.912464 systemd-networkd[1689]: cali76836b15a77: Lost carrier Sep 12 17:12:32.164490 kubelet[3602]: I0912 17:12:32.164435 3602 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Sep 12 17:12:32.212520 containerd[2128]: 2025-09-12 17:12:31.909 [INFO][7456] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Sep 12 17:12:32.212520 containerd[2128]: 2025-09-12 17:12:31.909 [INFO][7456] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" iface="eth0" netns="/var/run/netns/cni-e10376fe-0620-c264-af29-44accfddd6c5" Sep 12 17:12:32.212520 containerd[2128]: 2025-09-12 17:12:31.910 [INFO][7456] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" iface="eth0" netns="/var/run/netns/cni-e10376fe-0620-c264-af29-44accfddd6c5" Sep 12 17:12:32.212520 containerd[2128]: 2025-09-12 17:12:31.926 [INFO][7456] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" after=17.460216ms iface="eth0" netns="/var/run/netns/cni-e10376fe-0620-c264-af29-44accfddd6c5" Sep 12 17:12:32.212520 containerd[2128]: 2025-09-12 17:12:31.928 [INFO][7456] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Sep 12 17:12:32.212520 containerd[2128]: 2025-09-12 17:12:31.931 [INFO][7456] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Sep 12 17:12:32.212520 containerd[2128]: 2025-09-12 17:12:32.057 [INFO][7463] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" HandleID="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:12:32.212520 containerd[2128]: 2025-09-12 17:12:32.060 [INFO][7463] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:12:32.212520 containerd[2128]: 2025-09-12 17:12:32.060 [INFO][7463] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:12:32.212520 containerd[2128]: 2025-09-12 17:12:32.196 [INFO][7463] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" HandleID="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:12:32.212520 containerd[2128]: 2025-09-12 17:12:32.196 [INFO][7463] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" HandleID="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:12:32.212520 containerd[2128]: 2025-09-12 17:12:32.199 [INFO][7463] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:12:32.212520 containerd[2128]: 2025-09-12 17:12:32.205 [INFO][7456] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Sep 12 17:12:32.219631 containerd[2128]: time="2025-09-12T17:12:32.216795771Z" level=info msg="TearDown network for sandbox \"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc\" successfully" Sep 12 17:12:32.219631 containerd[2128]: time="2025-09-12T17:12:32.216870675Z" level=info msg="StopPodSandbox for \"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc\" returns successfully" Sep 12 17:12:32.232449 systemd[1]: run-netns-cni\x2de10376fe\x2d0620\x2dc264\x2daf29\x2d44accfddd6c5.mount: Deactivated successfully. Sep 12 17:12:32.360909 kubelet[3602]: I0912 17:12:32.360832 3602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/831c8b21-3a30-4e09-bfba-cb39dd0935d8-calico-apiserver-certs\") pod \"831c8b21-3a30-4e09-bfba-cb39dd0935d8\" (UID: \"831c8b21-3a30-4e09-bfba-cb39dd0935d8\") " Sep 12 17:12:32.361133 kubelet[3602]: I0912 17:12:32.360927 3602 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwxlz\" (UniqueName: \"kubernetes.io/projected/831c8b21-3a30-4e09-bfba-cb39dd0935d8-kube-api-access-wwxlz\") pod \"831c8b21-3a30-4e09-bfba-cb39dd0935d8\" (UID: \"831c8b21-3a30-4e09-bfba-cb39dd0935d8\") " Sep 12 17:12:32.383727 kubelet[3602]: I0912 17:12:32.382447 3602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/831c8b21-3a30-4e09-bfba-cb39dd0935d8-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "831c8b21-3a30-4e09-bfba-cb39dd0935d8" (UID: "831c8b21-3a30-4e09-bfba-cb39dd0935d8"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:12:32.385722 systemd[1]: var-lib-kubelet-pods-831c8b21\x2d3a30\x2d4e09\x2dbfba\x2dcb39dd0935d8-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 12 17:12:32.391716 kubelet[3602]: I0912 17:12:32.391091 3602 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/831c8b21-3a30-4e09-bfba-cb39dd0935d8-kube-api-access-wwxlz" (OuterVolumeSpecName: "kube-api-access-wwxlz") pod "831c8b21-3a30-4e09-bfba-cb39dd0935d8" (UID: "831c8b21-3a30-4e09-bfba-cb39dd0935d8"). InnerVolumeSpecName "kube-api-access-wwxlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:12:32.461987 kubelet[3602]: I0912 17:12:32.461817 3602 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/831c8b21-3a30-4e09-bfba-cb39dd0935d8-calico-apiserver-certs\") on node \"ip-172-31-22-180\" DevicePath \"\"" Sep 12 17:12:32.461987 kubelet[3602]: I0912 17:12:32.461874 3602 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwxlz\" (UniqueName: \"kubernetes.io/projected/831c8b21-3a30-4e09-bfba-cb39dd0935d8-kube-api-access-wwxlz\") on node \"ip-172-31-22-180\" DevicePath \"\"" Sep 12 17:12:32.601200 systemd[1]: var-lib-kubelet-pods-831c8b21\x2d3a30\x2d4e09\x2dbfba\x2dcb39dd0935d8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwwxlz.mount: Deactivated successfully. 
Sep 12 17:12:34.600448 ntpd[2090]: Deleting interface #11 cali76836b15a77, fe80::ecee:eeff:feee:eeee%10#123, interface stats: received=0, sent=0, dropped=0, active_time=45 secs Sep 12 17:12:34.602140 ntpd[2090]: 12 Sep 17:12:34 ntpd[2090]: Deleting interface #11 cali76836b15a77, fe80::ecee:eeff:feee:eeee%10#123, interface stats: received=0, sent=0, dropped=0, active_time=45 secs Sep 12 17:12:34.931484 kubelet[3602]: I0912 17:12:34.931423 3602 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="831c8b21-3a30-4e09-bfba-cb39dd0935d8" path="/var/lib/kubelet/pods/831c8b21-3a30-4e09-bfba-cb39dd0935d8/volumes" Sep 12 17:12:36.124352 systemd[1]: Started sshd@21-172.31.22.180:22-147.75.109.163:51768.service - OpenSSH per-connection server daemon (147.75.109.163:51768). Sep 12 17:12:36.328571 sshd[7481]: Accepted publickey for core from 147.75.109.163 port 51768 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:36.338648 sshd[7481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:36.371504 systemd-logind[2103]: New session 22 of user core. Sep 12 17:12:36.379308 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:12:36.748744 sshd[7481]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:36.764504 systemd[1]: sshd@21-172.31.22.180:22-147.75.109.163:51768.service: Deactivated successfully. Sep 12 17:12:36.786563 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:12:36.792649 systemd-logind[2103]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:12:36.798074 systemd-logind[2103]: Removed session 22. Sep 12 17:12:41.781442 systemd[1]: Started sshd@22-172.31.22.180:22-147.75.109.163:37972.service - OpenSSH per-connection server daemon (147.75.109.163:37972). Sep 12 17:12:41.965124 sshd[7520]: Accepted publickey for core from 147.75.109.163 port 37972 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:41.968869 sshd[7520]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:41.983827 systemd-logind[2103]: New session 23 of user core. Sep 12 17:12:42.003321 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:12:42.341048 sshd[7520]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:42.358213 systemd[1]: sshd@22-172.31.22.180:22-147.75.109.163:37972.service: Deactivated successfully. Sep 12 17:12:42.370581 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:12:42.380231 systemd-logind[2103]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:12:42.384393 systemd-logind[2103]: Removed session 23. Sep 12 17:12:47.373318 systemd[1]: Started sshd@23-172.31.22.180:22-147.75.109.163:37984.service - OpenSSH per-connection server daemon (147.75.109.163:37984). Sep 12 17:12:47.549571 sshd[7537]: Accepted publickey for core from 147.75.109.163 port 37984 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:47.554536 sshd[7537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:47.570072 systemd-logind[2103]: New session 24 of user core. Sep 12 17:12:47.577345 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:12:47.870036 sshd[7537]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:47.880996 systemd[1]: sshd@23-172.31.22.180:22-147.75.109.163:37984.service: Deactivated successfully. 
Sep 12 17:12:47.894195 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:12:47.898823 systemd-logind[2103]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:12:47.901757 systemd-logind[2103]: Removed session 24. Sep 12 17:12:52.906305 systemd[1]: Started sshd@24-172.31.22.180:22-147.75.109.163:50148.service - OpenSSH per-connection server daemon (147.75.109.163:50148). Sep 12 17:12:53.099043 sshd[7555]: Accepted publickey for core from 147.75.109.163 port 50148 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:53.101780 sshd[7555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:53.126815 systemd-logind[2103]: New session 25 of user core. Sep 12 17:12:53.134268 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 17:12:53.452367 sshd[7555]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:53.459225 systemd-logind[2103]: Session 25 logged out. Waiting for processes to exit. Sep 12 17:12:53.461295 systemd[1]: sshd@24-172.31.22.180:22-147.75.109.163:50148.service: Deactivated successfully. Sep 12 17:12:53.474053 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 17:12:53.480105 systemd-logind[2103]: Removed session 25. Sep 12 17:12:54.070312 kubelet[3602]: I0912 17:12:54.069068 3602 scope.go:117] "RemoveContainer" containerID="1fb8f314ef0b2d4d4cac4024a2af26a8c6e35f6561cf5ab6660b357f5408f540" Sep 12 17:12:54.077613 containerd[2128]: time="2025-09-12T17:12:54.077553971Z" level=info msg="RemoveContainer for \"1fb8f314ef0b2d4d4cac4024a2af26a8c6e35f6561cf5ab6660b357f5408f540\"" Sep 12 17:12:54.086853 containerd[2128]: time="2025-09-12T17:12:54.086784803Z" level=info msg="RemoveContainer for \"1fb8f314ef0b2d4d4cac4024a2af26a8c6e35f6561cf5ab6660b357f5408f540\" returns successfully" Sep 12 17:12:54.089744 containerd[2128]: time="2025-09-12T17:12:54.089572967Z" level=info msg="StopPodSandbox for \"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e\"" Sep 12 17:12:54.264049 containerd[2128]: 2025-09-12 17:12:54.171 [WARNING][7578] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:12:54.264049 containerd[2128]: 2025-09-12 17:12:54.172 [INFO][7578] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Sep 12 17:12:54.264049 containerd[2128]: 2025-09-12 17:12:54.172 [INFO][7578] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" iface="eth0" netns=""
Sep 12 17:12:54.264049 containerd[2128]: 2025-09-12 17:12:54.172 [INFO][7578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Sep 12 17:12:54.264049 containerd[2128]: 2025-09-12 17:12:54.172 [INFO][7578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Sep 12 17:12:54.264049 containerd[2128]: 2025-09-12 17:12:54.233 [INFO][7586] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" HandleID="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:12:54.264049 containerd[2128]: 2025-09-12 17:12:54.233 [INFO][7586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:12:54.264049 containerd[2128]: 2025-09-12 17:12:54.233 [INFO][7586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:12:54.264049 containerd[2128]: 2025-09-12 17:12:54.253 [WARNING][7586] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" HandleID="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:12:54.264049 containerd[2128]: 2025-09-12 17:12:54.253 [INFO][7586] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" HandleID="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:12:54.264049 containerd[2128]: 2025-09-12 17:12:54.258 [INFO][7586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:12:54.264049 containerd[2128]: 2025-09-12 17:12:54.261 [INFO][7578] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e"
Sep 12 17:12:54.267315 containerd[2128]: time="2025-09-12T17:12:54.264089640Z" level=info msg="TearDown network for sandbox \"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e\" successfully" Sep 12 17:12:54.267315 containerd[2128]: time="2025-09-12T17:12:54.264130992Z" level=info msg="StopPodSandbox for \"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e\" returns successfully" Sep 12 17:12:54.267315 containerd[2128]: time="2025-09-12T17:12:54.267058956Z" level=info msg="RemovePodSandbox for \"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e\"" Sep 12 17:12:54.267315 containerd[2128]: time="2025-09-12T17:12:54.267134244Z" level=info msg="Forcibly stopping sandbox \"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e\"" Sep 12 17:12:54.450338 containerd[2128]: 2025-09-12 17:12:54.358 [WARNING][7602] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:12:54.450338 containerd[2128]: 2025-09-12 17:12:54.358 [INFO][7602] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Sep 12 17:12:54.450338 containerd[2128]: 2025-09-12 17:12:54.358 [INFO][7602] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" iface="eth0" netns="" Sep 12 17:12:54.450338 containerd[2128]: 2025-09-12 17:12:54.358 [INFO][7602] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Sep 12 17:12:54.450338 containerd[2128]: 2025-09-12 17:12:54.358 [INFO][7602] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Sep 12 17:12:54.450338 containerd[2128]: 2025-09-12 17:12:54.419 [INFO][7610] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" HandleID="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:12:54.450338 containerd[2128]: 2025-09-12 17:12:54.420 [INFO][7610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:12:54.450338 containerd[2128]: 2025-09-12 17:12:54.420 [INFO][7610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:12:54.450338 containerd[2128]: 2025-09-12 17:12:54.440 [WARNING][7610] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" HandleID="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0"
Sep 12 17:12:54.450338 containerd[2128]: 2025-09-12 17:12:54.440 [INFO][7610] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" HandleID="k8s-pod-network.9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--v8tpq-eth0" Sep 12 17:12:54.450338 containerd[2128]: 2025-09-12 17:12:54.443 [INFO][7610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:12:54.450338 containerd[2128]: 2025-09-12 17:12:54.446 [INFO][7602] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e" Sep 12 17:12:54.451097 containerd[2128]: time="2025-09-12T17:12:54.450394009Z" level=info msg="TearDown network for sandbox \"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e\" successfully" Sep 12 17:12:54.460889 containerd[2128]: time="2025-09-12T17:12:54.460794841Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:12:54.461168 containerd[2128]: time="2025-09-12T17:12:54.460909489Z" level=info msg="RemovePodSandbox \"9a293d451d88ce0d173a5046546079e0301b27ba5c73493b017cc34fc61eac4e\" returns successfully" Sep 12 17:12:54.462754 containerd[2128]: time="2025-09-12T17:12:54.461702449Z" level=info msg="StopPodSandbox for \"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc\"" Sep 12 17:12:54.631793 containerd[2128]: 2025-09-12 17:12:54.561 [WARNING][7624] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:12:54.631793 containerd[2128]: 2025-09-12 17:12:54.562 [INFO][7624] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Sep 12 17:12:54.631793 containerd[2128]: 2025-09-12 17:12:54.562 [INFO][7624] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" iface="eth0" netns=""
Sep 12 17:12:54.631793 containerd[2128]: 2025-09-12 17:12:54.562 [INFO][7624] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Sep 12 17:12:54.631793 containerd[2128]: 2025-09-12 17:12:54.562 [INFO][7624] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Sep 12 17:12:54.631793 containerd[2128]: 2025-09-12 17:12:54.605 [INFO][7632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" HandleID="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:12:54.631793 containerd[2128]: 2025-09-12 17:12:54.606 [INFO][7632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:12:54.631793 containerd[2128]: 2025-09-12 17:12:54.606 [INFO][7632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:12:54.631793 containerd[2128]: 2025-09-12 17:12:54.620 [WARNING][7632] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" HandleID="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:12:54.631793 containerd[2128]: 2025-09-12 17:12:54.620 [INFO][7632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" HandleID="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:12:54.631793 containerd[2128]: 2025-09-12 17:12:54.623 [INFO][7632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:12:54.631793 containerd[2128]: 2025-09-12 17:12:54.626 [INFO][7624] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc"
Sep 12 17:12:54.632494 containerd[2128]: time="2025-09-12T17:12:54.631898318Z" level=info msg="TearDown network for sandbox \"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc\" successfully" Sep 12 17:12:54.632494 containerd[2128]: time="2025-09-12T17:12:54.631940666Z" level=info msg="StopPodSandbox for \"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc\" returns successfully" Sep 12 17:12:54.634783 containerd[2128]: time="2025-09-12T17:12:54.634335662Z" level=info msg="RemovePodSandbox for \"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc\"" Sep 12 17:12:54.634783 containerd[2128]: time="2025-09-12T17:12:54.634392266Z" level=info msg="Forcibly stopping sandbox \"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc\"" Sep 12 17:12:54.905737 containerd[2128]: 2025-09-12 17:12:54.736 [WARNING][7646] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" WorkloadEndpoint="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:12:54.905737 containerd[2128]: 2025-09-12 17:12:54.736 [INFO][7646] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Sep 12 17:12:54.905737 containerd[2128]: 2025-09-12 17:12:54.736 [INFO][7646] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" iface="eth0" netns="" Sep 12 17:12:54.905737 containerd[2128]: 2025-09-12 17:12:54.736 [INFO][7646] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Sep 12 17:12:54.905737 containerd[2128]: 2025-09-12 17:12:54.736 [INFO][7646] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Sep 12 17:12:54.905737 containerd[2128]: 2025-09-12 17:12:54.852 [INFO][7653] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" HandleID="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:12:54.905737 containerd[2128]: 2025-09-12 17:12:54.852 [INFO][7653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:12:54.905737 containerd[2128]: 2025-09-12 17:12:54.852 [INFO][7653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:12:54.905737 containerd[2128]: 2025-09-12 17:12:54.880 [WARNING][7653] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" HandleID="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0"
Sep 12 17:12:54.905737 containerd[2128]: 2025-09-12 17:12:54.880 [INFO][7653] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" HandleID="k8s-pod-network.d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Workload="ip--172--31--22--180-k8s-calico--apiserver--7964ddc67d--2fn9n-eth0" Sep 12 17:12:54.905737 containerd[2128]: 2025-09-12 17:12:54.884 [INFO][7653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:12:54.905737 containerd[2128]: 2025-09-12 17:12:54.897 [INFO][7646] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc" Sep 12 17:12:54.905737 containerd[2128]: time="2025-09-12T17:12:54.903964539Z" level=info msg="TearDown network for sandbox \"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc\" successfully" Sep 12 17:12:54.978962 containerd[2128]: time="2025-09-12T17:12:54.978886888Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:12:54.979151 containerd[2128]: time="2025-09-12T17:12:54.979006132Z" level=info msg="RemovePodSandbox \"d7d38462fb3af71c1d9fb78626f4ad7c874e4c7c9c13abb479b015c4cc9148fc\" returns successfully" Sep 12 17:12:58.487291 systemd[1]: Started sshd@25-172.31.22.180:22-147.75.109.163:50158.service - OpenSSH per-connection server daemon (147.75.109.163:50158). Sep 12 17:12:58.686747 sshd[7700]: Accepted publickey for core from 147.75.109.163 port 50158 ssh2: RSA SHA256:MtueCMCElgMFpvQGHABlOh1LdmyEE9d8eacHhUBhK34 Sep 12 17:12:58.688734 sshd[7700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:12:58.702785 systemd-logind[2103]: New session 26 of user core. Sep 12 17:12:58.714238 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 12 17:12:59.029994 sshd[7700]: pam_unix(sshd:session): session closed for user core Sep 12 17:12:59.046105 systemd[1]: sshd@25-172.31.22.180:22-147.75.109.163:50158.service: Deactivated successfully. Sep 12 17:12:59.057316 systemd[1]: session-26.scope: Deactivated successfully. Sep 12 17:12:59.064666 systemd-logind[2103]: Session 26 logged out. Waiting for processes to exit. Sep 12 17:12:59.069162 systemd-logind[2103]: Removed session 26. Sep 12 17:13:13.395011 containerd[2128]: time="2025-09-12T17:13:13.394646851Z" level=info msg="shim disconnected" id=b42318ac9394e96fb24c28fa41084c68fc2649d09fe8caaf53106adf2d250917 namespace=k8s.io Sep 12 17:13:13.395011 containerd[2128]: time="2025-09-12T17:13:13.394744891Z" level=warning msg="cleaning up after shim disconnected" id=b42318ac9394e96fb24c28fa41084c68fc2649d09fe8caaf53106adf2d250917 namespace=k8s.io Sep 12 17:13:13.395011 containerd[2128]: time="2025-09-12T17:13:13.394766275Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:13:13.398325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b42318ac9394e96fb24c28fa41084c68fc2649d09fe8caaf53106adf2d250917-rootfs.mount: Deactivated successfully.
Sep 12 17:13:13.691847 containerd[2128]: time="2025-09-12T17:13:13.691404165Z" level=info msg="shim disconnected" id=9f010a57e0700a2b1f4621abd6e107e7ae01bf58ad3e0e519d4dbb70dfa45306 namespace=k8s.io Sep 12 17:13:13.691847 containerd[2128]: time="2025-09-12T17:13:13.691483569Z" level=warning msg="cleaning up after shim disconnected" id=9f010a57e0700a2b1f4621abd6e107e7ae01bf58ad3e0e519d4dbb70dfa45306 namespace=k8s.io Sep 12 17:13:13.691847 containerd[2128]: time="2025-09-12T17:13:13.691503657Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:13:13.697099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f010a57e0700a2b1f4621abd6e107e7ae01bf58ad3e0e519d4dbb70dfa45306-rootfs.mount: Deactivated successfully. Sep 12 17:13:13.718759 containerd[2128]: time="2025-09-12T17:13:13.718578021Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:13:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:13:14.434847 kubelet[3602]: I0912 17:13:14.434110 3602 scope.go:117] "RemoveContainer" containerID="b42318ac9394e96fb24c28fa41084c68fc2649d09fe8caaf53106adf2d250917" Sep 12 17:13:14.437746 containerd[2128]: time="2025-09-12T17:13:14.437655596Z" level=info msg="CreateContainer within sandbox \"618e64d7eb289dea1590b81cd2428bc60157e6150e73cf2c42ec44afda74fcbb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 12 17:13:14.440732 kubelet[3602]: I0912 17:13:14.440227 3602 scope.go:117] "RemoveContainer" containerID="9f010a57e0700a2b1f4621abd6e107e7ae01bf58ad3e0e519d4dbb70dfa45306" Sep 12 17:13:14.444197 containerd[2128]: time="2025-09-12T17:13:14.444140145Z" level=info msg="CreateContainer within sandbox \"d81358e1c54b74669165163db342c49a197c944e056bf6605a844b9a3ebd085a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 12 17:13:14.467678 containerd[2128]: time="2025-09-12T17:13:14.465677565Z" level=info msg="CreateContainer within sandbox \"618e64d7eb289dea1590b81cd2428bc60157e6150e73cf2c42ec44afda74fcbb\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"29b7f84fe13f970ebf55eb43cda63db3a6ccaf6f72b6f0185122a15469bea5ec\"" Sep 12 17:13:14.471122 containerd[2128]: time="2025-09-12T17:13:14.469212285Z" level=info msg="StartContainer for \"29b7f84fe13f970ebf55eb43cda63db3a6ccaf6f72b6f0185122a15469bea5ec\"" Sep 12 17:13:14.503180 containerd[2128]: time="2025-09-12T17:13:14.503120517Z" level=info msg="CreateContainer within sandbox \"d81358e1c54b74669165163db342c49a197c944e056bf6605a844b9a3ebd085a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2b51e1444f7025ce9d0476898e8f2b85b25340366b13e0d42399c53c4245c5a4\"" Sep 12 17:13:14.505751 containerd[2128]: time="2025-09-12T17:13:14.504167025Z" level=info msg="StartContainer for \"2b51e1444f7025ce9d0476898e8f2b85b25340366b13e0d42399c53c4245c5a4\"" Sep 12 17:13:14.621717 containerd[2128]: time="2025-09-12T17:13:14.620606913Z" level=info msg="StartContainer for \"29b7f84fe13f970ebf55eb43cda63db3a6ccaf6f72b6f0185122a15469bea5ec\" returns successfully" Sep 12 17:13:14.687130 containerd[2128]: time="2025-09-12T17:13:14.686990446Z" level=info msg="StartContainer for \"2b51e1444f7025ce9d0476898e8f2b85b25340366b13e0d42399c53c4245c5a4\" returns successfully" Sep 12 17:13:17.894723 containerd[2128]: time="2025-09-12T17:13:17.893955602Z" level=info msg="shim disconnected" id=c3fbbebe8c118752cdabffcc9d8776fc624a278b82b5311bf5ca0623c956a0a8 namespace=k8s.io
Sep 12 17:13:17.894723 containerd[2128]: time="2025-09-12T17:13:17.894030806Z" level=warning msg="cleaning up after shim disconnected" id=c3fbbebe8c118752cdabffcc9d8776fc624a278b82b5311bf5ca0623c956a0a8 namespace=k8s.io Sep 12 17:13:17.894723 containerd[2128]: time="2025-09-12T17:13:17.894051026Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:13:17.901930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3fbbebe8c118752cdabffcc9d8776fc624a278b82b5311bf5ca0623c956a0a8-rootfs.mount: Deactivated successfully. Sep 12 17:13:17.921303 containerd[2128]: time="2025-09-12T17:13:17.921244010Z" level=warning msg="cleanup warnings time=\"2025-09-12T17:13:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 17:13:18.473867 kubelet[3602]: I0912 17:13:18.473793 3602 scope.go:117] "RemoveContainer" containerID="c3fbbebe8c118752cdabffcc9d8776fc624a278b82b5311bf5ca0623c956a0a8" Sep 12 17:13:18.477348 containerd[2128]: time="2025-09-12T17:13:18.477291181Z" level=info msg="CreateContainer within sandbox \"27b64a5f64d05130aaf382a23536ff93e1b2a3fe929a94ae68f1af9b17cedf2b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 12 17:13:18.508639 containerd[2128]: time="2025-09-12T17:13:18.508562137Z" level=info msg="CreateContainer within sandbox \"27b64a5f64d05130aaf382a23536ff93e1b2a3fe929a94ae68f1af9b17cedf2b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"41b9d88799e5b936fc70bf69668b752db3f8f35e8758c984a0d5007534ebf999\"" Sep 12 17:13:18.509828 containerd[2128]: time="2025-09-12T17:13:18.509777893Z" level=info msg="StartContainer for \"41b9d88799e5b936fc70bf69668b752db3f8f35e8758c984a0d5007534ebf999\"" Sep 12 17:13:18.645386 containerd[2128]: time="2025-09-12T17:13:18.645287893Z" level=info msg="StartContainer for \"41b9d88799e5b936fc70bf69668b752db3f8f35e8758c984a0d5007534ebf999\" returns successfully" Sep 12 17:13:21.946972 kubelet[3602]: E0912 17:13:21.946908 3602 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-22-180)" Sep 12 17:13:26.176975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29b7f84fe13f970ebf55eb43cda63db3a6ccaf6f72b6f0185122a15469bea5ec-rootfs.mount: Deactivated successfully.
Sep 12 17:13:26.187489 containerd[2128]: time="2025-09-12T17:13:26.187399279Z" level=info msg="shim disconnected" id=29b7f84fe13f970ebf55eb43cda63db3a6ccaf6f72b6f0185122a15469bea5ec namespace=k8s.io Sep 12 17:13:26.188203 containerd[2128]: time="2025-09-12T17:13:26.187498447Z" level=warning msg="cleaning up after shim disconnected" id=29b7f84fe13f970ebf55eb43cda63db3a6ccaf6f72b6f0185122a15469bea5ec namespace=k8s.io Sep 12 17:13:26.188203 containerd[2128]: time="2025-09-12T17:13:26.187521331Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:13:26.502058 kubelet[3602]: I0912 17:13:26.501904 3602 scope.go:117] "RemoveContainer" containerID="b42318ac9394e96fb24c28fa41084c68fc2649d09fe8caaf53106adf2d250917" Sep 12 17:13:26.502680 kubelet[3602]: I0912 17:13:26.502451 3602 scope.go:117] "RemoveContainer" containerID="29b7f84fe13f970ebf55eb43cda63db3a6ccaf6f72b6f0185122a15469bea5ec" Sep 12 17:13:26.502680 kubelet[3602]: E0912 17:13:26.502648 3602 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-58fc44c59b-shtzw_tigera-operator(4b05dfc3-8f57-478c-a8fa-ff490d264ca9)\"" pod="tigera-operator/tigera-operator-58fc44c59b-shtzw" podUID="4b05dfc3-8f57-478c-a8fa-ff490d264ca9" Sep 12 17:13:26.505732 containerd[2128]: time="2025-09-12T17:13:26.505423328Z" level=info msg="RemoveContainer for \"b42318ac9394e96fb24c28fa41084c68fc2649d09fe8caaf53106adf2d250917\"" Sep 12 17:13:26.512572 containerd[2128]: time="2025-09-12T17:13:26.512522324Z" level=info msg="RemoveContainer for \"b42318ac9394e96fb24c28fa41084c68fc2649d09fe8caaf53106adf2d250917\" returns successfully" Sep 12 17:13:31.947882 kubelet[3602]: E0912 17:13:31.947768 3602 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.180:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-180?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"