Sep 12 23:52:56.235214 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 12 23:52:56.235260 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025
Sep 12 23:52:56.235285 kernel: KASLR disabled due to lack of seed
Sep 12 23:52:56.235302 kernel: efi: EFI v2.7 by EDK II
Sep 12 23:52:56.235318 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18
Sep 12 23:52:56.235334 kernel: ACPI: Early table checksum verification disabled
Sep 12 23:52:56.235352 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 12 23:52:56.235368 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 12 23:52:56.235383 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 12 23:52:56.235399 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 12 23:52:56.235420 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 12 23:52:56.235436 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 12 23:52:56.235452 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 12 23:52:56.235467 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 12 23:52:56.235486 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 12 23:52:56.235507 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 12 23:52:56.235525 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 12 23:52:56.235541 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 12 23:52:56.235558 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 12 23:52:56.235574 kernel: printk: bootconsole [uart0] enabled
Sep 12 23:52:56.235590 kernel: NUMA: Failed to initialise from firmware
Sep 12 23:52:56.235607 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 12 23:52:56.235624 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 12 23:52:56.235664 kernel: Zone ranges:
Sep 12 23:52:56.235682 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Sep 12 23:52:56.235699 kernel:   DMA32    empty
Sep 12 23:52:56.235721 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 12 23:52:56.235739 kernel: Movable zone start for each node
Sep 12 23:52:56.235756 kernel: Early memory node ranges
Sep 12 23:52:56.235773 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 12 23:52:56.235790 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 12 23:52:56.235806 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Sep 12 23:52:56.235822 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 12 23:52:56.235839 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 12 23:52:56.235855 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 12 23:52:56.235872 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 12 23:52:56.235888 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 12 23:52:56.235905 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 12 23:52:56.235926 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 12 23:52:56.235943 kernel: psci: probing for conduit method from ACPI.
Sep 12 23:52:56.235967 kernel: psci: PSCIv1.0 detected in firmware.
Sep 12 23:52:56.235985 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 23:52:56.236003 kernel: psci: Trusted OS migration not required
Sep 12 23:52:56.236024 kernel: psci: SMC Calling Convention v1.1
Sep 12 23:52:56.236042 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 12 23:52:56.236060 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 12 23:52:56.236077 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 12 23:52:56.236095 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 12 23:52:56.236113 kernel: Detected PIPT I-cache on CPU0
Sep 12 23:52:56.236130 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 23:52:56.236148 kernel: CPU features: detected: Spectre-v2
Sep 12 23:52:56.236165 kernel: CPU features: detected: Spectre-v3a
Sep 12 23:52:56.236183 kernel: CPU features: detected: Spectre-BHB
Sep 12 23:52:56.236200 kernel: CPU features: detected: ARM erratum 1742098
Sep 12 23:52:56.236222 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 12 23:52:56.236240 kernel: alternatives: applying boot alternatives
Sep 12 23:52:56.236259 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 12 23:52:56.236278 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 23:52:56.236296 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 23:52:56.236314 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 23:52:56.236331 kernel: Fallback order for Node 0: 0
Sep 12 23:52:56.236349 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Sep 12 23:52:56.236366 kernel: Policy zone: Normal
Sep 12 23:52:56.236383 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 23:52:56.236400 kernel: software IO TLB: area num 2.
Sep 12 23:52:56.236423 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 12 23:52:56.236442 kernel: Memory: 3820024K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 210440K reserved, 0K cma-reserved)
Sep 12 23:52:56.236460 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 23:52:56.236477 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 23:52:56.236496 kernel: rcu: 	RCU event tracing is enabled.
Sep 12 23:52:56.236514 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 23:52:56.236532 kernel: 	Trampoline variant of Tasks RCU enabled.
Sep 12 23:52:56.236550 kernel: 	Tracing variant of Tasks RCU enabled.
Sep 12 23:52:56.236567 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 23:52:56.236585 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 23:52:56.236602 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 23:52:56.236624 kernel: GICv3: 96 SPIs implemented
Sep 12 23:52:56.239188 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 23:52:56.239209 kernel: Root IRQ handler: gic_handle_irq
Sep 12 23:52:56.239227 kernel: GICv3: GICv3 features: 16 PPIs
Sep 12 23:52:56.239244 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 12 23:52:56.239262 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 12 23:52:56.239279 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 23:52:56.239297 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Sep 12 23:52:56.239315 kernel: GICv3: using LPI property table @0x00000004000d0000
Sep 12 23:52:56.239332 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 12 23:52:56.239350 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Sep 12 23:52:56.239368 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 23:52:56.239395 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 12 23:52:56.239413 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 12 23:52:56.239431 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 12 23:52:56.239448 kernel: Console: colour dummy device 80x25
Sep 12 23:52:56.239467 kernel: printk: console [tty1] enabled
Sep 12 23:52:56.239484 kernel: ACPI: Core revision 20230628
Sep 12 23:52:56.239503 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 12 23:52:56.239521 kernel: pid_max: default: 32768 minimum: 301
Sep 12 23:52:56.239539 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 23:52:56.239561 kernel: landlock: Up and running.
Sep 12 23:52:56.239580 kernel: SELinux:  Initializing.
Sep 12 23:52:56.239598 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:52:56.239616 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:52:56.239653 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 23:52:56.239675 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 23:52:56.239719 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 23:52:56.239742 kernel: rcu: 	Max phase no-delay instances is 400.
Sep 12 23:52:56.239761 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 12 23:52:56.239787 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 12 23:52:56.239805 kernel: Remapping and enabling EFI services.
Sep 12 23:52:56.239824 kernel: smp: Bringing up secondary CPUs ...
Sep 12 23:52:56.239843 kernel: Detected PIPT I-cache on CPU1
Sep 12 23:52:56.239863 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 12 23:52:56.239882 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Sep 12 23:52:56.239900 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 12 23:52:56.239919 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 23:52:56.239938 kernel: SMP: Total of 2 processors activated.
Sep 12 23:52:56.239956 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 23:52:56.239982 kernel: CPU features: detected: 32-bit EL1 Support
Sep 12 23:52:56.240002 kernel: CPU features: detected: CRC32 instructions
Sep 12 23:52:56.240033 kernel: CPU: All CPU(s) started at EL1
Sep 12 23:52:56.240057 kernel: alternatives: applying system-wide alternatives
Sep 12 23:52:56.240078 kernel: devtmpfs: initialized
Sep 12 23:52:56.240100 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 23:52:56.240120 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 23:52:56.240139 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 23:52:56.240158 kernel: SMBIOS 3.0.0 present.
Sep 12 23:52:56.240182 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 12 23:52:56.240202 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 23:52:56.240221 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 23:52:56.240240 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 23:52:56.240259 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 23:52:56.240278 kernel: audit: initializing netlink subsys (disabled)
Sep 12 23:52:56.240298 kernel: audit: type=2000 audit(0.292:1): state=initialized audit_enabled=0 res=1
Sep 12 23:52:56.240321 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 23:52:56.240340 kernel: cpuidle: using governor menu
Sep 12 23:52:56.240360 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 23:52:56.240380 kernel: ASID allocator initialised with 65536 entries
Sep 12 23:52:56.240399 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 23:52:56.240418 kernel: Serial: AMBA PL011 UART driver
Sep 12 23:52:56.240438 kernel: Modules: 17472 pages in range for non-PLT usage
Sep 12 23:52:56.240459 kernel: Modules: 508992 pages in range for PLT usage
Sep 12 23:52:56.240480 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 23:52:56.240504 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 23:52:56.240525 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 23:52:56.240545 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 23:52:56.240565 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 23:52:56.240584 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 23:52:56.240603 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 23:52:56.240622 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 23:52:56.241754 kernel: ACPI: Added _OSI(Module Device)
Sep 12 23:52:56.241779 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 23:52:56.241808 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 23:52:56.241827 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 23:52:56.241846 kernel: ACPI: Interpreter enabled
Sep 12 23:52:56.241865 kernel: ACPI: Using GIC for interrupt routing
Sep 12 23:52:56.241883 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 23:52:56.241902 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 12 23:52:56.242230 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 23:52:56.242445 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 23:52:56.242685 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 23:52:56.242895 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 12 23:52:56.243109 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 12 23:52:56.243135 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io  0x0000-0xffff window]
Sep 12 23:52:56.243155 kernel: acpiphp: Slot [1] registered
Sep 12 23:52:56.243175 kernel: acpiphp: Slot [2] registered
Sep 12 23:52:56.243194 kernel: acpiphp: Slot [3] registered
Sep 12 23:52:56.243212 kernel: acpiphp: Slot [4] registered
Sep 12 23:52:56.243239 kernel: acpiphp: Slot [5] registered
Sep 12 23:52:56.243258 kernel: acpiphp: Slot [6] registered
Sep 12 23:52:56.243276 kernel: acpiphp: Slot [7] registered
Sep 12 23:52:56.243295 kernel: acpiphp: Slot [8] registered
Sep 12 23:52:56.243313 kernel: acpiphp: Slot [9] registered
Sep 12 23:52:56.243332 kernel: acpiphp: Slot [10] registered
Sep 12 23:52:56.243350 kernel: acpiphp: Slot [11] registered
Sep 12 23:52:56.243369 kernel: acpiphp: Slot [12] registered
Sep 12 23:52:56.243388 kernel: acpiphp: Slot [13] registered
Sep 12 23:52:56.243406 kernel: acpiphp: Slot [14] registered
Sep 12 23:52:56.243430 kernel: acpiphp: Slot [15] registered
Sep 12 23:52:56.243449 kernel: acpiphp: Slot [16] registered
Sep 12 23:52:56.243469 kernel: acpiphp: Slot [17] registered
Sep 12 23:52:56.243487 kernel: acpiphp: Slot [18] registered
Sep 12 23:52:56.243506 kernel: acpiphp: Slot [19] registered
Sep 12 23:52:56.243524 kernel: acpiphp: Slot [20] registered
Sep 12 23:52:56.243543 kernel: acpiphp: Slot [21] registered
Sep 12 23:52:56.243561 kernel: acpiphp: Slot [22] registered
Sep 12 23:52:56.243580 kernel: acpiphp: Slot [23] registered
Sep 12 23:52:56.243603 kernel: acpiphp: Slot [24] registered
Sep 12 23:52:56.243622 kernel: acpiphp: Slot [25] registered
Sep 12 23:52:56.246387 kernel: acpiphp: Slot [26] registered
Sep 12 23:52:56.246467 kernel: acpiphp: Slot [27] registered
Sep 12 23:52:56.246888 kernel: acpiphp: Slot [28] registered
Sep 12 23:52:56.247064 kernel: acpiphp: Slot [29] registered
Sep 12 23:52:56.247371 kernel: acpiphp: Slot [30] registered
Sep 12 23:52:56.247548 kernel: acpiphp: Slot [31] registered
Sep 12 23:52:56.250486 kernel: PCI host bridge to bus 0000:00
Sep 12 23:52:56.250780 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 12 23:52:56.250989 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Sep 12 23:52:56.251181 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 12 23:52:56.251370 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 12 23:52:56.251651 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 12 23:52:56.251895 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 12 23:52:56.252118 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 12 23:52:56.252352 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 12 23:52:56.252565 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 12 23:52:56.253500 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 23:52:56.253780 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 12 23:52:56.254005 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 12 23:52:56.254803 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 12 23:52:56.255065 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 12 23:52:56.255278 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 12 23:52:56.255495 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 12 23:52:56.255773 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 12 23:52:56.255987 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 12 23:52:56.256192 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 12 23:52:56.256410 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 12 23:52:56.256606 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 12 23:52:56.256931 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Sep 12 23:52:56.257189 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 12 23:52:56.257218 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 23:52:56.257238 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 23:52:56.257258 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 23:52:56.257277 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 23:52:56.257296 kernel: iommu: Default domain type: Translated
Sep 12 23:52:56.257315 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 23:52:56.257344 kernel: efivars: Registered efivars operations
Sep 12 23:52:56.257362 kernel: vgaarb: loaded
Sep 12 23:52:56.257382 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 23:52:56.257401 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 23:52:56.257420 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 23:52:56.257438 kernel: pnp: PnP ACPI init
Sep 12 23:52:56.257766 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 12 23:52:56.257796 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 23:52:56.257822 kernel: NET: Registered PF_INET protocol family
Sep 12 23:52:56.257842 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 23:52:56.257861 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 23:52:56.257881 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 23:52:56.257899 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 23:52:56.257918 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 23:52:56.257937 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 23:52:56.257956 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:52:56.257974 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:52:56.257998 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 23:52:56.258017 kernel: PCI: CLS 0 bytes, default 64
Sep 12 23:52:56.258035 kernel: kvm [1]: HYP mode not available
Sep 12 23:52:56.258054 kernel: Initialise system trusted keyrings
Sep 12 23:52:56.258073 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 23:52:56.258092 kernel: Key type asymmetric registered
Sep 12 23:52:56.258110 kernel: Asymmetric key parser 'x509' registered
Sep 12 23:52:56.258129 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 12 23:52:56.258149 kernel: io scheduler mq-deadline registered
Sep 12 23:52:56.258173 kernel: io scheduler kyber registered
Sep 12 23:52:56.258192 kernel: io scheduler bfq registered
Sep 12 23:52:56.258420 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 12 23:52:56.258449 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 23:52:56.258468 kernel: ACPI: button: Power Button [PWRB]
Sep 12 23:52:56.258488 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 12 23:52:56.258507 kernel: ACPI: button: Sleep Button [SLPB]
Sep 12 23:52:56.258525 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 23:52:56.258551 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 12 23:52:56.258792 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 12 23:52:56.258820 kernel: printk: console [ttyS0] disabled
Sep 12 23:52:56.258840 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 12 23:52:56.258859 kernel: printk: console [ttyS0] enabled
Sep 12 23:52:56.258878 kernel: printk: bootconsole [uart0] disabled
Sep 12 23:52:56.258897 kernel: thunder_xcv, ver 1.0
Sep 12 23:52:56.258915 kernel: thunder_bgx, ver 1.0
Sep 12 23:52:56.258935 kernel: nicpf, ver 1.0
Sep 12 23:52:56.258959 kernel: nicvf, ver 1.0
Sep 12 23:52:56.259174 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 23:52:56.259368 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T23:52:55 UTC (1757721175)
Sep 12 23:52:56.259394 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 23:52:56.259414 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 12 23:52:56.259433 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 12 23:52:56.259452 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 23:52:56.259472 kernel: NET: Registered PF_INET6 protocol family
Sep 12 23:52:56.259496 kernel: Segment Routing with IPv6
Sep 12 23:52:56.259516 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 23:52:56.259534 kernel: NET: Registered PF_PACKET protocol family
Sep 12 23:52:56.259553 kernel: Key type dns_resolver registered
Sep 12 23:52:56.259571 kernel: registered taskstats version 1
Sep 12 23:52:56.259590 kernel: Loading compiled-in X.509 certificates
Sep 12 23:52:56.259609 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e'
Sep 12 23:52:56.259647 kernel: Key type .fscrypt registered
Sep 12 23:52:56.259671 kernel: Key type fscrypt-provisioning registered
Sep 12 23:52:56.259697 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 23:52:56.259716 kernel: ima: Allocated hash algorithm: sha1
Sep 12 23:52:56.259735 kernel: ima: No architecture policies found
Sep 12 23:52:56.259754 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 23:52:56.259772 kernel: clk: Disabling unused clocks
Sep 12 23:52:56.259792 kernel: Freeing unused kernel memory: 39488K
Sep 12 23:52:56.259811 kernel: Run /init as init process
Sep 12 23:52:56.259829 kernel:   with arguments:
Sep 12 23:52:56.259848 kernel:     /init
Sep 12 23:52:56.259866 kernel:   with environment:
Sep 12 23:52:56.259889 kernel:     HOME=/
Sep 12 23:52:56.259908 kernel:     TERM=linux
Sep 12 23:52:56.259927 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 23:52:56.259950 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 23:52:56.259974 systemd[1]: Detected virtualization amazon.
Sep 12 23:52:56.259995 systemd[1]: Detected architecture arm64.
Sep 12 23:52:56.260015 systemd[1]: Running in initrd.
Sep 12 23:52:56.260041 systemd[1]: No hostname configured, using default hostname.
Sep 12 23:52:56.260061 systemd[1]: Hostname set to .
Sep 12 23:52:56.260083 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:52:56.260104 systemd[1]: Queued start job for default target initrd.target.
Sep 12 23:52:56.260125 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:52:56.260147 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:52:56.260169 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 23:52:56.260191 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:52:56.260217 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 23:52:56.260238 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 23:52:56.260262 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 23:52:56.260318 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 23:52:56.260341 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:52:56.260362 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:52:56.260383 systemd[1]: Reached target paths.target - Path Units.
Sep 12 23:52:56.260410 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:52:56.260432 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:52:56.260453 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 23:52:56.260475 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:52:56.260496 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:52:56.260518 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 23:52:56.260540 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 23:52:56.260561 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:52:56.260582 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:52:56.260608 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:52:56.262698 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 23:52:56.262749 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 23:52:56.262772 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:52:56.262794 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 23:52:56.262815 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 23:52:56.262835 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:52:56.262856 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:52:56.262886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:52:56.262907 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 23:52:56.262927 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:52:56.262993 systemd-journald[251]: Collecting audit messages is disabled.
Sep 12 23:52:56.263042 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 23:52:56.263066 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 23:52:56.263087 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 23:52:56.263107 systemd-journald[251]: Journal started
Sep 12 23:52:56.263150 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2801a2cffba07d058bb57270bae526) is 8.0M, max 75.3M, 67.3M free.
Sep 12 23:52:56.220611 systemd-modules-load[252]: Inserted module 'overlay'
Sep 12 23:52:56.271502 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:52:56.279037 kernel: Bridge firewalling registered
Sep 12 23:52:56.275502 systemd-modules-load[252]: Inserted module 'br_netfilter'
Sep 12 23:52:56.284002 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:52:56.288583 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:52:56.297783 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:52:56.303602 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:52:56.326919 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:52:56.350831 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:52:56.354013 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:52:56.355765 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:52:56.401221 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:52:56.418014 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 23:52:56.421401 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:52:56.426588 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:52:56.449047 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 23:52:56.469216 dracut-cmdline[286]: dracut-dracut-053
Sep 12 23:52:56.481183 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9
Sep 12 23:52:56.529049 systemd-resolved[289]: Positive Trust Anchors:
Sep 12 23:52:56.529087 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 23:52:56.529152 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 23:52:56.638670 kernel: SCSI subsystem initialized
Sep 12 23:52:56.646671 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 23:52:56.660685 kernel: iscsi: registered transport (tcp)
Sep 12 23:52:56.681672 kernel: iscsi: registered transport (qla4xxx)
Sep 12 23:52:56.681745 kernel: QLogic iSCSI HBA Driver
Sep 12 23:52:56.757656 kernel: random: crng init done
Sep 12 23:52:56.757960 systemd-resolved[289]: Defaulting to hostname 'linux'.
Sep 12 23:52:56.762878 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 23:52:56.766408 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:52:56.793713 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:52:56.807919 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 23:52:56.836952 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 23:52:56.837029 kernel: device-mapper: uevent: version 1.0.3
Sep 12 23:52:56.837057 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 23:52:56.904680 kernel: raid6: neonx8   gen()  6755 MB/s
Sep 12 23:52:56.921672 kernel: raid6: neonx4   gen()  6581 MB/s
Sep 12 23:52:56.938681 kernel: raid6: neonx2   gen()  5462 MB/s
Sep 12 23:52:56.955670 kernel: raid6: neonx1   gen()  3959 MB/s
Sep 12 23:52:56.972686 kernel: raid6: int64x8  gen()  3826 MB/s
Sep 12 23:52:56.989678 kernel: raid6: int64x4  gen()  3723 MB/s
Sep 12 23:52:57.006675 kernel: raid6: int64x2  gen()  3607 MB/s
Sep 12 23:52:57.024611 kernel: raid6: int64x1  gen()  2758 MB/s
Sep 12 23:52:57.024687 kernel: raid6: using algorithm neonx8 gen() 6755 MB/s
Sep 12 23:52:57.042678 kernel: raid6: .... xor() 4788 MB/s, rmw enabled
Sep 12 23:52:57.042745 kernel: raid6: using neon recovery algorithm
Sep 12 23:52:57.050672 kernel: xor: measuring software checksum speed
Sep 12 23:52:57.052842 kernel:    8regs           : 10239 MB/sec
Sep 12 23:52:57.052880 kernel:    32regs          : 11918 MB/sec
Sep 12 23:52:57.054086 kernel:    arm64_neon      :  9568 MB/sec
Sep 12 23:52:57.054118 kernel: xor: using function: 32regs (11918 MB/sec)
Sep 12 23:52:57.138676 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 23:52:57.158671 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:52:57.171001 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:52:57.211042 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Sep 12 23:52:57.219205 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:52:57.233999 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 23:52:57.270890 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Sep 12 23:52:57.333306 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:52:57.348136 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:52:57.466080 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:52:57.480948 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 23:52:57.536388 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:52:57.545830 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:52:57.554237 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:52:57.560368 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:52:57.573085 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 23:52:57.616687 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:52:57.676379 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 23:52:57.676669 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:52:57.681123 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:52:57.684348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:52:57.684655 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:52:57.688880 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:52:57.720141 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:52:57.734435 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 23:52:57.734511 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 12 23:52:57.738079 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 12 23:52:57.738509 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 12 23:52:57.747813 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:81:33:ad:a4:6f
Sep 12 23:52:57.748153 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 12 23:52:57.748183 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 12 23:52:57.758083 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 12 23:52:57.763688 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 23:52:57.763753 kernel: GPT:9289727 != 16777215
Sep 12 23:52:57.763779 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 23:52:57.765130 kernel: GPT:9289727 != 16777215
Sep 12 23:52:57.767614 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 23:52:57.767724 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 23:52:57.770408 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:52:57.772106 (udev-worker)[518]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 23:52:57.791035 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:52:57.835138 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:52:57.906676 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (524)
Sep 12 23:52:57.919206 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (521)
Sep 12 23:52:58.017958 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 12 23:52:58.056420 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 12 23:52:58.084153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 23:52:58.095714 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 12 23:52:58.108518 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 12 23:52:58.123029 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 23:52:58.138924 disk-uuid[664]: Primary Header is updated.
Sep 12 23:52:58.138924 disk-uuid[664]: Secondary Entries is updated.
Sep 12 23:52:58.138924 disk-uuid[664]: Secondary Header is updated.
Sep 12 23:52:58.152655 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 23:52:58.162694 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 23:52:58.170704 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 23:52:59.172118 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 12 23:52:59.173709 disk-uuid[665]: The operation has completed successfully.
Sep 12 23:52:59.386928 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 23:52:59.387181 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 23:52:59.431903 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 23:52:59.444423 sh[1009]: Success
Sep 12 23:52:59.470693 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 12 23:52:59.581356 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 23:52:59.598889 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 23:52:59.605281 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 23:52:59.643534 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77
Sep 12 23:52:59.643595 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:52:59.646663 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 23:52:59.646700 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 23:52:59.647931 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 23:52:59.746677 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 12 23:52:59.771877 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 23:52:59.777702 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 23:52:59.788938 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 23:52:59.796988 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 23:52:59.829332 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:52:59.829412 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:52:59.829445 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 23:52:59.846707 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 23:52:59.863747 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 23:52:59.870714 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:52:59.879544 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 23:52:59.895062 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 23:52:59.987707 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:53:00.005035 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:53:00.058937 systemd-networkd[1201]: lo: Link UP
Sep 12 23:53:00.058959 systemd-networkd[1201]: lo: Gained carrier
Sep 12 23:53:00.061934 systemd-networkd[1201]: Enumeration completed
Sep 12 23:53:00.063149 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:53:00.063157 systemd-networkd[1201]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 23:53:00.068909 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 23:53:00.070060 systemd-networkd[1201]: eth0: Link UP
Sep 12 23:53:00.070068 systemd-networkd[1201]: eth0: Gained carrier
Sep 12 23:53:00.070089 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:53:00.081781 systemd[1]: Reached target network.target - Network.
Sep 12 23:53:00.109166 systemd-networkd[1201]: eth0: DHCPv4 address 172.31.18.203/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 23:53:00.408145 ignition[1132]: Ignition 2.19.0
Sep 12 23:53:00.408175 ignition[1132]: Stage: fetch-offline
Sep 12 23:53:00.413309 ignition[1132]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:53:00.413362 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:53:00.420427 ignition[1132]: Ignition finished successfully
Sep 12 23:53:00.424842 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:53:00.444819 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 23:53:00.472959 ignition[1211]: Ignition 2.19.0
Sep 12 23:53:00.472978 ignition[1211]: Stage: fetch
Sep 12 23:53:00.473577 ignition[1211]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:53:00.473601 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:53:00.473800 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:53:00.506739 ignition[1211]: PUT result: OK
Sep 12 23:53:00.514873 ignition[1211]: parsed url from cmdline: ""
Sep 12 23:53:00.514892 ignition[1211]: no config URL provided
Sep 12 23:53:00.514909 ignition[1211]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 23:53:00.514937 ignition[1211]: no config at "/usr/lib/ignition/user.ign"
Sep 12 23:53:00.514975 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:53:00.519011 ignition[1211]: PUT result: OK
Sep 12 23:53:00.519123 ignition[1211]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 12 23:53:00.533049 ignition[1211]: GET result: OK
Sep 12 23:53:00.533256 ignition[1211]: parsing config with SHA512: e8eda2f801826d57ee00c964b4ec485d05a2134344d16ce4ee8a8485114952bce5596550c32fa7f27f3aa751b8045422365b7535a9b27f364d4ab97a82b5606a
Sep 12 23:53:00.549326 unknown[1211]: fetched base config from "system"
Sep 12 23:53:00.552945 unknown[1211]: fetched base config from "system"
Sep 12 23:53:00.553573 unknown[1211]: fetched user config from "aws"
Sep 12 23:53:00.554468 ignition[1211]: fetch: fetch complete
Sep 12 23:53:00.554483 ignition[1211]: fetch: fetch passed
Sep 12 23:53:00.554615 ignition[1211]: Ignition finished successfully
Sep 12 23:53:00.568194 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 23:53:00.580150 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 23:53:00.612274 ignition[1217]: Ignition 2.19.0
Sep 12 23:53:00.612312 ignition[1217]: Stage: kargs
Sep 12 23:53:00.613048 ignition[1217]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:53:00.613075 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:53:00.613244 ignition[1217]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:53:00.627817 ignition[1217]: PUT result: OK
Sep 12 23:53:00.636405 ignition[1217]: kargs: kargs passed
Sep 12 23:53:00.636571 ignition[1217]: Ignition finished successfully
Sep 12 23:53:00.642960 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 23:53:00.657090 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 23:53:00.687426 ignition[1223]: Ignition 2.19.0
Sep 12 23:53:00.687458 ignition[1223]: Stage: disks
Sep 12 23:53:00.688190 ignition[1223]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:53:00.688218 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:53:00.688385 ignition[1223]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:53:00.692270 ignition[1223]: PUT result: OK
Sep 12 23:53:00.709371 ignition[1223]: disks: disks passed
Sep 12 23:53:00.709502 ignition[1223]: Ignition finished successfully
Sep 12 23:53:00.717482 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 23:53:00.728453 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 23:53:00.734255 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 23:53:00.743746 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:53:00.749972 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 23:53:00.756535 systemd[1]: Reached target basic.target - Basic System.
Sep 12 23:53:00.769061 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 23:53:00.827027 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 23:53:00.834092 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 23:53:00.851109 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 23:53:00.928078 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none.
Sep 12 23:53:00.929220 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 23:53:00.937542 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 23:53:00.959836 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:53:00.970378 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 23:53:00.981424 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 23:53:00.981595 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 23:53:00.981727 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:53:01.010674 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1250)
Sep 12 23:53:01.011135 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 23:53:01.019100 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:53:01.019145 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:53:01.019172 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 23:53:01.031951 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 23:53:01.048151 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 23:53:01.050056 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:53:01.334878 systemd-networkd[1201]: eth0: Gained IPv6LL
Sep 12 23:53:01.436019 initrd-setup-root[1274]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 23:53:01.447711 initrd-setup-root[1281]: cut: /sysroot/etc/group: No such file or directory
Sep 12 23:53:01.458849 initrd-setup-root[1288]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 23:53:01.469325 initrd-setup-root[1295]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 23:53:01.836320 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 23:53:01.849092 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 23:53:01.866231 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 23:53:01.887353 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:53:01.887321 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 23:53:01.937536 ignition[1362]: INFO : Ignition 2.19.0
Sep 12 23:53:01.937536 ignition[1362]: INFO : Stage: mount
Sep 12 23:53:01.944872 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:53:01.944872 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:53:01.944872 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:53:01.955242 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 23:53:01.959226 ignition[1362]: INFO : PUT result: OK
Sep 12 23:53:01.966902 ignition[1362]: INFO : mount: mount passed
Sep 12 23:53:01.966902 ignition[1362]: INFO : Ignition finished successfully
Sep 12 23:53:01.975251 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 23:53:01.988050 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 23:53:02.018456 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:53:02.044716 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1374)
Sep 12 23:53:02.049494 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 12 23:53:02.049567 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:53:02.049595 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 12 23:53:02.057698 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 12 23:53:02.061332 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:53:02.104348 ignition[1391]: INFO : Ignition 2.19.0
Sep 12 23:53:02.104348 ignition[1391]: INFO : Stage: files
Sep 12 23:53:02.109369 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:53:02.109369 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:53:02.109369 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:53:02.119902 ignition[1391]: INFO : PUT result: OK
Sep 12 23:53:02.124261 ignition[1391]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 23:53:02.137672 ignition[1391]: INFO : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Sep 12 23:53:02.137672 ignition[1391]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 23:53:02.167709 ignition[1391]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 23:53:02.172958 ignition[1391]: INFO : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Sep 12 23:53:02.177599 unknown[1391]: wrote ssh authorized keys file for user: core
Sep 12 23:53:02.180535 ignition[1391]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 23:53:02.192618 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 23:53:02.197729 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 23:53:02.197729 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 23:53:02.197729 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 12 23:53:02.287396 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 23:53:02.641339 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 23:53:02.648976 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Sep 12 23:53:02.648976 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 23:53:02.648976 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:53:02.648976 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:53:02.648976 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:53:02.648976 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:53:02.648976 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:53:02.648976 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:53:02.648976 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:53:02.648976 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:53:02.648976 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:53:02.648976 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:53:02.648976 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:53:02.648976 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 12 23:53:03.227406 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 23:53:04.050558 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:53:04.050558 ignition[1391]: INFO : files: op(c): [started]  processing unit "containerd.service"
Sep 12 23:53:04.061167 ignition[1391]: INFO : files: op(c): op(d): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 12 23:53:04.061167 ignition[1391]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 12 23:53:04.061167 ignition[1391]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 12 23:53:04.061167 ignition[1391]: INFO : files: op(e): [started]  processing unit "prepare-helm.service"
Sep 12 23:53:04.061167 ignition[1391]: INFO : files: op(e): op(f): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:53:04.061167 ignition[1391]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:53:04.061167 ignition[1391]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 12 23:53:04.061167 ignition[1391]: INFO : files: op(10): [started]  setting preset to enabled for "prepare-helm.service"
Sep 12 23:53:04.061167 ignition[1391]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 23:53:04.061167 ignition[1391]: INFO : files: createResultFile: createFiles: op(11): [started]  writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:53:04.061167 ignition[1391]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:53:04.061167 ignition[1391]: INFO : files: files passed
Sep 12 23:53:04.061167 ignition[1391]: INFO : Ignition finished successfully
Sep 12 23:53:04.060436 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 23:53:04.085468 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 23:53:04.132715 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 23:53:04.143378 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 23:53:04.144917 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 23:53:04.195680 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:53:04.200444 initrd-setup-root-after-ignition[1420]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:53:04.200444 initrd-setup-root-after-ignition[1420]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:53:04.213798 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:53:04.217839 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 23:53:04.235103 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 23:53:04.303985 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 23:53:04.304195 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 23:53:04.307981 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 23:53:04.311387 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 23:53:04.314752 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 23:53:04.329003 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 23:53:04.365279 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:53:04.378098 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 23:53:04.406717 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:53:04.413223 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:53:04.416566 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 23:53:04.421571 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 23:53:04.422260 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:53:04.432372 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 23:53:04.436187 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 23:53:04.441139 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 23:53:04.448294 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:53:04.452238 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 23:53:04.460920 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 23:53:04.464026 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:53:04.467362 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 23:53:04.478519 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 23:53:04.483927 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 23:53:04.486119 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 23:53:04.486358 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:53:04.489315 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:53:04.492072 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:53:04.495887 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 23:53:04.512421 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:53:04.515717 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 23:53:04.515970 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:53:04.519861 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 23:53:04.520156 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:53:04.524466 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 23:53:04.524774 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 23:53:04.554150 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 23:53:04.562284 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 23:53:04.571820 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 23:53:04.572140 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:53:04.578135 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 23:53:04.578399 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:53:04.600260 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 23:53:04.600458 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 23:53:04.620186 ignition[1444]: INFO : Ignition 2.19.0
Sep 12 23:53:04.620186 ignition[1444]: INFO : Stage: umount
Sep 12 23:53:04.620186 ignition[1444]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:53:04.620186 ignition[1444]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 12 23:53:04.620186 ignition[1444]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 12 23:53:04.620186 ignition[1444]: INFO : PUT result: OK
Sep 12 23:53:04.637416 ignition[1444]: INFO : umount: umount passed
Sep 12 23:53:04.639384 ignition[1444]: INFO : Ignition finished successfully
Sep 12 23:53:04.644867 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 23:53:04.645125 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 23:53:04.648776 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 23:53:04.648889 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 23:53:04.662581 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 23:53:04.662970 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 23:53:04.669867 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 23:53:04.669979 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 23:53:04.673018 systemd[1]: Stopped target network.target - Network.
Sep 12 23:53:04.677131 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 23:53:04.677265 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:53:04.687890 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 23:53:04.690008 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 23:53:04.695504 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:53:04.709345 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 23:53:04.712215 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 23:53:04.715194 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 23:53:04.715285 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:53:04.734456 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 23:53:04.734553 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:53:04.737430 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 23:53:04.737551 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 23:53:04.740801 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 23:53:04.740912 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 23:53:04.751353 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 23:53:04.755679 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 23:53:04.760420 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 23:53:04.761579 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 23:53:04.761822 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 23:53:04.766270 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 23:53:04.766470 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 23:53:04.770926 systemd-networkd[1201]: eth0: DHCPv6 lease lost
Sep 12 23:53:04.784138 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 23:53:04.784366 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 23:53:04.795781 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 23:53:04.796321 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:53:04.825818 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 23:53:04.835721 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 23:53:04.835875 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:53:04.840796 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:53:04.853480 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 23:53:04.855810 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 23:53:04.872288 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 23:53:04.872499 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:53:04.879203 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 23:53:04.879314 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:53:04.885410 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 23:53:04.885520 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:53:04.914481 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 23:53:04.917846 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:53:04.923240 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 23:53:04.923394 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:53:04.934173 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 23:53:04.934256 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:53:04.939182 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 23:53:04.939301 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:53:04.954681 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 23:53:04.954805 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:53:04.966476 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 23:53:04.966612 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:53:04.978747 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 23:53:04.981912 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 23:53:04.982050 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:53:04.985614 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 23:53:04.985777 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:53:04.989123 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 23:53:04.989240 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:53:04.992562 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:53:04.992740 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:53:04.996555 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 23:53:04.996832 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 23:53:05.053519 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 23:53:05.053957 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 23:53:05.062930 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 23:53:05.078898 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 23:53:05.130802 systemd[1]: Switching root.
Sep 12 23:53:05.175796 systemd-journald[251]: Journal stopped
Sep 12 23:53:08.112406 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Sep 12 23:53:08.112710 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 23:53:08.112805 kernel: SELinux: policy capability open_perms=1
Sep 12 23:53:08.112879 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 23:53:08.112931 kernel: SELinux: policy capability always_check_network=0
Sep 12 23:53:08.112996 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 23:53:08.113068 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 23:53:08.113124 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 23:53:08.113194 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 23:53:08.113277 kernel: audit: type=1403 audit(1757721186.070:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 23:53:08.113346 systemd[1]: Successfully loaded SELinux policy in 100.449ms.
Sep 12 23:53:08.113430 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.750ms.
Sep 12 23:53:08.113540 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 23:53:08.113609 systemd[1]: Detected virtualization amazon.
Sep 12 23:53:08.117744 systemd[1]: Detected architecture arm64.
Sep 12 23:53:08.117792 systemd[1]: Detected first boot.
Sep 12 23:53:08.117827 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:53:08.117863 zram_generator::config[1504]: No configuration found.
Sep 12 23:53:08.117902 systemd[1]: Populated /etc with preset unit settings.
Sep 12 23:53:08.117937 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 23:53:08.117979 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 12 23:53:08.118014 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 23:53:08.118053 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 23:53:08.118087 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 23:53:08.118125 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 23:53:08.118156 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 23:53:08.118189 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 23:53:08.118220 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 23:53:08.118257 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 23:53:08.118289 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:53:08.118320 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:53:08.118351 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 23:53:08.118386 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 23:53:08.118421 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 23:53:08.118456 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:53:08.118490 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 23:53:08.118524 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:53:08.118564 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 23:53:08.118596 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:53:08.122833 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:53:08.122920 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:53:08.122958 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:53:08.122993 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 23:53:08.123028 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 23:53:08.123062 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 23:53:08.123108 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 23:53:08.123140 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:53:08.123173 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:53:08.123207 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:53:08.123243 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 23:53:08.123274 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 23:53:08.123308 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 23:53:08.123340 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 23:53:08.123372 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 23:53:08.123403 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 23:53:08.123444 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 23:53:08.123475 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 23:53:08.123523 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:53:08.123556 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:53:08.123588 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 23:53:08.123624 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:53:08.137975 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 23:53:08.138111 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:53:08.138213 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 23:53:08.138311 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:53:08.138415 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 23:53:08.138515 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 12 23:53:08.138601 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Sep 12 23:53:08.138731 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:53:08.138797 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:53:08.138882 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 23:53:08.138966 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 23:53:08.139011 kernel: ACPI: bus type drm_connector registered
Sep 12 23:53:08.139062 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:53:08.139100 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 23:53:08.139132 kernel: fuse: init (API version 7.39)
Sep 12 23:53:08.139177 kernel: loop: module loaded
Sep 12 23:53:08.139229 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 23:53:08.139277 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 23:53:08.139336 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 23:53:08.139403 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 23:53:08.139484 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 23:53:08.139555 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:53:08.139613 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 23:53:08.150765 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 23:53:08.150842 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:53:08.150880 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:53:08.150941 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 23:53:08.151006 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 23:53:08.151071 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:53:08.151131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:53:08.151186 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 23:53:08.151229 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 23:53:08.151352 systemd-journald[1604]: Collecting audit messages is disabled.
Sep 12 23:53:08.151423 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:53:08.151455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:53:08.151486 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:53:08.151520 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 23:53:08.151553 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 23:53:08.151584 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 23:53:08.151618 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 23:53:08.151686 systemd-journald[1604]: Journal started
Sep 12 23:53:08.151741 systemd-journald[1604]: Runtime Journal (/run/log/journal/ec2801a2cffba07d058bb57270bae526) is 8.0M, max 75.3M, 67.3M free.
Sep 12 23:53:08.182698 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 23:53:08.205655 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 23:53:08.212697 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 23:53:08.230694 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 23:53:08.246736 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 23:53:08.271707 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 23:53:08.280702 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 23:53:08.300874 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:53:08.329695 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 23:53:08.351128 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:53:08.365315 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:53:08.369676 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 23:53:08.377314 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 23:53:08.385210 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 23:53:08.448543 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 23:53:08.477231 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 23:53:08.488103 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 23:53:08.495934 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:53:08.512464 systemd-tmpfiles[1639]: ACLs are not supported, ignoring.
Sep 12 23:53:08.513129 systemd-tmpfiles[1639]: ACLs are not supported, ignoring.
Sep 12 23:53:08.538483 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:53:08.553475 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 23:53:08.558313 systemd-journald[1604]: Time spent on flushing to /var/log/journal/ec2801a2cffba07d058bb57270bae526 is 27.701ms for 905 entries.
Sep 12 23:53:08.558313 systemd-journald[1604]: System Journal (/var/log/journal/ec2801a2cffba07d058bb57270bae526) is 8.0M, max 195.6M, 187.6M free.
Sep 12 23:53:08.592168 systemd-journald[1604]: Received client request to flush runtime journal.
Sep 12 23:53:08.576043 udevadm[1668]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 12 23:53:08.599553 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 23:53:08.649846 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 23:53:08.662980 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:53:08.711573 systemd-tmpfiles[1679]: ACLs are not supported, ignoring.
Sep 12 23:53:08.712240 systemd-tmpfiles[1679]: ACLs are not supported, ignoring.
Sep 12 23:53:08.722584 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:53:09.430015 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 23:53:09.449107 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:53:09.509308 systemd-udevd[1685]: Using default interface naming scheme 'v255'.
Sep 12 23:53:09.561058 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:53:09.594286 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:53:09.636310 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 23:53:09.710481 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Sep 12 23:53:09.741810 (udev-worker)[1691]: Network interface NamePolicy= disabled on kernel command line.
Sep 12 23:53:09.816880 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 23:53:10.013061 systemd-networkd[1694]: lo: Link UP
Sep 12 23:53:10.013085 systemd-networkd[1694]: lo: Gained carrier
Sep 12 23:53:10.017144 systemd-networkd[1694]: Enumeration completed
Sep 12 23:53:10.017381 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 23:53:10.025501 systemd-networkd[1694]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:53:10.025527 systemd-networkd[1694]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 23:53:10.027983 systemd-networkd[1694]: eth0: Link UP
Sep 12 23:53:10.028368 systemd-networkd[1694]: eth0: Gained carrier
Sep 12 23:53:10.028419 systemd-networkd[1694]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:53:10.041157 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 23:53:10.061823 systemd-networkd[1694]: eth0: DHCPv4 address 172.31.18.203/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 12 23:53:10.073730 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1692)
Sep 12 23:53:10.142430 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:53:10.338157 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 23:53:10.357765 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 12 23:53:10.362974 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 23:53:10.367213 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:53:10.424973 lvm[1812]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 23:53:10.468599 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 23:53:10.474283 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:53:10.486148 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 23:53:10.500763 lvm[1817]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 23:53:10.541028 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 23:53:10.548071 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 23:53:10.551335 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 23:53:10.551396 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:53:10.554297 systemd[1]: Reached target machines.target - Containers.
Sep 12 23:53:10.559063 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 23:53:10.571157 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 23:53:10.578142 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 23:53:10.582247 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:53:10.586186 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 23:53:10.605359 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 23:53:10.618947 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 23:53:10.636217 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 23:53:10.668767 kernel: loop0: detected capacity change from 0 to 114328
Sep 12 23:53:10.685443 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 23:53:10.688282 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 23:53:10.704246 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 23:53:10.771679 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 23:53:10.799703 kernel: loop1: detected capacity change from 0 to 52536
Sep 12 23:53:10.910691 kernel: loop2: detected capacity change from 0 to 114432
Sep 12 23:53:11.039672 kernel: loop3: detected capacity change from 0 to 203944
Sep 12 23:53:11.314729 kernel: loop4: detected capacity change from 0 to 114328
Sep 12 23:53:11.318842 systemd-networkd[1694]: eth0: Gained IPv6LL
Sep 12 23:53:11.330002 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 23:53:11.346758 kernel: loop5: detected capacity change from 0 to 52536
Sep 12 23:53:11.364680 kernel: loop6: detected capacity change from 0 to 114432
Sep 12 23:53:11.383718 kernel: loop7: detected capacity change from 0 to 203944
Sep 12 23:53:11.410975 (sd-merge)[1838]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 12 23:53:11.412826 (sd-merge)[1838]: Merged extensions into '/usr'.
Sep 12 23:53:11.424255 systemd[1]: Reloading requested from client PID 1825 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 23:53:11.424290 systemd[1]: Reloading...
Sep 12 23:53:11.543673 zram_generator::config[1867]: No configuration found.
Sep 12 23:53:11.925073 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 23:53:12.104534 systemd[1]: Reloading finished in 679 ms.
Sep 12 23:53:12.134931 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 23:53:12.156150 systemd[1]: Starting ensure-sysext.service...
Sep 12 23:53:12.172035 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:53:12.195880 ldconfig[1821]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 23:53:12.209556 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 23:53:12.216066 systemd[1]: Reloading requested from client PID 1924 ('systemctl') (unit ensure-sysext.service)...
Sep 12 23:53:12.216765 systemd[1]: Reloading...
Sep 12 23:53:12.255573 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 23:53:12.256780 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 23:53:12.260510 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 23:53:12.261241 systemd-tmpfiles[1925]: ACLs are not supported, ignoring.
Sep 12 23:53:12.261428 systemd-tmpfiles[1925]: ACLs are not supported, ignoring.
Sep 12 23:53:12.280933 systemd-tmpfiles[1925]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 23:53:12.280980 systemd-tmpfiles[1925]: Skipping /boot
Sep 12 23:53:12.335743 systemd-tmpfiles[1925]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 23:53:12.335779 systemd-tmpfiles[1925]: Skipping /boot
Sep 12 23:53:12.422679 zram_generator::config[1955]: No configuration found.
Sep 12 23:53:12.720873 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 23:53:12.899411 systemd[1]: Reloading finished in 681 ms.
Sep 12 23:53:12.935325 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:53:12.963047 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 23:53:12.977940 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 23:53:12.990862 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 23:53:13.011136 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 23:53:13.021067 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 23:53:13.049534 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:53:13.062877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:53:13.078733 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:53:13.098086 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:53:13.102623 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:53:13.124373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:53:13.126153 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:53:13.131219 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:53:13.133914 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:53:13.153252 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 23:53:13.160498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:53:13.174839 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:53:13.180416 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 23:53:13.201412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:53:13.210269 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:53:13.226772 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:53:13.248805 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:53:13.252018 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:53:13.265212 augenrules[2053]: No rules
Sep 12 23:53:13.265153 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 23:53:13.282352 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:53:13.289698 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:53:13.307618 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 23:53:13.325167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:53:13.325620 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:53:13.328477 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:53:13.332281 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:53:13.367275 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 23:53:13.386115 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:53:13.402701 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:53:13.420966 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 23:53:13.439011 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:53:13.455996 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:53:13.461062 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:53:13.461204 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 23:53:13.466233 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 23:53:13.474009 systemd[1]: Finished ensure-sysext.service.
Sep 12 23:53:13.477476 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 23:53:13.487052 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:53:13.487488 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:53:13.493390 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 23:53:13.494048 systemd-resolved[2019]: Positive Trust Anchors:
Sep 12 23:53:13.494075 systemd-resolved[2019]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 23:53:13.494142 systemd-resolved[2019]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 23:53:13.503306 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 23:53:13.507431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:53:13.507894 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:53:13.519367 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:53:13.519368 systemd-resolved[2019]: Defaulting to hostname 'linux'.
Sep 12 23:53:13.521079 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:53:13.538257 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 23:53:13.548238 systemd[1]: Reached target network.target - Network.
Sep 12 23:53:13.551072 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 23:53:13.553860 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:53:13.557097 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 23:53:13.557397 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 23:53:13.560614 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 23:53:13.563918 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 23:53:13.567463 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 23:53:13.570930 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 23:53:13.575269 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 23:53:13.578475 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 23:53:13.578544 systemd[1]: Reached target paths.target - Path Units.
Sep 12 23:53:13.580882 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 23:53:13.585089 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 23:53:13.591395 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 23:53:13.598120 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 23:53:13.603497 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 23:53:13.605118 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 23:53:13.610409 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 23:53:13.616808 systemd[1]: Reached target basic.target - Basic System.
Sep 12 23:53:13.620217 systemd[1]: System is tainted: cgroupsv1
Sep 12 23:53:13.620330 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 23:53:13.620387 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 23:53:13.628074 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 23:53:13.637786 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 23:53:13.647096 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 23:53:13.672866 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 23:53:13.694290 jq[2093]: false
Sep 12 23:53:13.699190 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 23:53:13.705610 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 23:53:13.720862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:53:13.744994 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 23:53:13.758898 systemd[1]: Started ntpd.service - Network Time Service.
Sep 12 23:53:13.786205 extend-filesystems[2094]: Found loop4
Sep 12 23:53:13.786205 extend-filesystems[2094]: Found loop5
Sep 12 23:53:13.803490 extend-filesystems[2094]: Found loop6
Sep 12 23:53:13.803490 extend-filesystems[2094]: Found loop7
Sep 12 23:53:13.803490 extend-filesystems[2094]: Found nvme0n1
Sep 12 23:53:13.803490 extend-filesystems[2094]: Found nvme0n1p1
Sep 12 23:53:13.803490 extend-filesystems[2094]: Found nvme0n1p2
Sep 12 23:53:13.803490 extend-filesystems[2094]: Found nvme0n1p3
Sep 12 23:53:13.803490 extend-filesystems[2094]: Found usr
Sep 12 23:53:13.803490 extend-filesystems[2094]: Found nvme0n1p4
Sep 12 23:53:13.803490 extend-filesystems[2094]: Found nvme0n1p6
Sep 12 23:53:13.803490 extend-filesystems[2094]: Found nvme0n1p7
Sep 12 23:53:13.787982 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 23:53:13.843193 extend-filesystems[2094]: Found nvme0n1p9
Sep 12 23:53:13.843193 extend-filesystems[2094]: Checking size of /dev/nvme0n1p9
Sep 12 23:53:13.810876 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 23:53:13.854916 systemd[1]: Starting setup-oem.service - Setup OEM...
Sep 12 23:53:13.872752 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 23:53:13.899461 dbus-daemon[2092]: [system] SELinux support is enabled
Sep 12 23:53:13.904291 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 23:53:13.928992 dbus-daemon[2092]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1694 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 12 23:53:13.957945 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 23:53:13.976821 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 23:53:13.985985 extend-filesystems[2094]: Resized partition /dev/nvme0n1p9
Sep 12 23:53:13.990959 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 23:53:14.015660 ntpd[2101]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 22:00:00 UTC 2025 (1): Starting
Sep 12 23:53:14.064334 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Sep 12 23:53:14.064554 extend-filesystems[2126]: resize2fs 1.47.1 (20-May-2024)
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: ntpd 4.2.8p17@1.4004-o Fri Sep 12 22:00:00 UTC 2025 (1): Starting
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: ----------------------------------------------------
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: ntp-4 is maintained by Network Time Foundation,
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: corporation. Support and training for ntp-4 are
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: available at https://www.nwtime.org/support
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: ----------------------------------------------------
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: proto: precision = 0.108 usec (-23)
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: basedate set to 2025-08-31
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: gps base set to 2025-08-31 (week 2382)
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: Listen and drop on 0 v6wildcard [::]:123
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: Listen normally on 2 lo 127.0.0.1:123
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: Listen normally on 3 eth0 172.31.18.203:123
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: Listen normally on 4 lo [::1]:123
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: Listen normally on 5 eth0 [fe80::481:33ff:fead:a46f%2]:123
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: Listening on routing socket on fd #22 for interface updates
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 23:53:14.111935 ntpd[2101]: 12 Sep 23:53:14 ntpd[2101]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 23:53:14.015741 ntpd[2101]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Sep 12 23:53:14.064823 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 23:53:14.015763 ntpd[2101]: ----------------------------------------------------
Sep 12 23:53:14.079311 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 23:53:14.015783 ntpd[2101]: ntp-4 is maintained by Network Time Foundation,
Sep 12 23:53:14.101478 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 23:53:14.169072 jq[2133]: true
Sep 12 23:53:14.015807 ntpd[2101]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Sep 12 23:53:14.102058 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 23:53:14.015826 ntpd[2101]: corporation. Support and training for ntp-4 are
Sep 12 23:53:14.113577 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 23:53:14.015846 ntpd[2101]: available at https://www.nwtime.org/support
Sep 12 23:53:14.114198 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 23:53:14.015866 ntpd[2101]: ----------------------------------------------------
Sep 12 23:53:14.120147 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 23:53:14.021079 ntpd[2101]: proto: precision = 0.108 usec (-23)
Sep 12 23:53:14.159302 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 23:53:14.022885 ntpd[2101]: basedate set to 2025-08-31
Sep 12 23:53:14.195278 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 23:53:14.022927 ntpd[2101]: gps base set to 2025-08-31 (week 2382)
Sep 12 23:53:14.027927 ntpd[2101]: Listen and drop on 0 v6wildcard [::]:123
Sep 12 23:53:14.028022 ntpd[2101]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Sep 12 23:53:14.029364 ntpd[2101]: Listen normally on 2 lo 127.0.0.1:123
Sep 12 23:53:14.029482 ntpd[2101]: Listen normally on 3 eth0 172.31.18.203:123
Sep 12 23:53:14.029574 ntpd[2101]: Listen normally on 4 lo [::1]:123
Sep 12 23:53:14.029753 ntpd[2101]: Listen normally on 5 eth0 [fe80::481:33ff:fead:a46f%2]:123
Sep 12 23:53:14.029854 ntpd[2101]: Listening on routing socket on fd #22 for interface updates
Sep 12 23:53:14.070733 ntpd[2101]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 23:53:14.070790 ntpd[2101]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Sep 12 23:53:14.270809 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Sep 12 23:53:14.270973 extend-filesystems[2126]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Sep 12 23:53:14.270973 extend-filesystems[2126]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 23:53:14.270973 extend-filesystems[2126]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Sep 12 23:53:14.301843 coreos-metadata[2090]: Sep 12 23:53:14.289 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 12 23:53:14.302400 extend-filesystems[2094]: Resized filesystem in /dev/nvme0n1p9
Sep 12 23:53:14.312411 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 23:53:14.330432 coreos-metadata[2090]: Sep 12 23:53:14.311 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Sep 12 23:53:14.320497 (ntainerd)[2151]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 23:53:14.321516 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 23:53:14.355469 jq[2145]: true
Sep 12 23:53:14.356015 coreos-metadata[2090]: Sep 12 23:53:14.333 INFO Fetch successful
Sep 12 23:53:14.356015 coreos-metadata[2090]: Sep 12 23:53:14.333 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Sep 12 23:53:14.362592 coreos-metadata[2090]: Sep 12 23:53:14.359 INFO Fetch successful
Sep 12 23:53:14.362592 coreos-metadata[2090]: Sep 12 23:53:14.359 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Sep 12 23:53:14.369319 update_engine[2124]: I20250912 23:53:14.349448 2124 main.cc:92] Flatcar Update Engine starting
Sep 12 23:53:14.381690 coreos-metadata[2090]: Sep 12 23:53:14.371 INFO Fetch successful
Sep 12 23:53:14.381690 coreos-metadata[2090]: Sep 12 23:53:14.371 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Sep 12 23:53:14.377208 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 23:53:14.377335 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 23:53:14.385558 tar[2139]: linux-arm64/helm
Sep 12 23:53:14.386011 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 23:53:14.386057 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
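The extend-filesystems lines above record an online grow of the root filesystem: the partition had already been enlarged, so resize2fs could expand the mounted ext4 volume in place (the kernel's "EXT4-fs ... resized filesystem" line is the result). Below is a minimal sketch of the same operation; it assumes the real resize2fs binary and the device path taken from this log, which would differ on another machine.

    # Minimal sketch: grow a mounted ext4 filesystem to fill its (already
    # enlarged) partition, as extend-filesystems did above. ext4 supports
    # online growing, so this is safe against the live root device.
    import subprocess

    DEV = "/dev/nvme0n1p9"  # root partition in this log; adjust per machine

    def grow_mounted_ext4(dev: str) -> None:
        # With no explicit size argument, resize2fs grows the filesystem
        # to the full size of the underlying device.
        subprocess.run(["resize2fs", dev], check=True)

    if __name__ == "__main__":
        grow_mounted_ext4(DEV)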
Sep 12 23:53:14.408333 coreos-metadata[2090]: Sep 12 23:53:14.407 INFO Fetch successful
Sep 12 23:53:14.408333 coreos-metadata[2090]: Sep 12 23:53:14.408 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Sep 12 23:53:14.409181 dbus-daemon[2092]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 12 23:53:14.426768 coreos-metadata[2090]: Sep 12 23:53:14.426 INFO Fetch failed with 404: resource not found
Sep 12 23:53:14.426928 coreos-metadata[2090]: Sep 12 23:53:14.426 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Sep 12 23:53:14.430160 coreos-metadata[2090]: Sep 12 23:53:14.430 INFO Fetch successful
Sep 12 23:53:14.430327 coreos-metadata[2090]: Sep 12 23:53:14.430 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Sep 12 23:53:14.440989 coreos-metadata[2090]: Sep 12 23:53:14.440 INFO Fetch successful
Sep 12 23:53:14.440989 coreos-metadata[2090]: Sep 12 23:53:14.440 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Sep 12 23:53:14.441927 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 12 23:53:14.456676 coreos-metadata[2090]: Sep 12 23:53:14.447 INFO Fetch successful
Sep 12 23:53:14.456676 coreos-metadata[2090]: Sep 12 23:53:14.447 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Sep 12 23:53:14.452220 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 23:53:14.459806 coreos-metadata[2090]: Sep 12 23:53:14.457 INFO Fetch successful
Sep 12 23:53:14.459806 coreos-metadata[2090]: Sep 12 23:53:14.457 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Sep 12 23:53:14.464618 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 23:53:14.471982 coreos-metadata[2090]: Sep 12 23:53:14.465 INFO Fetch successful
Sep 12 23:53:14.472077 update_engine[2124]: I20250912 23:53:14.464745 2124 update_check_scheduler.cc:74] Next update check in 5m47s
Sep 12 23:53:14.478176 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 23:53:14.519901 systemd[1]: Finished setup-oem.service - Setup OEM.
Sep 12 23:53:14.632278 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 12 23:53:14.734612 systemd-logind[2118]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 12 23:53:14.734733 systemd-logind[2118]: Watching system buttons on /dev/input/event1 (Sleep Button)
Sep 12 23:53:14.748052 systemd-logind[2118]: New seat seat0.
Sep 12 23:53:14.749695 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2189)
Sep 12 23:53:14.757091 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Sep 12 23:53:14.776164 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 23:53:14.791891 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 23:53:14.878336 bash[2221]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 23:53:14.895966 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 23:53:14.950096 systemd[1]: Starting sshkeys.service...
Sep 12 23:53:15.049266 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
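Both Ignition (its "files" stage earlier) and coreos-metadata drive the same EC2 IMDSv2 exchange that the PUT/GET lines above record: a PUT to /latest/api/token returns a short-lived session token, every subsequent metadata GET must present that token in a header, and a 404 (as for meta-data/ipv6 here) simply means the resource does not exist for this instance. A minimal Python sketch of that exchange follows; it is illustrative only (the real agents are Go binaries) and only works from inside an EC2 instance.

    # Sketch of the IMDSv2 token handshake logged above.
    import urllib.error
    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl: int = 21600) -> str:
        # Step 1: PUT with a TTL header yields a session token.
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path: str, token: str):
        # Step 2: every metadata GET carries the token.
        req = urllib.request.Request(
            f"{IMDS}{path}", headers={"X-aws-ec2-metadata-token": token}
        )
        try:
            with urllib.request.urlopen(req, timeout=2) as resp:
                return resp.read().decode()
        except urllib.error.HTTPError as err:
            if err.code == 404:  # e.g. meta-data/ipv6 above: resource absent
                return None
            raise

    if __name__ == "__main__":
        tok = imds_token()
        print(imds_get("/2021-01-03/meta-data/instance-id", tok))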
Sep 12 23:53:15.066685 containerd[2151]: time="2025-09-12T23:53:15.057529981Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 23:53:15.061016 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 23:53:15.149753 amazon-ssm-agent[2190]: Initializing new seelog logger Sep 12 23:53:15.149753 amazon-ssm-agent[2190]: New Seelog Logger Creation Complete Sep 12 23:53:15.149753 amazon-ssm-agent[2190]: 2025/09/12 23:53:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:53:15.149753 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:53:15.149753 amazon-ssm-agent[2190]: 2025/09/12 23:53:15 processing appconfig overrides Sep 12 23:53:15.150498 amazon-ssm-agent[2190]: 2025/09/12 23:53:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:53:15.150498 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:53:15.156656 amazon-ssm-agent[2190]: 2025/09/12 23:53:15 processing appconfig overrides Sep 12 23:53:15.156656 amazon-ssm-agent[2190]: 2025/09/12 23:53:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:53:15.156656 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:53:15.156656 amazon-ssm-agent[2190]: 2025/09/12 23:53:15 processing appconfig overrides Sep 12 23:53:15.156656 amazon-ssm-agent[2190]: 2025-09-12 23:53:15 INFO Proxy environment variables: Sep 12 23:53:15.158672 amazon-ssm-agent[2190]: 2025/09/12 23:53:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:53:15.158672 amazon-ssm-agent[2190]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 12 23:53:15.158838 amazon-ssm-agent[2190]: 2025/09/12 23:53:15 processing appconfig overrides Sep 12 23:53:15.255810 amazon-ssm-agent[2190]: 2025-09-12 23:53:15 INFO https_proxy: Sep 12 23:53:15.324854 containerd[2151]: time="2025-09-12T23:53:15.324780350Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:53:15.340666 containerd[2151]: time="2025-09-12T23:53:15.336965954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:53:15.340666 containerd[2151]: time="2025-09-12T23:53:15.337060094Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 23:53:15.340666 containerd[2151]: time="2025-09-12T23:53:15.337101326Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 23:53:15.340666 containerd[2151]: time="2025-09-12T23:53:15.337465958Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 23:53:15.340666 containerd[2151]: time="2025-09-12T23:53:15.337517306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 23:53:15.340666 containerd[2151]: time="2025-09-12T23:53:15.337719326Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:53:15.340666 containerd[2151]: time="2025-09-12T23:53:15.337777610Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:53:15.340666 containerd[2151]: time="2025-09-12T23:53:15.338249846Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:53:15.340666 containerd[2151]: time="2025-09-12T23:53:15.338294030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 23:53:15.340666 containerd[2151]: time="2025-09-12T23:53:15.338330018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:53:15.340666 containerd[2151]: time="2025-09-12T23:53:15.338360066Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 23:53:15.349072 containerd[2151]: time="2025-09-12T23:53:15.338608022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:53:15.349072 containerd[2151]: time="2025-09-12T23:53:15.348005042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 23:53:15.349072 containerd[2151]: time="2025-09-12T23:53:15.348454370Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 23:53:15.349072 containerd[2151]: time="2025-09-12T23:53:15.348506078Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 23:53:15.349072 containerd[2151]: time="2025-09-12T23:53:15.348843290Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 23:53:15.349072 containerd[2151]: time="2025-09-12T23:53:15.348983978Z" level=info msg="metadata content store policy set" policy=shared Sep 12 23:53:15.356214 amazon-ssm-agent[2190]: 2025-09-12 23:53:15 INFO http_proxy: Sep 12 23:53:15.364738 containerd[2151]: time="2025-09-12T23:53:15.363820274Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 23:53:15.364738 containerd[2151]: time="2025-09-12T23:53:15.363976802Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 23:53:15.364738 containerd[2151]: time="2025-09-12T23:53:15.364134734Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 23:53:15.364738 containerd[2151]: time="2025-09-12T23:53:15.364182254Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 23:53:15.364738 containerd[2151]: time="2025-09-12T23:53:15.364220714Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Sep 12 23:53:15.364738 containerd[2151]: time="2025-09-12T23:53:15.364523342Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 23:53:15.373007 containerd[2151]: time="2025-09-12T23:53:15.368244998Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 23:53:15.374782 containerd[2151]: time="2025-09-12T23:53:15.373455518Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 23:53:15.374782 containerd[2151]: time="2025-09-12T23:53:15.373512434Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 23:53:15.374782 containerd[2151]: time="2025-09-12T23:53:15.373552778Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 23:53:15.374782 containerd[2151]: time="2025-09-12T23:53:15.373600694Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 23:53:15.374782 containerd[2151]: time="2025-09-12T23:53:15.373660682Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 23:53:15.374782 containerd[2151]: time="2025-09-12T23:53:15.373701770Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 23:53:15.374782 containerd[2151]: time="2025-09-12T23:53:15.373737722Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 23:53:15.374782 containerd[2151]: time="2025-09-12T23:53:15.373773614Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 23:53:15.374782 containerd[2151]: time="2025-09-12T23:53:15.373807310Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 23:53:15.374782 containerd[2151]: time="2025-09-12T23:53:15.373838114Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 23:53:15.374782 containerd[2151]: time="2025-09-12T23:53:15.373866686Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 23:53:15.374782 containerd[2151]: time="2025-09-12T23:53:15.373913102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 23:53:15.374782 containerd[2151]: time="2025-09-12T23:53:15.373947686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 23:53:15.374782 containerd[2151]: time="2025-09-12T23:53:15.373979930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 23:53:15.375501 containerd[2151]: time="2025-09-12T23:53:15.374014010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 23:53:15.375501 containerd[2151]: time="2025-09-12T23:53:15.374044406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 23:53:15.375501 containerd[2151]: time="2025-09-12T23:53:15.374111414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Sep 12 23:53:15.375501 containerd[2151]: time="2025-09-12T23:53:15.374148254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 23:53:15.375501 containerd[2151]: time="2025-09-12T23:53:15.374187338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 23:53:15.375501 containerd[2151]: time="2025-09-12T23:53:15.374221742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 23:53:15.375501 containerd[2151]: time="2025-09-12T23:53:15.374280758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 23:53:15.375501 containerd[2151]: time="2025-09-12T23:53:15.374324822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 23:53:15.375501 containerd[2151]: time="2025-09-12T23:53:15.374357894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 23:53:15.375501 containerd[2151]: time="2025-09-12T23:53:15.374389718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 23:53:15.375501 containerd[2151]: time="2025-09-12T23:53:15.374432198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 23:53:15.375501 containerd[2151]: time="2025-09-12T23:53:15.374489474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 23:53:15.375501 containerd[2151]: time="2025-09-12T23:53:15.374532122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 23:53:15.375501 containerd[2151]: time="2025-09-12T23:53:15.374561606Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 23:53:15.391669 containerd[2151]: time="2025-09-12T23:53:15.387833882Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 23:53:15.391669 containerd[2151]: time="2025-09-12T23:53:15.387931730Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 23:53:15.391669 containerd[2151]: time="2025-09-12T23:53:15.387963890Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 23:53:15.391669 containerd[2151]: time="2025-09-12T23:53:15.387999878Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 23:53:15.391669 containerd[2151]: time="2025-09-12T23:53:15.388026206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 23:53:15.391669 containerd[2151]: time="2025-09-12T23:53:15.388069442Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 23:53:15.391669 containerd[2151]: time="2025-09-12T23:53:15.388096358Z" level=info msg="NRI interface is disabled by configuration." Sep 12 23:53:15.391669 containerd[2151]: time="2025-09-12T23:53:15.388122242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 23:53:15.392316 containerd[2151]: time="2025-09-12T23:53:15.388874966Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 23:53:15.392316 containerd[2151]: time="2025-09-12T23:53:15.389025038Z" level=info msg="Connect containerd service" Sep 12 23:53:15.392316 containerd[2151]: time="2025-09-12T23:53:15.389277350Z" level=info msg="using legacy CRI server" Sep 12 23:53:15.392316 containerd[2151]: time="2025-09-12T23:53:15.389307206Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 23:53:15.405737 containerd[2151]: time="2025-09-12T23:53:15.398733710Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 23:53:15.416821 containerd[2151]: time="2025-09-12T23:53:15.416752046Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 
23:53:15.420260 coreos-metadata[2255]: Sep 12 23:53:15.419 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 12 23:53:15.422567 coreos-metadata[2255]: Sep 12 23:53:15.422 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 12 23:53:15.424813 containerd[2151]: time="2025-09-12T23:53:15.422852402Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 23:53:15.424813 containerd[2151]: time="2025-09-12T23:53:15.422998574Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 23:53:15.424813 containerd[2151]: time="2025-09-12T23:53:15.423201290Z" level=info msg="Start subscribing containerd event" Sep 12 23:53:15.424813 containerd[2151]: time="2025-09-12T23:53:15.423275450Z" level=info msg="Start recovering state" Sep 12 23:53:15.424813 containerd[2151]: time="2025-09-12T23:53:15.423420098Z" level=info msg="Start event monitor" Sep 12 23:53:15.424813 containerd[2151]: time="2025-09-12T23:53:15.423450158Z" level=info msg="Start snapshots syncer" Sep 12 23:53:15.424813 containerd[2151]: time="2025-09-12T23:53:15.423474674Z" level=info msg="Start cni network conf syncer for default" Sep 12 23:53:15.424813 containerd[2151]: time="2025-09-12T23:53:15.423494666Z" level=info msg="Start streaming server" Sep 12 23:53:15.429762 coreos-metadata[2255]: Sep 12 23:53:15.425 INFO Fetch successful Sep 12 23:53:15.429762 coreos-metadata[2255]: Sep 12 23:53:15.427 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 23:53:15.431874 coreos-metadata[2255]: Sep 12 23:53:15.431 INFO Fetch successful Sep 12 23:53:15.434578 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 23:53:15.438240 unknown[2255]: wrote ssh authorized keys file for user: core Sep 12 23:53:15.442614 containerd[2151]: time="2025-09-12T23:53:15.442289103Z" level=info msg="containerd successfully booted in 0.388686s" Sep 12 23:53:15.457980 amazon-ssm-agent[2190]: 2025-09-12 23:53:15 INFO no_proxy: Sep 12 23:53:15.541549 locksmithd[2180]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 23:53:15.574031 amazon-ssm-agent[2190]: 2025-09-12 23:53:15 INFO Checking if agent identity type OnPrem can be assumed Sep 12 23:53:15.606157 update-ssh-keys[2301]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:53:15.610074 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 23:53:15.646819 systemd[1]: Finished sshkeys.service. Sep 12 23:53:15.656865 amazon-ssm-agent[2190]: 2025-09-12 23:53:15 INFO Checking if agent identity type EC2 can be assumed Sep 12 23:53:15.756779 amazon-ssm-agent[2190]: 2025-09-12 23:53:15 INFO Agent will take identity from EC2 Sep 12 23:53:15.857067 amazon-ssm-agent[2190]: 2025-09-12 23:53:15 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 23:53:15.865898 dbus-daemon[2092]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 23:53:15.866200 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 23:53:15.893931 dbus-daemon[2092]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2175 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 23:53:15.913334 systemd[1]: Starting polkit.service - Authorization Manager... 
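With containerd now serving /run/containerd/containerd.sock, the image pulls that appear later in this log can be reproduced with the stock Go client. A minimal sketch, assuming the github.com/containerd/containerd module is available; "k8s.io" is the namespace the CRI plugin keeps its images under:

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Dial the grpc socket the daemon announced above.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        version, err := client.Version(ctx)
        if err != nil {
            panic(err)
        }
        fmt.Println("containerd", version.Version) // v1.7.21, per the log

        // Pull the same pause image the kubelet bootstrap fetches below.
        img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10",
            containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }
        fmt.Println("pulled", img.Name())
    }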
Sep 12 23:53:15.956988 amazon-ssm-agent[2190]: 2025-09-12 23:53:15 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 23:53:15.996740 polkitd[2332]: Started polkitd version 121 Sep 12 23:53:16.030871 polkitd[2332]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 23:53:16.031017 polkitd[2332]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 23:53:16.041909 polkitd[2332]: Finished loading, compiling and executing 2 rules Sep 12 23:53:16.044492 dbus-daemon[2092]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 23:53:16.045210 systemd[1]: Started polkit.service - Authorization Manager. Sep 12 23:53:16.050087 polkitd[2332]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 23:53:16.058137 amazon-ssm-agent[2190]: 2025-09-12 23:53:15 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 12 23:53:16.157494 systemd-hostnamed[2175]: Hostname set to (transient) Sep 12 23:53:16.157761 systemd-resolved[2019]: System hostname changed to 'ip-172-31-18-203'. Sep 12 23:53:16.159895 amazon-ssm-agent[2190]: 2025-09-12 23:53:15 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 12 23:53:16.261669 amazon-ssm-agent[2190]: 2025-09-12 23:53:15 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 12 23:53:16.363750 amazon-ssm-agent[2190]: 2025-09-12 23:53:15 INFO [amazon-ssm-agent] Starting Core Agent Sep 12 23:53:16.474310 amazon-ssm-agent[2190]: 2025-09-12 23:53:15 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 12 23:53:16.498900 amazon-ssm-agent[2190]: 2025-09-12 23:53:15 INFO [Registrar] Starting registrar module Sep 12 23:53:16.499448 amazon-ssm-agent[2190]: 2025-09-12 23:53:15 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 12 23:53:16.499448 amazon-ssm-agent[2190]: 2025-09-12 23:53:16 INFO [EC2Identity] EC2 registration was successful. Sep 12 23:53:16.499448 amazon-ssm-agent[2190]: 2025-09-12 23:53:16 INFO [CredentialRefresher] credentialRefresher has started Sep 12 23:53:16.499448 amazon-ssm-agent[2190]: 2025-09-12 23:53:16 INFO [CredentialRefresher] Starting credentials refresher loop Sep 12 23:53:16.500016 amazon-ssm-agent[2190]: 2025-09-12 23:53:16 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 12 23:53:16.575590 amazon-ssm-agent[2190]: 2025-09-12 23:53:16 INFO [CredentialRefresher] Next credential rotation will be in 30.908273452733333 minutes Sep 12 23:53:16.679438 tar[2139]: linux-arm64/LICENSE Sep 12 23:53:16.679438 tar[2139]: linux-arm64/README.md Sep 12 23:53:16.728515 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 23:53:17.538375 amazon-ssm-agent[2190]: 2025-09-12 23:53:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 12 23:53:17.640667 amazon-ssm-agent[2190]: 2025-09-12 23:53:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2359) started Sep 12 23:53:17.740700 amazon-ssm-agent[2190]: 2025-09-12 23:53:17 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 12 23:53:18.460327 sshd_keygen[2144]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 23:53:18.513246 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 23:53:18.533396 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Sep 12 23:53:18.555445 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 23:53:18.556318 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 23:53:18.570283 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 23:53:18.601323 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 23:53:18.630319 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 23:53:18.645359 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 23:53:18.656781 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 23:53:18.684991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:53:18.694513 (kubelet)[2396]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:53:18.695119 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 23:53:18.702760 systemd[1]: Startup finished in 11.385s (kernel) + 12.733s (userspace) = 24.119s. Sep 12 23:53:20.649448 kubelet[2396]: E0912 23:53:20.649353 2396 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:53:20.654502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:53:20.654967 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:53:21.430384 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 23:53:21.439145 systemd[1]: Started sshd@0-172.31.18.203:22-147.75.109.163:59902.service - OpenSSH per-connection server daemon (147.75.109.163:59902). Sep 12 23:53:21.634962 sshd[2408]: Accepted publickey for core from 147.75.109.163 port 59902 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:53:21.638131 sshd[2408]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:53:21.654933 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 23:53:21.662120 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 23:53:21.667154 systemd-logind[2118]: New session 1 of user core. Sep 12 23:53:21.694251 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 23:53:21.711172 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 23:53:21.719539 (systemd)[2414]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 23:53:21.944926 systemd[2414]: Queued start job for default target default.target. Sep 12 23:53:21.946149 systemd[2414]: Created slice app.slice - User Application Slice. Sep 12 23:53:21.946208 systemd[2414]: Reached target paths.target - Paths. Sep 12 23:53:21.946240 systemd[2414]: Reached target timers.target - Timers. Sep 12 23:53:21.954829 systemd[2414]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 23:53:21.968425 systemd[2414]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 23:53:21.968563 systemd[2414]: Reached target sockets.target - Sockets. Sep 12 23:53:21.968598 systemd[2414]: Reached target basic.target - Basic System. Sep 12 23:53:21.970032 systemd[2414]: Reached target default.target - Main User Target. 
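(On the boot-time summary above: 11.385 s + 12.733 s is 24.118 s, while the total prints as 24.119 s; presumably systemd sums the underlying microsecond counters before rounding each component for display, so the rounded terms need not add up exactly.)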
Sep 12 23:53:21.970113 systemd[2414]: Startup finished in 237ms. Sep 12 23:53:21.970552 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 23:53:21.980188 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 23:53:22.133177 systemd[1]: Started sshd@1-172.31.18.203:22-147.75.109.163:59910.service - OpenSSH per-connection server daemon (147.75.109.163:59910). Sep 12 23:53:22.317545 sshd[2426]: Accepted publickey for core from 147.75.109.163 port 59910 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:53:22.320169 sshd[2426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:53:22.330488 systemd-logind[2118]: New session 2 of user core. Sep 12 23:53:22.336310 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 23:53:22.466002 sshd[2426]: pam_unix(sshd:session): session closed for user core Sep 12 23:53:22.473247 systemd[1]: sshd@1-172.31.18.203:22-147.75.109.163:59910.service: Deactivated successfully. Sep 12 23:53:22.474918 systemd-logind[2118]: Session 2 logged out. Waiting for processes to exit. Sep 12 23:53:22.479875 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 23:53:22.481471 systemd-logind[2118]: Removed session 2. Sep 12 23:53:22.497158 systemd[1]: Started sshd@2-172.31.18.203:22-147.75.109.163:59926.service - OpenSSH per-connection server daemon (147.75.109.163:59926). Sep 12 23:53:22.663577 sshd[2434]: Accepted publickey for core from 147.75.109.163 port 59926 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:53:22.665663 sshd[2434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:53:22.673772 systemd-logind[2118]: New session 3 of user core. Sep 12 23:53:22.688284 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 23:53:22.810002 sshd[2434]: pam_unix(sshd:session): session closed for user core Sep 12 23:53:22.816542 systemd[1]: sshd@2-172.31.18.203:22-147.75.109.163:59926.service: Deactivated successfully. Sep 12 23:53:22.822951 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 23:53:22.824929 systemd-logind[2118]: Session 3 logged out. Waiting for processes to exit. Sep 12 23:53:22.826905 systemd-logind[2118]: Removed session 3. Sep 12 23:53:22.838143 systemd[1]: Started sshd@3-172.31.18.203:22-147.75.109.163:59932.service - OpenSSH per-connection server daemon (147.75.109.163:59932). Sep 12 23:53:23.014387 sshd[2442]: Accepted publickey for core from 147.75.109.163 port 59932 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:53:23.017082 sshd[2442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:53:23.025463 systemd-logind[2118]: New session 4 of user core. Sep 12 23:53:23.037133 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 23:53:23.165333 sshd[2442]: pam_unix(sshd:session): session closed for user core Sep 12 23:53:23.170494 systemd[1]: sshd@3-172.31.18.203:22-147.75.109.163:59932.service: Deactivated successfully. Sep 12 23:53:23.176923 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 23:53:23.177237 systemd-logind[2118]: Session 4 logged out. Waiting for processes to exit. Sep 12 23:53:23.180386 systemd-logind[2118]: Removed session 4. Sep 12 23:53:23.199130 systemd[1]: Started sshd@4-172.31.18.203:22-147.75.109.163:59942.service - OpenSSH per-connection server daemon (147.75.109.163:59942). 
Sep 12 23:53:23.363015 sshd[2450]: Accepted publickey for core from 147.75.109.163 port 59942 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:53:23.365525 sshd[2450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:53:23.373160 systemd-logind[2118]: New session 5 of user core. Sep 12 23:53:23.382103 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 23:53:23.531998 sudo[2454]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 23:53:23.532698 sudo[2454]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:53:23.552169 sudo[2454]: pam_unix(sudo:session): session closed for user root Sep 12 23:53:23.576067 sshd[2450]: pam_unix(sshd:session): session closed for user core Sep 12 23:53:23.582929 systemd-logind[2118]: Session 5 logged out. Waiting for processes to exit. Sep 12 23:53:23.584218 systemd[1]: sshd@4-172.31.18.203:22-147.75.109.163:59942.service: Deactivated successfully. Sep 12 23:53:23.590045 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 23:53:23.591887 systemd-logind[2118]: Removed session 5. Sep 12 23:53:23.609157 systemd[1]: Started sshd@5-172.31.18.203:22-147.75.109.163:59952.service - OpenSSH per-connection server daemon (147.75.109.163:59952). Sep 12 23:53:23.778506 sshd[2459]: Accepted publickey for core from 147.75.109.163 port 59952 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:53:23.781141 sshd[2459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:53:23.789855 systemd-logind[2118]: New session 6 of user core. Sep 12 23:53:23.798175 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 23:53:23.905352 sudo[2464]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 23:53:23.906066 sudo[2464]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:53:23.911827 sudo[2464]: pam_unix(sudo:session): session closed for user root Sep 12 23:53:23.921598 sudo[2463]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 23:53:23.922242 sudo[2463]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:53:23.948122 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 23:53:23.952382 auditctl[2467]: No rules Sep 12 23:53:23.953227 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 23:53:23.953825 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 23:53:23.975470 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 23:53:24.019624 augenrules[2486]: No rules Sep 12 23:53:24.023347 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 23:53:24.027460 sudo[2463]: pam_unix(sudo:session): session closed for user root Sep 12 23:53:24.051567 sshd[2459]: pam_unix(sshd:session): session closed for user core Sep 12 23:53:24.057782 systemd-logind[2118]: Session 6 logged out. Waiting for processes to exit. Sep 12 23:53:24.060855 systemd[1]: sshd@5-172.31.18.203:22-147.75.109.163:59952.service: Deactivated successfully. Sep 12 23:53:24.065090 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 23:53:24.067183 systemd-logind[2118]: Removed session 6. 
Sep 12 23:53:24.085068 systemd[1]: Started sshd@6-172.31.18.203:22-147.75.109.163:59962.service - OpenSSH per-connection server daemon (147.75.109.163:59962). Sep 12 23:53:24.252584 sshd[2495]: Accepted publickey for core from 147.75.109.163 port 59962 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:53:24.255140 sshd[2495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:53:24.263744 systemd-logind[2118]: New session 7 of user core. Sep 12 23:53:24.267115 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 23:53:24.373965 sudo[2499]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 23:53:24.374653 sudo[2499]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 23:53:24.998143 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 23:53:25.013327 (dockerd)[2516]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 23:53:25.590674 dockerd[2516]: time="2025-09-12T23:53:25.590568174Z" level=info msg="Starting up" Sep 12 23:53:26.035748 dockerd[2516]: time="2025-09-12T23:53:26.035173601Z" level=info msg="Loading containers: start." Sep 12 23:53:26.235674 kernel: Initializing XFRM netlink socket Sep 12 23:53:26.291016 (udev-worker)[2540]: Network interface NamePolicy= disabled on kernel command line. Sep 12 23:53:26.379569 systemd-networkd[1694]: docker0: Link UP Sep 12 23:53:26.404122 dockerd[2516]: time="2025-09-12T23:53:26.404074437Z" level=info msg="Loading containers: done." Sep 12 23:53:26.428385 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3489748648-merged.mount: Deactivated successfully. Sep 12 23:53:26.435464 dockerd[2516]: time="2025-09-12T23:53:26.435400169Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 23:53:26.435837 dockerd[2516]: time="2025-09-12T23:53:26.435564339Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 23:53:26.435837 dockerd[2516]: time="2025-09-12T23:53:26.435797171Z" level=info msg="Daemon has completed initialization" Sep 12 23:53:26.502477 dockerd[2516]: time="2025-09-12T23:53:26.502329741Z" level=info msg="API listen on /run/docker.sock" Sep 12 23:53:26.504279 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 23:53:27.595372 containerd[2151]: time="2025-09-12T23:53:27.594977692Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 23:53:28.304963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount441259668.mount: Deactivated successfully. 
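The Docker daemon above reports "API listen on /run/docker.sock" (version 26.1.0, overlay2 storage driver). A minimal sketch of talking to that socket with the Go SDK, assuming the github.com/docker/docker/client module; with no DOCKER_HOST set, FromEnv falls back to the same unix socket:

    package main

    import (
        "context"
        "fmt"

        "github.com/docker/docker/client"
    )

    func main() {
        cli, err := client.NewClientWithOpts(client.FromEnv,
            client.WithAPIVersionNegotiation())
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        info, err := cli.Info(context.Background())
        if err != nil {
            panic(err)
        }
        // Should match the daemon log above: 26.1.0 / overlay2.
        fmt.Println(info.ServerVersion, info.Driver)
    }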
Sep 12 23:53:30.515212 containerd[2151]: time="2025-09-12T23:53:30.515128697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:30.517050 containerd[2151]: time="2025-09-12T23:53:30.516971681Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687325" Sep 12 23:53:30.520674 containerd[2151]: time="2025-09-12T23:53:30.519340541Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:30.530229 containerd[2151]: time="2025-09-12T23:53:30.530153262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:30.532674 containerd[2151]: time="2025-09-12T23:53:30.532601550Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 2.937561459s" Sep 12 23:53:30.532851 containerd[2151]: time="2025-09-12T23:53:30.532819530Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\"" Sep 12 23:53:30.535513 containerd[2151]: time="2025-09-12T23:53:30.535251918Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 23:53:30.759600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 23:53:30.768024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:53:31.289059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:53:31.305463 (kubelet)[2725]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:53:31.405176 kubelet[2725]: E0912 23:53:31.405078 2725 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:53:31.413844 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:53:31.414280 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 23:53:32.844767 containerd[2151]: time="2025-09-12T23:53:32.844685013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:32.847862 containerd[2151]: time="2025-09-12T23:53:32.847789677Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459767" Sep 12 23:53:32.848934 containerd[2151]: time="2025-09-12T23:53:32.848859237Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:32.855680 containerd[2151]: time="2025-09-12T23:53:32.854981409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:32.858800 containerd[2151]: time="2025-09-12T23:53:32.857510829Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 2.321748323s" Sep 12 23:53:32.858800 containerd[2151]: time="2025-09-12T23:53:32.857581749Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\"" Sep 12 23:53:32.859449 containerd[2151]: time="2025-09-12T23:53:32.859407837Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 23:53:34.365669 containerd[2151]: time="2025-09-12T23:53:34.364048581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:34.366262 containerd[2151]: time="2025-09-12T23:53:34.366101181Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127506" Sep 12 23:53:34.367195 containerd[2151]: time="2025-09-12T23:53:34.367143633Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:34.373476 containerd[2151]: time="2025-09-12T23:53:34.373407429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:34.376092 containerd[2151]: time="2025-09-12T23:53:34.375999585Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.516282328s" Sep 12 23:53:34.376092 containerd[2151]: time="2025-09-12T23:53:34.376078641Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\"" Sep 12 23:53:34.377942 
containerd[2151]: time="2025-09-12T23:53:34.377873253Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 23:53:35.946989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3919079942.mount: Deactivated successfully. Sep 12 23:53:36.501769 containerd[2151]: time="2025-09-12T23:53:36.501682991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:36.504099 containerd[2151]: time="2025-09-12T23:53:36.504029075Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954907" Sep 12 23:53:36.506102 containerd[2151]: time="2025-09-12T23:53:36.506018303Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:36.510562 containerd[2151]: time="2025-09-12T23:53:36.509957879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:36.511620 containerd[2151]: time="2025-09-12T23:53:36.511546283Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 2.133607858s" Sep 12 23:53:36.511799 containerd[2151]: time="2025-09-12T23:53:36.511613843Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\"" Sep 12 23:53:36.513332 containerd[2151]: time="2025-09-12T23:53:36.513282947Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 23:53:37.094389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3149080839.mount: Deactivated successfully. 
Sep 12 23:53:38.735549 containerd[2151]: time="2025-09-12T23:53:38.735467474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:38.741757 containerd[2151]: time="2025-09-12T23:53:38.741612758Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Sep 12 23:53:38.754878 containerd[2151]: time="2025-09-12T23:53:38.754209926Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:38.761886 containerd[2151]: time="2025-09-12T23:53:38.761812094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:38.765189 containerd[2151]: time="2025-09-12T23:53:38.765117602Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.251632287s" Sep 12 23:53:38.765189 containerd[2151]: time="2025-09-12T23:53:38.765191858Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 23:53:38.766327 containerd[2151]: time="2025-09-12T23:53:38.766255622Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 23:53:39.232567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1502528132.mount: Deactivated successfully. 
Sep 12 23:53:39.240030 containerd[2151]: time="2025-09-12T23:53:39.239958145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:39.242017 containerd[2151]: time="2025-09-12T23:53:39.241916437Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 12 23:53:39.242386 containerd[2151]: time="2025-09-12T23:53:39.242299045Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:39.247690 containerd[2151]: time="2025-09-12T23:53:39.247026145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:39.250336 containerd[2151]: time="2025-09-12T23:53:39.249124861Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 482.804619ms" Sep 12 23:53:39.250336 containerd[2151]: time="2025-09-12T23:53:39.249195961Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 23:53:39.251564 containerd[2151]: time="2025-09-12T23:53:39.251243545Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 23:53:39.785916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1809680444.mount: Deactivated successfully. Sep 12 23:53:41.509701 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 23:53:41.520040 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:53:41.999956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:53:42.010811 (kubelet)[2869]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:53:42.112884 kubelet[2869]: E0912 23:53:42.112815 2869 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:53:42.118337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:53:42.120209 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 23:53:43.706210 containerd[2151]: time="2025-09-12T23:53:43.706055395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:43.708470 containerd[2151]: time="2025-09-12T23:53:43.708396127Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537161" Sep 12 23:53:43.710653 containerd[2151]: time="2025-09-12T23:53:43.710302171Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:43.718018 containerd[2151]: time="2025-09-12T23:53:43.717958867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:53:43.720660 containerd[2151]: time="2025-09-12T23:53:43.720553423Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.469242546s" Sep 12 23:53:43.720660 containerd[2151]: time="2025-09-12T23:53:43.720624295Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 12 23:53:46.194307 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 12 23:53:51.490361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:53:51.502145 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:53:51.568925 systemd[1]: Reloading requested from client PID 2911 ('systemctl') (unit session-7.scope)... Sep 12 23:53:51.568957 systemd[1]: Reloading... Sep 12 23:53:51.796679 zram_generator::config[2954]: No configuration found. Sep 12 23:53:52.060011 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:53:52.247226 systemd[1]: Reloading finished in 677 ms. Sep 12 23:53:52.331182 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 23:53:52.331404 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 23:53:52.332125 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:53:52.343575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:53:52.678985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:53:52.687104 (kubelet)[3024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:53:52.761315 kubelet[3024]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:53:52.761315 kubelet[3024]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 12 23:53:52.761315 kubelet[3024]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:53:52.763695 kubelet[3024]: I0912 23:53:52.762317 3024 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:53:55.719351 kubelet[3024]: I0912 23:53:55.719283 3024 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 23:53:55.719351 kubelet[3024]: I0912 23:53:55.719334 3024 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:53:55.720077 kubelet[3024]: I0912 23:53:55.719925 3024 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 23:53:55.773054 kubelet[3024]: E0912 23:53:55.772985 3024 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.203:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:55.773730 kubelet[3024]: I0912 23:53:55.773697 3024 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:53:55.788219 kubelet[3024]: E0912 23:53:55.788148 3024 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 23:53:55.788219 kubelet[3024]: I0912 23:53:55.788218 3024 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 23:53:55.796285 kubelet[3024]: I0912 23:53:55.795359 3024 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 23:53:55.801327 kubelet[3024]: I0912 23:53:55.801295 3024 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 23:53:55.801763 kubelet[3024]: I0912 23:53:55.801723 3024 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:53:55.802141 kubelet[3024]: I0912 23:53:55.801861 3024 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-203","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 23:53:55.802664 kubelet[3024]: I0912 23:53:55.802620 3024 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:53:55.802761 kubelet[3024]: I0912 23:53:55.802744 3024 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 23:53:55.803153 kubelet[3024]: I0912 23:53:55.803134 3024 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:53:55.809658 kubelet[3024]: I0912 23:53:55.809596 3024 kubelet.go:408] "Attempting to sync node with API server" Sep 12 23:53:55.809850 kubelet[3024]: I0912 23:53:55.809829 3024 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:53:55.809967 kubelet[3024]: I0912 23:53:55.809949 3024 kubelet.go:314] "Adding apiserver pod source" Sep 12 23:53:55.810233 kubelet[3024]: I0912 23:53:55.810212 3024 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:53:55.812748 kubelet[3024]: W0912 23:53:55.812608 3024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-203&limit=500&resourceVersion=0": dial tcp 172.31.18.203:6443: connect: connection refused Sep 12 23:53:55.812869 kubelet[3024]: E0912 23:53:55.812753 3024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.18.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-203&limit=500&resourceVersion=0\": dial tcp 172.31.18.203:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:55.817380 kubelet[3024]: W0912 23:53:55.817303 3024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.203:6443: connect: connection refused Sep 12 23:53:55.818193 kubelet[3024]: E0912 23:53:55.817579 3024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.203:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:55.818193 kubelet[3024]: I0912 23:53:55.818025 3024 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 23:53:55.820657 kubelet[3024]: I0912 23:53:55.819313 3024 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 23:53:55.820657 kubelet[3024]: W0912 23:53:55.819712 3024 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 23:53:55.824063 kubelet[3024]: I0912 23:53:55.824008 3024 server.go:1274] "Started kubelet" Sep 12 23:53:55.826411 kubelet[3024]: I0912 23:53:55.826355 3024 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:53:55.828291 kubelet[3024]: I0912 23:53:55.828256 3024 server.go:449] "Adding debug handlers to kubelet server" Sep 12 23:53:55.835536 kubelet[3024]: I0912 23:53:55.835409 3024 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:53:55.836188 kubelet[3024]: I0912 23:53:55.836119 3024 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:53:55.840658 kubelet[3024]: E0912 23:53:55.838053 3024 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.203:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.203:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-203.1864ae26b372b85f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-203,UID:ip-172-31-18-203,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-203,},FirstTimestamp:2025-09-12 23:53:55.823966303 +0000 UTC m=+3.130069024,LastTimestamp:2025-09-12 23:53:55.823966303 +0000 UTC m=+3.130069024,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-203,}" Sep 12 23:53:55.844040 kubelet[3024]: I0912 23:53:55.844004 3024 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:53:55.848830 kubelet[3024]: I0912 23:53:55.844330 3024 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:53:55.849190 kubelet[3024]: I0912 23:53:55.849167 3024 volume_manager.go:289] "Starting Kubelet Volume 
Manager" Sep 12 23:53:55.849803 kubelet[3024]: E0912 23:53:55.849772 3024 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-203\" not found" Sep 12 23:53:55.851338 kubelet[3024]: I0912 23:53:55.851303 3024 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 23:53:55.851598 kubelet[3024]: I0912 23:53:55.851578 3024 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:53:55.854498 kubelet[3024]: W0912 23:53:55.854412 3024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.203:6443: connect: connection refused Sep 12 23:53:55.855558 kubelet[3024]: E0912 23:53:55.854740 3024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.203:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:55.855558 kubelet[3024]: E0912 23:53:55.854882 3024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-203?timeout=10s\": dial tcp 172.31.18.203:6443: connect: connection refused" interval="200ms" Sep 12 23:53:55.855558 kubelet[3024]: I0912 23:53:55.855298 3024 factory.go:221] Registration of the systemd container factory successfully Sep 12 23:53:55.855558 kubelet[3024]: I0912 23:53:55.855428 3024 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:53:55.858010 kubelet[3024]: E0912 23:53:55.857950 3024 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:53:55.858921 kubelet[3024]: I0912 23:53:55.858892 3024 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:53:55.893683 kubelet[3024]: I0912 23:53:55.893032 3024 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:53:55.896803 kubelet[3024]: I0912 23:53:55.896749 3024 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 23:53:55.896974 kubelet[3024]: I0912 23:53:55.896955 3024 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 23:53:55.897129 kubelet[3024]: I0912 23:53:55.897111 3024 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 23:53:55.897363 kubelet[3024]: E0912 23:53:55.897323 3024 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:53:55.901989 kubelet[3024]: W0912 23:53:55.901907 3024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.203:6443: connect: connection refused Sep 12 23:53:55.903283 kubelet[3024]: E0912 23:53:55.903227 3024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.203:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:55.905980 kubelet[3024]: I0912 23:53:55.905928 3024 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 23:53:55.905980 kubelet[3024]: I0912 23:53:55.905963 3024 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 23:53:55.906173 kubelet[3024]: I0912 23:53:55.905996 3024 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:53:55.910026 kubelet[3024]: I0912 23:53:55.909834 3024 policy_none.go:49] "None policy: Start" Sep 12 23:53:55.911258 kubelet[3024]: I0912 23:53:55.911091 3024 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 23:53:55.911258 kubelet[3024]: I0912 23:53:55.911136 3024 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:53:55.923662 kubelet[3024]: I0912 23:53:55.922117 3024 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:53:55.923662 kubelet[3024]: I0912 23:53:55.922402 3024 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:53:55.923662 kubelet[3024]: I0912 23:53:55.922421 3024 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:53:55.925149 kubelet[3024]: I0912 23:53:55.925119 3024 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:53:55.928146 kubelet[3024]: E0912 23:53:55.928106 3024 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-203\" not found" Sep 12 23:53:56.024535 kubelet[3024]: I0912 23:53:56.024470 3024 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-203" Sep 12 23:53:56.025471 kubelet[3024]: E0912 23:53:56.025427 3024 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.203:6443/api/v1/nodes\": dial tcp 172.31.18.203:6443: connect: connection refused" node="ip-172-31-18-203" Sep 12 23:53:56.053089 kubelet[3024]: I0912 23:53:56.052943 3024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ba6cc04a4d66f77e0280f6703b8d1c2-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-203\" (UID: \"6ba6cc04a4d66f77e0280f6703b8d1c2\") " 
pod="kube-system/kube-apiserver-ip-172-31-18-203" Sep 12 23:53:56.053089 kubelet[3024]: I0912 23:53:56.053003 3024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1733405b2b62207e67103a7875d18810-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-203\" (UID: \"1733405b2b62207e67103a7875d18810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-203" Sep 12 23:53:56.053089 kubelet[3024]: I0912 23:53:56.053043 3024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1733405b2b62207e67103a7875d18810-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-203\" (UID: \"1733405b2b62207e67103a7875d18810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-203" Sep 12 23:53:56.053362 kubelet[3024]: I0912 23:53:56.053106 3024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1733405b2b62207e67103a7875d18810-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-203\" (UID: \"1733405b2b62207e67103a7875d18810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-203" Sep 12 23:53:56.053362 kubelet[3024]: I0912 23:53:56.053144 3024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1733405b2b62207e67103a7875d18810-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-203\" (UID: \"1733405b2b62207e67103a7875d18810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-203" Sep 12 23:53:56.053362 kubelet[3024]: I0912 23:53:56.053179 3024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1733405b2b62207e67103a7875d18810-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-203\" (UID: \"1733405b2b62207e67103a7875d18810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-203" Sep 12 23:53:56.053362 kubelet[3024]: I0912 23:53:56.053218 3024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b088b597c56825d30ae717888009a66e-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-203\" (UID: \"b088b597c56825d30ae717888009a66e\") " pod="kube-system/kube-scheduler-ip-172-31-18-203" Sep 12 23:53:56.053362 kubelet[3024]: I0912 23:53:56.053249 3024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ba6cc04a4d66f77e0280f6703b8d1c2-ca-certs\") pod \"kube-apiserver-ip-172-31-18-203\" (UID: \"6ba6cc04a4d66f77e0280f6703b8d1c2\") " pod="kube-system/kube-apiserver-ip-172-31-18-203" Sep 12 23:53:56.053666 kubelet[3024]: I0912 23:53:56.053285 3024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ba6cc04a4d66f77e0280f6703b8d1c2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-203\" (UID: \"6ba6cc04a4d66f77e0280f6703b8d1c2\") " pod="kube-system/kube-apiserver-ip-172-31-18-203" Sep 12 23:53:56.055407 kubelet[3024]: E0912 23:53:56.055322 3024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.18.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-203?timeout=10s\": dial tcp 172.31.18.203:6443: connect: connection refused" interval="400ms" Sep 12 23:53:56.228324 kubelet[3024]: I0912 23:53:56.227757 3024 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-203" Sep 12 23:53:56.228324 kubelet[3024]: E0912 23:53:56.228185 3024 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.203:6443/api/v1/nodes\": dial tcp 172.31.18.203:6443: connect: connection refused" node="ip-172-31-18-203" Sep 12 23:53:56.310238 containerd[2151]: time="2025-09-12T23:53:56.310095162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-203,Uid:6ba6cc04a4d66f77e0280f6703b8d1c2,Namespace:kube-system,Attempt:0,}" Sep 12 23:53:56.320072 containerd[2151]: time="2025-09-12T23:53:56.319158882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-203,Uid:1733405b2b62207e67103a7875d18810,Namespace:kube-system,Attempt:0,}" Sep 12 23:53:56.320505 containerd[2151]: time="2025-09-12T23:53:56.320462658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-203,Uid:b088b597c56825d30ae717888009a66e,Namespace:kube-system,Attempt:0,}" Sep 12 23:53:56.456551 kubelet[3024]: E0912 23:53:56.456467 3024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-203?timeout=10s\": dial tcp 172.31.18.203:6443: connect: connection refused" interval="800ms" Sep 12 23:53:56.631252 kubelet[3024]: I0912 23:53:56.631094 3024 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-203" Sep 12 23:53:56.632024 kubelet[3024]: E0912 23:53:56.631971 3024 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.203:6443/api/v1/nodes\": dial tcp 172.31.18.203:6443: connect: connection refused" node="ip-172-31-18-203" Sep 12 23:53:56.761119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1077955797.mount: Deactivated successfully. 
Sep 12 23:53:56.769405 containerd[2151]: time="2025-09-12T23:53:56.769315712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:53:56.772436 containerd[2151]: time="2025-09-12T23:53:56.772369220Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 12 23:53:56.773692 containerd[2151]: time="2025-09-12T23:53:56.773099156Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:53:56.775122 containerd[2151]: time="2025-09-12T23:53:56.775077392Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:53:56.776190 containerd[2151]: time="2025-09-12T23:53:56.776099576Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:53:56.777492 containerd[2151]: time="2025-09-12T23:53:56.777424544Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:53:56.777873 containerd[2151]: time="2025-09-12T23:53:56.777838544Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 23:53:56.785287 containerd[2151]: time="2025-09-12T23:53:56.785217644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 23:53:56.789120 containerd[2151]: time="2025-09-12T23:53:56.788729276Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 477.897818ms" Sep 12 23:53:56.795506 containerd[2151]: time="2025-09-12T23:53:56.795450776Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 476.18201ms" Sep 12 23:53:56.797259 containerd[2151]: time="2025-09-12T23:53:56.797189912Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 476.48069ms" Sep 12 23:53:56.873289 kubelet[3024]: W0912 23:53:56.872919 3024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.203:6443: connect: connection refused Sep 12 23:53:56.873289 
kubelet[3024]: E0912 23:53:56.873001 3024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.203:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:57.013171 containerd[2151]: time="2025-09-12T23:53:57.012979253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:53:57.013720 containerd[2151]: time="2025-09-12T23:53:57.013215377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:53:57.013720 containerd[2151]: time="2025-09-12T23:53:57.013291961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:53:57.015750 containerd[2151]: time="2025-09-12T23:53:57.015213089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:53:57.017280 containerd[2151]: time="2025-09-12T23:53:57.016826057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:53:57.017280 containerd[2151]: time="2025-09-12T23:53:57.016929077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:53:57.017280 containerd[2151]: time="2025-09-12T23:53:57.016991453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:53:57.017280 containerd[2151]: time="2025-09-12T23:53:57.017192033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:53:57.026011 containerd[2151]: time="2025-09-12T23:53:57.025714361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:53:57.026011 containerd[2151]: time="2025-09-12T23:53:57.025830197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:53:57.026513 containerd[2151]: time="2025-09-12T23:53:57.025868489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:53:57.027022 containerd[2151]: time="2025-09-12T23:53:57.026803445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:53:57.099911 kubelet[3024]: W0912 23:53:57.099833 3024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-203&limit=500&resourceVersion=0": dial tcp 172.31.18.203:6443: connect: connection refused Sep 12 23:53:57.101887 kubelet[3024]: E0912 23:53:57.101792 3024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-203&limit=500&resourceVersion=0\": dial tcp 172.31.18.203:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:57.174972 containerd[2151]: time="2025-09-12T23:53:57.174601170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-203,Uid:1733405b2b62207e67103a7875d18810,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d6c2850d6d1f0b680c8a693f505db8f9c33be273f87c0fb48a8753a7aecb059\"" Sep 12 23:53:57.192593 containerd[2151]: time="2025-09-12T23:53:57.192311730Z" level=info msg="CreateContainer within sandbox \"6d6c2850d6d1f0b680c8a693f505db8f9c33be273f87c0fb48a8753a7aecb059\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 23:53:57.222337 containerd[2151]: time="2025-09-12T23:53:57.221804706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-203,Uid:b088b597c56825d30ae717888009a66e,Namespace:kube-system,Attempt:0,} returns sandbox id \"54238bfcc484f0401ae0797d4769b387b8277bb253f47f76270e0daab5250b08\"" Sep 12 23:53:57.224690 containerd[2151]: time="2025-09-12T23:53:57.223299042Z" level=info msg="CreateContainer within sandbox \"6d6c2850d6d1f0b680c8a693f505db8f9c33be273f87c0fb48a8753a7aecb059\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"825e4105ba939c180acf363d17a7e00594a4bc255ec91cecfb59209fdaf32c33\"" Sep 12 23:53:57.225677 containerd[2151]: time="2025-09-12T23:53:57.225545478Z" level=info msg="StartContainer for \"825e4105ba939c180acf363d17a7e00594a4bc255ec91cecfb59209fdaf32c33\"" Sep 12 23:53:57.231222 containerd[2151]: time="2025-09-12T23:53:57.231157158Z" level=info msg="CreateContainer within sandbox \"54238bfcc484f0401ae0797d4769b387b8277bb253f47f76270e0daab5250b08\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 23:53:57.258043 containerd[2151]: time="2025-09-12T23:53:57.257863878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-203,Uid:6ba6cc04a4d66f77e0280f6703b8d1c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"61f7ea3e44af765e7a2d57fa15d44bed6f55e8fdffe440f17197b39669de741b\"" Sep 12 23:53:57.258213 kubelet[3024]: E0912 23:53:57.257964 3024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-203?timeout=10s\": dial tcp 172.31.18.203:6443: connect: connection refused" interval="1.6s" Sep 12 23:53:57.264764 containerd[2151]: time="2025-09-12T23:53:57.263339178Z" level=info msg="CreateContainer within sandbox \"54238bfcc484f0401ae0797d4769b387b8277bb253f47f76270e0daab5250b08\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6999d02a0e37514bddf01cf6b7be62ffc7e2fd5a884249c40fdca1334a5283f5\"" Sep 12 
23:53:57.267195 kubelet[3024]: W0912 23:53:57.267098 3024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.203:6443: connect: connection refused Sep 12 23:53:57.267460 kubelet[3024]: E0912 23:53:57.267418 3024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.203:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:57.268791 containerd[2151]: time="2025-09-12T23:53:57.268686378Z" level=info msg="StartContainer for \"6999d02a0e37514bddf01cf6b7be62ffc7e2fd5a884249c40fdca1334a5283f5\"" Sep 12 23:53:57.271405 kubelet[3024]: W0912 23:53:57.271237 3024 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.203:6443: connect: connection refused Sep 12 23:53:57.271405 kubelet[3024]: E0912 23:53:57.271366 3024 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.203:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:57.271895 containerd[2151]: time="2025-09-12T23:53:57.270578862Z" level=info msg="CreateContainer within sandbox \"61f7ea3e44af765e7a2d57fa15d44bed6f55e8fdffe440f17197b39669de741b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 23:53:57.303048 containerd[2151]: time="2025-09-12T23:53:57.302980434Z" level=info msg="CreateContainer within sandbox \"61f7ea3e44af765e7a2d57fa15d44bed6f55e8fdffe440f17197b39669de741b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b4a98eb612bae0f00e37565c82b711165b0f9bd6cb3e1af3f8c713b53d01b4c2\"" Sep 12 23:53:57.307669 containerd[2151]: time="2025-09-12T23:53:57.305859246Z" level=info msg="StartContainer for \"b4a98eb612bae0f00e37565c82b711165b0f9bd6cb3e1af3f8c713b53d01b4c2\"" Sep 12 23:53:57.439569 kubelet[3024]: I0912 23:53:57.439465 3024 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-203" Sep 12 23:53:57.440713 kubelet[3024]: E0912 23:53:57.440284 3024 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.203:6443/api/v1/nodes\": dial tcp 172.31.18.203:6443: connect: connection refused" node="ip-172-31-18-203" Sep 12 23:53:57.471123 containerd[2151]: time="2025-09-12T23:53:57.469711939Z" level=info msg="StartContainer for \"825e4105ba939c180acf363d17a7e00594a4bc255ec91cecfb59209fdaf32c33\" returns successfully" Sep 12 23:53:57.546468 containerd[2151]: time="2025-09-12T23:53:57.543839612Z" level=info msg="StartContainer for \"b4a98eb612bae0f00e37565c82b711165b0f9bd6cb3e1af3f8c713b53d01b4c2\" returns successfully" Sep 12 23:53:57.559959 containerd[2151]: time="2025-09-12T23:53:57.559869368Z" level=info msg="StartContainer for \"6999d02a0e37514bddf01cf6b7be62ffc7e2fd5a884249c40fdca1334a5283f5\" returns successfully" Sep 12 23:53:57.781763 kubelet[3024]: E0912 23:53:57.780262 3024 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.203:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:53:59.045464 kubelet[3024]: I0912 23:53:59.045415 3024 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-203" Sep 12 23:53:59.596845 update_engine[2124]: I20250912 23:53:59.596760 2124 update_attempter.cc:509] Updating boot flags... Sep 12 23:53:59.853012 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3314) Sep 12 23:54:00.616715 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3316) Sep 12 23:54:02.653553 kubelet[3024]: E0912 23:54:02.653492 3024 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-203\" not found" node="ip-172-31-18-203" Sep 12 23:54:02.664100 kubelet[3024]: I0912 23:54:02.664029 3024 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-203" Sep 12 23:54:02.664100 kubelet[3024]: E0912 23:54:02.664093 3024 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-18-203\": node \"ip-172-31-18-203\" not found" Sep 12 23:54:02.740759 kubelet[3024]: E0912 23:54:02.740568 3024 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-18-203.1864ae26b372b85f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-203,UID:ip-172-31-18-203,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-203,},FirstTimestamp:2025-09-12 23:53:55.823966303 +0000 UTC m=+3.130069024,LastTimestamp:2025-09-12 23:53:55.823966303 +0000 UTC m=+3.130069024,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-203,}" Sep 12 23:54:02.817571 kubelet[3024]: I0912 23:54:02.817483 3024 apiserver.go:52] "Watching apiserver" Sep 12 23:54:02.852290 kubelet[3024]: I0912 23:54:02.852227 3024 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 23:54:04.684733 systemd[1]: Reloading requested from client PID 3484 ('systemctl') (unit session-7.scope)... Sep 12 23:54:04.685197 systemd[1]: Reloading... Sep 12 23:54:04.887736 zram_generator::config[3536]: No configuration found. Sep 12 23:54:05.132760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:54:05.400004 systemd[1]: Reloading finished in 713 ms. Sep 12 23:54:05.480280 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:54:05.494043 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 23:54:05.494731 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:54:05.505383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:54:05.876141 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 23:54:05.910383 (kubelet)[3594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:54:06.002178 kubelet[3594]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:54:06.002178 kubelet[3594]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 23:54:06.002178 kubelet[3594]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:54:06.002178 kubelet[3594]: I0912 23:54:06.001376 3594 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:54:06.026687 kubelet[3594]: I0912 23:54:06.026186 3594 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 23:54:06.026687 kubelet[3594]: I0912 23:54:06.026247 3594 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:54:06.026863 kubelet[3594]: I0912 23:54:06.026796 3594 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 23:54:06.030233 kubelet[3594]: I0912 23:54:06.030188 3594 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 23:54:06.039019 kubelet[3594]: I0912 23:54:06.038973 3594 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:54:06.047172 kubelet[3594]: E0912 23:54:06.047060 3594 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 23:54:06.047172 kubelet[3594]: I0912 23:54:06.047126 3594 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 23:54:06.058543 kubelet[3594]: I0912 23:54:06.058492 3594 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 23:54:06.059329 kubelet[3594]: I0912 23:54:06.059289 3594 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 23:54:06.059658 kubelet[3594]: I0912 23:54:06.059566 3594 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:54:06.059978 kubelet[3594]: I0912 23:54:06.059667 3594 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-203","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 12 23:54:06.060151 kubelet[3594]: I0912 23:54:06.059994 3594 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:54:06.060151 kubelet[3594]: I0912 23:54:06.060016 3594 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 23:54:06.060151 kubelet[3594]: I0912 23:54:06.060081 3594 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:54:06.060351 kubelet[3594]: I0912 23:54:06.060300 3594 kubelet.go:408] "Attempting to sync node with API server" Sep 12 23:54:06.060351 kubelet[3594]: I0912 23:54:06.060326 3594 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:54:06.060442 kubelet[3594]: I0912 23:54:06.060361 3594 kubelet.go:314] "Adding apiserver pod source" Sep 12 23:54:06.060442 kubelet[3594]: I0912 23:54:06.060390 3594 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:54:06.063961 kubelet[3594]: I0912 23:54:06.063898 3594 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 23:54:06.069395 kubelet[3594]: I0912 23:54:06.068362 3594 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 23:54:06.070889 kubelet[3594]: I0912 23:54:06.070570 3594 server.go:1274] "Started kubelet" Sep 12 23:54:06.084186 kubelet[3594]: I0912 23:54:06.084124 3594 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:54:06.097474 kubelet[3594]: I0912 
23:54:06.097332 3594 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:54:06.100951 kubelet[3594]: I0912 23:54:06.100043 3594 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:54:06.105912 kubelet[3594]: I0912 23:54:06.105803 3594 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 23:54:06.108420 kubelet[3594]: E0912 23:54:06.108339 3594 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-203\" not found" Sep 12 23:54:06.114759 kubelet[3594]: I0912 23:54:06.112459 3594 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 23:54:06.114759 kubelet[3594]: I0912 23:54:06.112947 3594 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:54:06.118674 kubelet[3594]: I0912 23:54:06.098054 3594 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:54:06.123850 kubelet[3594]: I0912 23:54:06.123758 3594 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:54:06.147765 kubelet[3594]: I0912 23:54:06.147587 3594 server.go:449] "Adding debug handlers to kubelet server" Sep 12 23:54:06.192699 kubelet[3594]: I0912 23:54:06.154561 3594 factory.go:221] Registration of the systemd container factory successfully Sep 12 23:54:06.207710 kubelet[3594]: I0912 23:54:06.204470 3594 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:54:06.223273 kubelet[3594]: E0912 23:54:06.223233 3594 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:54:06.230613 kubelet[3594]: I0912 23:54:06.230575 3594 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:54:06.233773 kubelet[3594]: I0912 23:54:06.232876 3594 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:54:06.248502 kubelet[3594]: I0912 23:54:06.246761 3594 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 23:54:06.248502 kubelet[3594]: I0912 23:54:06.246806 3594 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 23:54:06.248502 kubelet[3594]: I0912 23:54:06.246836 3594 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 23:54:06.248502 kubelet[3594]: E0912 23:54:06.246914 3594 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:54:06.348810 kubelet[3594]: E0912 23:54:06.348750 3594 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 23:54:06.397042 kubelet[3594]: I0912 23:54:06.396984 3594 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 23:54:06.397042 kubelet[3594]: I0912 23:54:06.397022 3594 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 23:54:06.397216 kubelet[3594]: I0912 23:54:06.397061 3594 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:54:06.399518 kubelet[3594]: I0912 23:54:06.397764 3594 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 23:54:06.399518 kubelet[3594]: I0912 23:54:06.397803 3594 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 23:54:06.399518 kubelet[3594]: I0912 23:54:06.397860 3594 policy_none.go:49] "None policy: Start" Sep 12 23:54:06.401715 kubelet[3594]: I0912 23:54:06.400488 3594 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 23:54:06.401715 kubelet[3594]: I0912 23:54:06.400536 3594 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:54:06.401715 kubelet[3594]: I0912 23:54:06.400836 3594 state_mem.go:75] "Updated machine memory state" Sep 12 23:54:06.404815 kubelet[3594]: I0912 23:54:06.404773 3594 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:54:06.405284 kubelet[3594]: I0912 23:54:06.405249 3594 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:54:06.405468 kubelet[3594]: I0912 23:54:06.405412 3594 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:54:06.406301 kubelet[3594]: I0912 23:54:06.406254 3594 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:54:06.525795 kubelet[3594]: I0912 23:54:06.525750 3594 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-203" Sep 12 23:54:06.543580 kubelet[3594]: I0912 23:54:06.543487 3594 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-18-203" Sep 12 23:54:06.544557 kubelet[3594]: I0912 23:54:06.544086 3594 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-203" Sep 12 23:54:06.583464 kubelet[3594]: E0912 23:54:06.583358 3594 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-18-203\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-18-203" Sep 12 23:54:06.617745 kubelet[3594]: I0912 23:54:06.617590 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ba6cc04a4d66f77e0280f6703b8d1c2-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-203\" (UID: \"6ba6cc04a4d66f77e0280f6703b8d1c2\") " pod="kube-system/kube-apiserver-ip-172-31-18-203" Sep 12 23:54:06.617745 kubelet[3594]: I0912 23:54:06.617700 3594 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ba6cc04a4d66f77e0280f6703b8d1c2-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-203\" (UID: \"6ba6cc04a4d66f77e0280f6703b8d1c2\") " pod="kube-system/kube-apiserver-ip-172-31-18-203" Sep 12 23:54:06.617745 kubelet[3594]: I0912 23:54:06.617752 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1733405b2b62207e67103a7875d18810-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-203\" (UID: \"1733405b2b62207e67103a7875d18810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-203" Sep 12 23:54:06.618022 kubelet[3594]: I0912 23:54:06.617792 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b088b597c56825d30ae717888009a66e-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-203\" (UID: \"b088b597c56825d30ae717888009a66e\") " pod="kube-system/kube-scheduler-ip-172-31-18-203" Sep 12 23:54:06.618022 kubelet[3594]: I0912 23:54:06.617831 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ba6cc04a4d66f77e0280f6703b8d1c2-ca-certs\") pod \"kube-apiserver-ip-172-31-18-203\" (UID: \"6ba6cc04a4d66f77e0280f6703b8d1c2\") " pod="kube-system/kube-apiserver-ip-172-31-18-203" Sep 12 23:54:06.618022 kubelet[3594]: I0912 23:54:06.617865 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1733405b2b62207e67103a7875d18810-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-203\" (UID: \"1733405b2b62207e67103a7875d18810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-203" Sep 12 23:54:06.618022 kubelet[3594]: I0912 23:54:06.617900 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1733405b2b62207e67103a7875d18810-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-203\" (UID: \"1733405b2b62207e67103a7875d18810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-203" Sep 12 23:54:06.618022 kubelet[3594]: I0912 23:54:06.617936 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1733405b2b62207e67103a7875d18810-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-203\" (UID: \"1733405b2b62207e67103a7875d18810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-203" Sep 12 23:54:06.620286 kubelet[3594]: I0912 23:54:06.618686 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1733405b2b62207e67103a7875d18810-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-203\" (UID: \"1733405b2b62207e67103a7875d18810\") " pod="kube-system/kube-controller-manager-ip-172-31-18-203" Sep 12 23:54:07.062999 kubelet[3594]: I0912 23:54:07.062929 3594 apiserver.go:52] "Watching apiserver" Sep 12 23:54:07.112862 kubelet[3594]: I0912 23:54:07.112807 3594 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 23:54:07.460694 kubelet[3594]: I0912 23:54:07.457566 
3594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-203" podStartSLOduration=1.457540601 podStartE2EDuration="1.457540601s" podCreationTimestamp="2025-09-12 23:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:54:07.416719481 +0000 UTC m=+1.495522365" watchObservedRunningTime="2025-09-12 23:54:07.457540601 +0000 UTC m=+1.536343461" Sep 12 23:54:07.487625 kubelet[3594]: I0912 23:54:07.487294 3594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-203" podStartSLOduration=1.487267733 podStartE2EDuration="1.487267733s" podCreationTimestamp="2025-09-12 23:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:54:07.457785653 +0000 UTC m=+1.536588525" watchObservedRunningTime="2025-09-12 23:54:07.487267733 +0000 UTC m=+1.566070593" Sep 12 23:54:07.514543 kubelet[3594]: I0912 23:54:07.513360 3594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-203" podStartSLOduration=4.513338177 podStartE2EDuration="4.513338177s" podCreationTimestamp="2025-09-12 23:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:54:07.488321033 +0000 UTC m=+1.567123893" watchObservedRunningTime="2025-09-12 23:54:07.513338177 +0000 UTC m=+1.592141097" Sep 12 23:54:09.859803 kubelet[3594]: I0912 23:54:09.859734 3594 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 23:54:09.861097 containerd[2151]: time="2025-09-12T23:54:09.860902761Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
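At this point the controller manager has allocated the node its pod CIDR, 192.168.0.0/24, and the kubelet passes it through to containerd; the "No cni config template" message is containerd noting that no CNI network config exists yet, which is expected because the Calico components installed below are the ones that will drop it. Two quick checks for this state, assuming the conventional CNI config directory (the kubelet's CNI paths are not printed in this log):

    # the CIDR the controller-manager wrote into the Node object
    kubectl get node ip-172-31-18-203 -o jsonpath='{.spec.podCIDR}{"\n"}'

    # empty until a network plugin writes its config
    ls /etc/cni/net.d/
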
Sep 12 23:54:09.861739 kubelet[3594]: I0912 23:54:09.861336 3594 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 23:54:10.852332 kubelet[3594]: I0912 23:54:10.852268 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c7d9376-9830-48ca-86e6-76b7586a3670-lib-modules\") pod \"kube-proxy-9wlbq\" (UID: \"5c7d9376-9830-48ca-86e6-76b7586a3670\") " pod="kube-system/kube-proxy-9wlbq" Sep 12 23:54:10.852520 kubelet[3594]: I0912 23:54:10.852339 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gftrl\" (UniqueName: \"kubernetes.io/projected/5c7d9376-9830-48ca-86e6-76b7586a3670-kube-api-access-gftrl\") pod \"kube-proxy-9wlbq\" (UID: \"5c7d9376-9830-48ca-86e6-76b7586a3670\") " pod="kube-system/kube-proxy-9wlbq" Sep 12 23:54:10.852520 kubelet[3594]: I0912 23:54:10.852389 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c7d9376-9830-48ca-86e6-76b7586a3670-xtables-lock\") pod \"kube-proxy-9wlbq\" (UID: \"5c7d9376-9830-48ca-86e6-76b7586a3670\") " pod="kube-system/kube-proxy-9wlbq" Sep 12 23:54:10.852520 kubelet[3594]: I0912 23:54:10.852426 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c7d9376-9830-48ca-86e6-76b7586a3670-kube-proxy\") pod \"kube-proxy-9wlbq\" (UID: \"5c7d9376-9830-48ca-86e6-76b7586a3670\") " pod="kube-system/kube-proxy-9wlbq" Sep 12 23:54:11.055490 kubelet[3594]: I0912 23:54:11.055333 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d4d4d050-49de-4a47-a0c3-0654fa963062-var-lib-calico\") pod \"tigera-operator-58fc44c59b-8nz2k\" (UID: \"d4d4d050-49de-4a47-a0c3-0654fa963062\") " pod="tigera-operator/tigera-operator-58fc44c59b-8nz2k" Sep 12 23:54:11.055490 kubelet[3594]: I0912 23:54:11.055411 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjqfn\" (UniqueName: \"kubernetes.io/projected/d4d4d050-49de-4a47-a0c3-0654fa963062-kube-api-access-rjqfn\") pod \"tigera-operator-58fc44c59b-8nz2k\" (UID: \"d4d4d050-49de-4a47-a0c3-0654fa963062\") " pod="tigera-operator/tigera-operator-58fc44c59b-8nz2k" Sep 12 23:54:11.080013 containerd[2151]: time="2025-09-12T23:54:11.079882927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9wlbq,Uid:5c7d9376-9830-48ca-86e6-76b7586a3670,Namespace:kube-system,Attempt:0,}" Sep 12 23:54:11.122184 containerd[2151]: time="2025-09-12T23:54:11.120417499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:54:11.122786 containerd[2151]: time="2025-09-12T23:54:11.121277995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:54:11.125046 containerd[2151]: time="2025-09-12T23:54:11.124308535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:54:11.126130 containerd[2151]: time="2025-09-12T23:54:11.125798971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:54:11.236987 containerd[2151]: time="2025-09-12T23:54:11.236865296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9wlbq,Uid:5c7d9376-9830-48ca-86e6-76b7586a3670,Namespace:kube-system,Attempt:0,} returns sandbox id \"735f861a917eef6e4280ab1a1dcb86e87263f2f3578b6534e7bfdd5cab4d0e28\"" Sep 12 23:54:11.248095 containerd[2151]: time="2025-09-12T23:54:11.248006720Z" level=info msg="CreateContainer within sandbox \"735f861a917eef6e4280ab1a1dcb86e87263f2f3578b6534e7bfdd5cab4d0e28\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 23:54:11.273429 containerd[2151]: time="2025-09-12T23:54:11.273333416Z" level=info msg="CreateContainer within sandbox \"735f861a917eef6e4280ab1a1dcb86e87263f2f3578b6534e7bfdd5cab4d0e28\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8113d32643d0c45b4cfc5e2a3b8052abd79855d83b9fa5bca6d8dc373e5fe2c1\"" Sep 12 23:54:11.275660 containerd[2151]: time="2025-09-12T23:54:11.274548368Z" level=info msg="StartContainer for \"8113d32643d0c45b4cfc5e2a3b8052abd79855d83b9fa5bca6d8dc373e5fe2c1\"" Sep 12 23:54:11.280424 containerd[2151]: time="2025-09-12T23:54:11.280244204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-8nz2k,Uid:d4d4d050-49de-4a47-a0c3-0654fa963062,Namespace:tigera-operator,Attempt:0,}" Sep 12 23:54:11.337532 containerd[2151]: time="2025-09-12T23:54:11.334044608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:54:11.337532 containerd[2151]: time="2025-09-12T23:54:11.337473560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:54:11.338071 containerd[2151]: time="2025-09-12T23:54:11.337515932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:54:11.338071 containerd[2151]: time="2025-09-12T23:54:11.337930460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:54:11.469618 containerd[2151]: time="2025-09-12T23:54:11.469476477Z" level=info msg="StartContainer for \"8113d32643d0c45b4cfc5e2a3b8052abd79855d83b9fa5bca6d8dc373e5fe2c1\" returns successfully" Sep 12 23:54:11.481082 containerd[2151]: time="2025-09-12T23:54:11.480397605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-8nz2k,Uid:d4d4d050-49de-4a47-a0c3-0654fa963062,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ad2c23730bddc1a225e5e2a4d2fdfc1414ab9521b47858d8fd3f5ae7e38fe019\"" Sep 12 23:54:11.485548 containerd[2151]: time="2025-09-12T23:54:11.485498253Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 23:54:12.914452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3411067565.mount: Deactivated successfully. 
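With the control plane reachable, scheduled workloads now arrive: the kube-proxy DaemonSet pod (kube-proxy-9wlbq) and the Tigera operator pod (tigera-operator-58fc44c59b-8nz2k) each get a sandbox, and the operator image pull from quay.io begins. The same rollout viewed from the API side would look roughly like this, using the pod names as they appear in this log:

    # kube-proxy should be Running on the node within a few seconds
    kubectl -n kube-system get pod kube-proxy-9wlbq -o wide

    # the operator pod stays in ContainerCreating until its image pull finishes
    kubectl -n tigera-operator get pod tigera-operator-58fc44c59b-8nz2k
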
Sep 12 23:54:13.662806 containerd[2151]: time="2025-09-12T23:54:13.661844124Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:54:13.664414 containerd[2151]: time="2025-09-12T23:54:13.663948228Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 12 23:54:13.666684 containerd[2151]: time="2025-09-12T23:54:13.665412492Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:54:13.672289 containerd[2151]: time="2025-09-12T23:54:13.672235800Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:54:13.674169 containerd[2151]: time="2025-09-12T23:54:13.674115852Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.188313939s" Sep 12 23:54:13.674371 containerd[2151]: time="2025-09-12T23:54:13.674339088Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 12 23:54:13.680264 containerd[2151]: time="2025-09-12T23:54:13.680194644Z" level=info msg="CreateContainer within sandbox \"ad2c23730bddc1a225e5e2a4d2fdfc1414ab9521b47858d8fd3f5ae7e38fe019\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 23:54:13.700925 containerd[2151]: time="2025-09-12T23:54:13.700860672Z" level=info msg="CreateContainer within sandbox \"ad2c23730bddc1a225e5e2a4d2fdfc1414ab9521b47858d8fd3f5ae7e38fe019\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cf60cc61a7421260f240e6795443ada83c058bce28976e1b52befe86b5a04383\"" Sep 12 23:54:13.703526 containerd[2151]: time="2025-09-12T23:54:13.701978940Z" level=info msg="StartContainer for \"cf60cc61a7421260f240e6795443ada83c058bce28976e1b52befe86b5a04383\"" Sep 12 23:54:13.822701 containerd[2151]: time="2025-09-12T23:54:13.822610801Z" level=info msg="StartContainer for \"cf60cc61a7421260f240e6795443ada83c058bce28976e1b52befe86b5a04383\" returns successfully" Sep 12 23:54:14.357039 kubelet[3594]: I0912 23:54:14.356922 3594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9wlbq" podStartSLOduration=4.356897375 podStartE2EDuration="4.356897375s" podCreationTimestamp="2025-09-12 23:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:54:12.345242301 +0000 UTC m=+6.424045173" watchObservedRunningTime="2025-09-12 23:54:14.356897375 +0000 UTC m=+8.435700235" Sep 12 23:54:21.141059 sudo[2499]: pam_unix(sudo:session): session closed for user root Sep 12 23:54:21.167013 sshd[2495]: pam_unix(sshd:session): session closed for user core Sep 12 23:54:21.185218 systemd[1]: sshd@6-172.31.18.203:22-147.75.109.163:59962.service: Deactivated successfully. Sep 12 23:54:21.199375 systemd[1]: session-7.scope: Deactivated successfully. 
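The operator image lands after roughly 2.19s and its container starts; from here the Tigera operator reconciles cluster configuration into the calico-typha and calico-node pods whose volume attachments fill the rest of this log. The configuration it consumed is never shown here; typically it is an Installation custom resource, and the manifest below is only a minimal sketch of that resource kind, not the one actually applied on this node:

    # hypothetical minimal Installation CR of the kind the operator watches
    kubectl apply -f - <<'EOF'
    apiVersion: operator.tigera.io/v1
    kind: Installation
    metadata:
      name: default
    spec:
      calicoNetwork:
        ipPools:
        - cidr: 192.168.0.0/16
    EOF
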
Sep 12 23:54:21.203782 systemd-logind[2118]: Session 7 logged out. Waiting for processes to exit. Sep 12 23:54:21.210770 systemd-logind[2118]: Removed session 7. Sep 12 23:54:34.800690 kubelet[3594]: I0912 23:54:34.795970 3594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-8nz2k" podStartSLOduration=22.602711346 podStartE2EDuration="24.795946017s" podCreationTimestamp="2025-09-12 23:54:10 +0000 UTC" firstStartedPulling="2025-09-12 23:54:11.483023913 +0000 UTC m=+5.561826785" lastFinishedPulling="2025-09-12 23:54:13.676258596 +0000 UTC m=+7.755061456" observedRunningTime="2025-09-12 23:54:14.359366051 +0000 UTC m=+8.438168911" watchObservedRunningTime="2025-09-12 23:54:34.795946017 +0000 UTC m=+28.874748913" Sep 12 23:54:34.827283 kubelet[3594]: I0912 23:54:34.826867 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c7d5a1c-3bee-45d4-a997-f92025f9d18d-tigera-ca-bundle\") pod \"calico-typha-5b4d977b8f-dmwst\" (UID: \"1c7d5a1c-3bee-45d4-a997-f92025f9d18d\") " pod="calico-system/calico-typha-5b4d977b8f-dmwst" Sep 12 23:54:34.830267 kubelet[3594]: I0912 23:54:34.829408 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1c7d5a1c-3bee-45d4-a997-f92025f9d18d-typha-certs\") pod \"calico-typha-5b4d977b8f-dmwst\" (UID: \"1c7d5a1c-3bee-45d4-a997-f92025f9d18d\") " pod="calico-system/calico-typha-5b4d977b8f-dmwst" Sep 12 23:54:34.831806 kubelet[3594]: I0912 23:54:34.831726 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsb6f\" (UniqueName: \"kubernetes.io/projected/1c7d5a1c-3bee-45d4-a997-f92025f9d18d-kube-api-access-zsb6f\") pod \"calico-typha-5b4d977b8f-dmwst\" (UID: \"1c7d5a1c-3bee-45d4-a997-f92025f9d18d\") " pod="calico-system/calico-typha-5b4d977b8f-dmwst" Sep 12 23:54:35.123188 containerd[2151]: time="2025-09-12T23:54:35.119624358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b4d977b8f-dmwst,Uid:1c7d5a1c-3bee-45d4-a997-f92025f9d18d,Namespace:calico-system,Attempt:0,}" Sep 12 23:54:35.146750 kubelet[3594]: I0912 23:54:35.146171 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4-lib-modules\") pod \"calico-node-mpfz2\" (UID: \"e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4\") " pod="calico-system/calico-node-mpfz2" Sep 12 23:54:35.146750 kubelet[3594]: I0912 23:54:35.146249 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4-node-certs\") pod \"calico-node-mpfz2\" (UID: \"e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4\") " pod="calico-system/calico-node-mpfz2" Sep 12 23:54:35.146750 kubelet[3594]: I0912 23:54:35.146289 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4-var-run-calico\") pod \"calico-node-mpfz2\" (UID: \"e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4\") " pod="calico-system/calico-node-mpfz2" Sep 12 23:54:35.146750 kubelet[3594]: I0912 23:54:35.146335 3594 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4-xtables-lock\") pod \"calico-node-mpfz2\" (UID: \"e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4\") " pod="calico-system/calico-node-mpfz2" Sep 12 23:54:35.146750 kubelet[3594]: I0912 23:54:35.146376 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4-flexvol-driver-host\") pod \"calico-node-mpfz2\" (UID: \"e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4\") " pod="calico-system/calico-node-mpfz2" Sep 12 23:54:35.148908 kubelet[3594]: I0912 23:54:35.146460 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4-tigera-ca-bundle\") pod \"calico-node-mpfz2\" (UID: \"e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4\") " pod="calico-system/calico-node-mpfz2" Sep 12 23:54:35.148908 kubelet[3594]: I0912 23:54:35.146511 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4-var-lib-calico\") pod \"calico-node-mpfz2\" (UID: \"e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4\") " pod="calico-system/calico-node-mpfz2" Sep 12 23:54:35.148908 kubelet[3594]: I0912 23:54:35.146553 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4-cni-net-dir\") pod \"calico-node-mpfz2\" (UID: \"e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4\") " pod="calico-system/calico-node-mpfz2" Sep 12 23:54:35.148908 kubelet[3594]: I0912 23:54:35.146596 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4-cni-bin-dir\") pod \"calico-node-mpfz2\" (UID: \"e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4\") " pod="calico-system/calico-node-mpfz2" Sep 12 23:54:35.148908 kubelet[3594]: I0912 23:54:35.146675 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4-cni-log-dir\") pod \"calico-node-mpfz2\" (UID: \"e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4\") " pod="calico-system/calico-node-mpfz2" Sep 12 23:54:35.150069 kubelet[3594]: I0912 23:54:35.146999 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4-policysync\") pod \"calico-node-mpfz2\" (UID: \"e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4\") " pod="calico-system/calico-node-mpfz2" Sep 12 23:54:35.150069 kubelet[3594]: I0912 23:54:35.147102 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b5pz\" (UniqueName: \"kubernetes.io/projected/e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4-kube-api-access-6b5pz\") pod \"calico-node-mpfz2\" (UID: \"e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4\") " pod="calico-system/calico-node-mpfz2" Sep 12 23:54:35.236372 containerd[2151]: time="2025-09-12T23:54:35.235328419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:54:35.236372 containerd[2151]: time="2025-09-12T23:54:35.235564795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:54:35.237131 containerd[2151]: time="2025-09-12T23:54:35.235615267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:54:35.238043 containerd[2151]: time="2025-09-12T23:54:35.237495247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:54:35.267902 kubelet[3594]: E0912 23:54:35.267688 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.267902 kubelet[3594]: W0912 23:54:35.267769 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.267902 kubelet[3594]: E0912 23:54:35.267859 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.275582 kubelet[3594]: E0912 23:54:35.272171 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.275582 kubelet[3594]: W0912 23:54:35.272445 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.275582 kubelet[3594]: E0912 23:54:35.273281 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.281999 kubelet[3594]: E0912 23:54:35.281930 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.283153 kubelet[3594]: W0912 23:54:35.282554 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.283153 kubelet[3594]: E0912 23:54:35.283042 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.287316 kubelet[3594]: E0912 23:54:35.286318 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.287316 kubelet[3594]: W0912 23:54:35.286885 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.287316 kubelet[3594]: E0912 23:54:35.286944 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:35.290994 kubelet[3594]: E0912 23:54:35.289971 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.290994 kubelet[3594]: W0912 23:54:35.290225 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.290994 kubelet[3594]: E0912 23:54:35.290267 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.336518 kubelet[3594]: E0912 23:54:35.333283 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.336518 kubelet[3594]: W0912 23:54:35.333600 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.336518 kubelet[3594]: E0912 23:54:35.333707 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.352790 kubelet[3594]: E0912 23:54:35.352603 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.353028 kubelet[3594]: W0912 23:54:35.353000 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.353279 kubelet[3594]: E0912 23:54:35.353249 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.417975 containerd[2151]: time="2025-09-12T23:54:35.417733592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mpfz2,Uid:e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4,Namespace:calico-system,Attempt:0,}" Sep 12 23:54:35.483382 kubelet[3594]: E0912 23:54:35.482392 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vb427" podUID="e874f212-ec82-4dc1-a7f2-b6ff94f1cb99" Sep 12 23:54:35.533388 kubelet[3594]: E0912 23:54:35.532449 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.533388 kubelet[3594]: W0912 23:54:35.532547 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.533388 kubelet[3594]: E0912 23:54:35.532583 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:35.535969 kubelet[3594]: E0912 23:54:35.534992 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.535969 kubelet[3594]: W0912 23:54:35.535054 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.535969 kubelet[3594]: E0912 23:54:35.535114 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.537601 kubelet[3594]: E0912 23:54:35.537560 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.540284 kubelet[3594]: W0912 23:54:35.537863 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.540284 kubelet[3594]: E0912 23:54:35.537920 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.540284 kubelet[3594]: E0912 23:54:35.540213 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.540914 kubelet[3594]: W0912 23:54:35.540243 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.540914 kubelet[3594]: E0912 23:54:35.540796 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.541964 kubelet[3594]: E0912 23:54:35.541921 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.542335 kubelet[3594]: W0912 23:54:35.542290 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.542577 kubelet[3594]: E0912 23:54:35.542545 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.543854 kubelet[3594]: E0912 23:54:35.543796 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.544039 kubelet[3594]: W0912 23:54:35.544010 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.544377 kubelet[3594]: E0912 23:54:35.544232 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:35.545902 kubelet[3594]: E0912 23:54:35.545862 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.547724 kubelet[3594]: W0912 23:54:35.546362 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.547724 kubelet[3594]: E0912 23:54:35.546424 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.549004 kubelet[3594]: E0912 23:54:35.548958 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.549252 kubelet[3594]: W0912 23:54:35.549217 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.549480 kubelet[3594]: E0912 23:54:35.549451 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.553818 kubelet[3594]: E0912 23:54:35.553239 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.553818 kubelet[3594]: W0912 23:54:35.553278 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.553818 kubelet[3594]: E0912 23:54:35.553313 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.554321 kubelet[3594]: E0912 23:54:35.554284 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.554512 kubelet[3594]: W0912 23:54:35.554477 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.555232 kubelet[3594]: E0912 23:54:35.554797 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.557127 kubelet[3594]: E0912 23:54:35.556846 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.557127 kubelet[3594]: W0912 23:54:35.556886 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.557127 kubelet[3594]: E0912 23:54:35.556926 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:35.560288 kubelet[3594]: E0912 23:54:35.560067 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.560288 kubelet[3594]: W0912 23:54:35.560103 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.560288 kubelet[3594]: E0912 23:54:35.560136 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.565023 kubelet[3594]: E0912 23:54:35.564007 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.565023 kubelet[3594]: W0912 23:54:35.564049 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.565023 kubelet[3594]: E0912 23:54:35.564096 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.565023 kubelet[3594]: I0912 23:54:35.564143 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e874f212-ec82-4dc1-a7f2-b6ff94f1cb99-kubelet-dir\") pod \"csi-node-driver-vb427\" (UID: \"e874f212-ec82-4dc1-a7f2-b6ff94f1cb99\") " pod="calico-system/csi-node-driver-vb427" Sep 12 23:54:35.570542 kubelet[3594]: E0912 23:54:35.568209 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.570542 kubelet[3594]: W0912 23:54:35.568249 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.571553 kubelet[3594]: E0912 23:54:35.571028 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.572187 kubelet[3594]: I0912 23:54:35.571900 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e874f212-ec82-4dc1-a7f2-b6ff94f1cb99-registration-dir\") pod \"csi-node-driver-vb427\" (UID: \"e874f212-ec82-4dc1-a7f2-b6ff94f1cb99\") " pod="calico-system/csi-node-driver-vb427" Sep 12 23:54:35.573651 kubelet[3594]: E0912 23:54:35.573347 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.573651 kubelet[3594]: W0912 23:54:35.573386 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.578002 kubelet[3594]: E0912 23:54:35.574604 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:35.581969 kubelet[3594]: E0912 23:54:35.581749 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.581969 kubelet[3594]: W0912 23:54:35.581787 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.583556 kubelet[3594]: E0912 23:54:35.583385 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.586873 kubelet[3594]: E0912 23:54:35.586648 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.586873 kubelet[3594]: W0912 23:54:35.586683 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.592833 kubelet[3594]: E0912 23:54:35.589573 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.592833 kubelet[3594]: E0912 23:54:35.590551 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.592833 kubelet[3594]: W0912 23:54:35.590582 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.592833 kubelet[3594]: I0912 23:54:35.592221 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e874f212-ec82-4dc1-a7f2-b6ff94f1cb99-socket-dir\") pod \"csi-node-driver-vb427\" (UID: \"e874f212-ec82-4dc1-a7f2-b6ff94f1cb99\") " pod="calico-system/csi-node-driver-vb427" Sep 12 23:54:35.592833 kubelet[3594]: E0912 23:54:35.592349 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.593926 kubelet[3594]: E0912 23:54:35.593856 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.594411 kubelet[3594]: W0912 23:54:35.594366 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.596840 kubelet[3594]: E0912 23:54:35.595949 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:35.600740 kubelet[3594]: E0912 23:54:35.598033 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.600740 kubelet[3594]: W0912 23:54:35.598073 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.602761 kubelet[3594]: E0912 23:54:35.601889 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.603820 kubelet[3594]: E0912 23:54:35.603778 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.604882 containerd[2151]: time="2025-09-12T23:54:35.602653425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:54:35.606058 kubelet[3594]: W0912 23:54:35.605115 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.606058 kubelet[3594]: E0912 23:54:35.605237 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.608538 containerd[2151]: time="2025-09-12T23:54:35.608118609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:54:35.608538 containerd[2151]: time="2025-09-12T23:54:35.608356497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:54:35.610724 containerd[2151]: time="2025-09-12T23:54:35.609107085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:54:35.610897 kubelet[3594]: E0912 23:54:35.609759 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.610897 kubelet[3594]: W0912 23:54:35.609790 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.611406 kubelet[3594]: E0912 23:54:35.611279 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.613952 kubelet[3594]: E0912 23:54:35.613911 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.616799 kubelet[3594]: W0912 23:54:35.614183 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.617053 kubelet[3594]: E0912 23:54:35.617015 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:35.619217 kubelet[3594]: E0912 23:54:35.618432 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.619494 kubelet[3594]: W0912 23:54:35.619400 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.622047 kubelet[3594]: E0912 23:54:35.621501 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.628719 kubelet[3594]: E0912 23:54:35.626422 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.628719 kubelet[3594]: W0912 23:54:35.626460 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.628719 kubelet[3594]: E0912 23:54:35.626503 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.633995 kubelet[3594]: E0912 23:54:35.632887 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.633995 kubelet[3594]: W0912 23:54:35.632925 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.633995 kubelet[3594]: E0912 23:54:35.633498 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.637786 kubelet[3594]: E0912 23:54:35.635850 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.639454 kubelet[3594]: W0912 23:54:35.638338 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.640616 kubelet[3594]: E0912 23:54:35.639599 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.644813 kubelet[3594]: E0912 23:54:35.644736 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.645785 kubelet[3594]: W0912 23:54:35.645733 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.646037 kubelet[3594]: E0912 23:54:35.646010 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:35.652814 kubelet[3594]: E0912 23:54:35.652768 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.654509 kubelet[3594]: W0912 23:54:35.653114 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.654509 kubelet[3594]: E0912 23:54:35.653179 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.756052 kubelet[3594]: E0912 23:54:35.755771 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.756052 kubelet[3594]: W0912 23:54:35.755806 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.756052 kubelet[3594]: E0912 23:54:35.755882 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.762981 kubelet[3594]: E0912 23:54:35.761306 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.762981 kubelet[3594]: W0912 23:54:35.761343 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.762981 kubelet[3594]: E0912 23:54:35.762188 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.767262 kubelet[3594]: E0912 23:54:35.766929 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.767262 kubelet[3594]: W0912 23:54:35.766962 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.767262 kubelet[3594]: E0912 23:54:35.767131 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:35.767262 kubelet[3594]: I0912 23:54:35.767178 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e874f212-ec82-4dc1-a7f2-b6ff94f1cb99-varrun\") pod \"csi-node-driver-vb427\" (UID: \"e874f212-ec82-4dc1-a7f2-b6ff94f1cb99\") " pod="calico-system/csi-node-driver-vb427" Sep 12 23:54:35.768922 kubelet[3594]: E0912 23:54:35.768438 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.768922 kubelet[3594]: W0912 23:54:35.768528 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.769713 kubelet[3594]: E0912 23:54:35.769428 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.774670 kubelet[3594]: E0912 23:54:35.773490 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.777352 kubelet[3594]: W0912 23:54:35.776059 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.777352 kubelet[3594]: E0912 23:54:35.776454 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.780860 kubelet[3594]: E0912 23:54:35.780017 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.780860 kubelet[3594]: W0912 23:54:35.780052 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.781367 containerd[2151]: time="2025-09-12T23:54:35.781291582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b4d977b8f-dmwst,Uid:1c7d5a1c-3bee-45d4-a997-f92025f9d18d,Namespace:calico-system,Attempt:0,} returns sandbox id \"15b385b68567e968f0521a77c937490d649fabed8e1350a9d33d6b4d5f2970ad\"" Sep 12 23:54:35.782199 kubelet[3594]: E0912 23:54:35.782026 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:35.782199 kubelet[3594]: I0912 23:54:35.782082 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9t6g\" (UniqueName: \"kubernetes.io/projected/e874f212-ec82-4dc1-a7f2-b6ff94f1cb99-kube-api-access-s9t6g\") pod \"csi-node-driver-vb427\" (UID: \"e874f212-ec82-4dc1-a7f2-b6ff94f1cb99\") " pod="calico-system/csi-node-driver-vb427" Sep 12 23:54:35.782934 kubelet[3594]: E0912 23:54:35.782730 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.782934 kubelet[3594]: W0912 23:54:35.782759 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.783155 kubelet[3594]: E0912 23:54:35.783123 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.783534 kubelet[3594]: E0912 23:54:35.783478 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.783534 kubelet[3594]: W0912 23:54:35.783502 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.784085 kubelet[3594]: E0912 23:54:35.783870 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.785352 kubelet[3594]: E0912 23:54:35.785133 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.785352 kubelet[3594]: W0912 23:54:35.785166 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.785586 kubelet[3594]: E0912 23:54:35.785554 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.786281 kubelet[3594]: E0912 23:54:35.786130 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.786281 kubelet[3594]: W0912 23:54:35.786163 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.786626 kubelet[3594]: E0912 23:54:35.786501 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:35.787264 kubelet[3594]: E0912 23:54:35.787123 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.787264 kubelet[3594]: W0912 23:54:35.787155 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.787726 kubelet[3594]: E0912 23:54:35.787532 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.788398 kubelet[3594]: E0912 23:54:35.788188 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.788398 kubelet[3594]: W0912 23:54:35.788220 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.789172 kubelet[3594]: E0912 23:54:35.788704 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.789836 kubelet[3594]: E0912 23:54:35.789803 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.789992 kubelet[3594]: W0912 23:54:35.789965 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.790224 kubelet[3594]: E0912 23:54:35.790198 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.793017 kubelet[3594]: E0912 23:54:35.791528 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.793017 kubelet[3594]: W0912 23:54:35.791557 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.793194 containerd[2151]: time="2025-09-12T23:54:35.792479770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 23:54:35.794082 kubelet[3594]: E0912 23:54:35.793477 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.795487 kubelet[3594]: E0912 23:54:35.795133 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.795487 kubelet[3594]: W0912 23:54:35.795215 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.796123 kubelet[3594]: E0912 23:54:35.795898 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:35.796398 kubelet[3594]: E0912 23:54:35.796367 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.796958 kubelet[3594]: W0912 23:54:35.796877 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.797419 kubelet[3594]: E0912 23:54:35.797344 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.798134 kubelet[3594]: E0912 23:54:35.798059 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.798134 kubelet[3594]: W0912 23:54:35.798093 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.799438 kubelet[3594]: E0912 23:54:35.798480 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.800863 kubelet[3594]: E0912 23:54:35.800097 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.800863 kubelet[3594]: W0912 23:54:35.800134 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.800863 kubelet[3594]: E0912 23:54:35.800777 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.805335 kubelet[3594]: E0912 23:54:35.804079 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.805335 kubelet[3594]: W0912 23:54:35.804112 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.805335 kubelet[3594]: E0912 23:54:35.804145 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.807675 kubelet[3594]: E0912 23:54:35.806079 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.807675 kubelet[3594]: W0912 23:54:35.806135 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.807675 kubelet[3594]: E0912 23:54:35.806172 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:35.808558 kubelet[3594]: E0912 23:54:35.808416 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.808558 kubelet[3594]: W0912 23:54:35.808455 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.808558 kubelet[3594]: E0912 23:54:35.808491 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.891179 kubelet[3594]: E0912 23:54:35.890431 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.891179 kubelet[3594]: W0912 23:54:35.890503 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.891179 kubelet[3594]: E0912 23:54:35.890544 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.894454 kubelet[3594]: E0912 23:54:35.893918 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.894454 kubelet[3594]: W0912 23:54:35.893964 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.896357 kubelet[3594]: E0912 23:54:35.895896 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.898460 kubelet[3594]: E0912 23:54:35.897911 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.898460 kubelet[3594]: W0912 23:54:35.897949 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.898460 kubelet[3594]: E0912 23:54:35.898021 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.900510 kubelet[3594]: E0912 23:54:35.900146 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.900510 kubelet[3594]: W0912 23:54:35.900421 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.902101 kubelet[3594]: E0912 23:54:35.901790 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:35.905474 kubelet[3594]: E0912 23:54:35.904888 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.905474 kubelet[3594]: W0912 23:54:35.904929 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.905474 kubelet[3594]: E0912 23:54:35.904974 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.907469 kubelet[3594]: E0912 23:54:35.906855 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.907469 kubelet[3594]: W0912 23:54:35.906894 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.907469 kubelet[3594]: E0912 23:54:35.907021 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.909530 kubelet[3594]: E0912 23:54:35.908862 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.909530 kubelet[3594]: W0912 23:54:35.908898 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.909530 kubelet[3594]: E0912 23:54:35.908942 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.911774 kubelet[3594]: E0912 23:54:35.910931 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.911774 kubelet[3594]: W0912 23:54:35.910974 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.912962 kubelet[3594]: E0912 23:54:35.911612 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.914281 kubelet[3594]: E0912 23:54:35.913557 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.914281 kubelet[3594]: W0912 23:54:35.913594 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.914905 kubelet[3594]: E0912 23:54:35.913981 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:35.916388 kubelet[3594]: E0912 23:54:35.915862 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.916388 kubelet[3594]: W0912 23:54:35.915915 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.916388 kubelet[3594]: E0912 23:54:35.915950 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:35.928134 containerd[2151]: time="2025-09-12T23:54:35.927050734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mpfz2,Uid:e2cc79c9-3bbf-4e17-a80b-ee9aa90e6ca4,Namespace:calico-system,Attempt:0,} returns sandbox id \"64aa8ffdaafde1707ddfaf03fba9cc993f1718177e2833d35187de84ffd3eb22\"" Sep 12 23:54:35.961714 kubelet[3594]: E0912 23:54:35.959479 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:35.962070 kubelet[3594]: W0912 23:54:35.961917 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:35.962387 kubelet[3594]: E0912 23:54:35.962242 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:37.248167 kubelet[3594]: E0912 23:54:37.248059 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vb427" podUID="e874f212-ec82-4dc1-a7f2-b6ff94f1cb99" Sep 12 23:54:37.442877 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4283883188.mount: Deactivated successfully. 
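[Annotation] Every driver-call.go / plugins.go burst above has the same root cause: kubelet probes its FlexVolume plugin directory, finds nodeagent~uds (vendor "nodeagent", driver "uds", per the vendor~driver directory convention), and the uds executable does not exist yet, so the exec fails with "executable file not found" and the empty output then fails JSON decoding ("unexpected end of JSON input"). The noise is benign and typically stops once the driver binary is installed by calico-node's flexvol-driver init container, consistent with the pod2daemon-flexvol image pull that begins just below. For reference, a hypothetical stub of the reply shape kubelet's driver-call.go expects from `<driver> init` (this is NOT Calico's actual uds driver):

```go
// Hypothetical FlexVolume driver stub: kubelet execs the binary with a
// subcommand ("init", "mount", ...) and expects a JSON status object on
// stdout. In the log the binary is absent, so output is "" and JSON
// decoding fails.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s) // marshaling this struct cannot fail
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// "attach": false tells kubelet not to expect attach/detach calls.
		reply(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Any call the driver does not implement reports "Not supported".
	reply(driverStatus{Status: "Not supported"})
}
```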
Sep 12 23:54:38.505503 containerd[2151]: time="2025-09-12T23:54:38.505353527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:54:38.507041 containerd[2151]: time="2025-09-12T23:54:38.506973383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 12 23:54:38.508049 containerd[2151]: time="2025-09-12T23:54:38.507954455Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:54:38.513075 containerd[2151]: time="2025-09-12T23:54:38.513002795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:54:38.514976 containerd[2151]: time="2025-09-12T23:54:38.514907411Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.722364185s" Sep 12 23:54:38.515371 containerd[2151]: time="2025-09-12T23:54:38.515197259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 12 23:54:38.518272 containerd[2151]: time="2025-09-12T23:54:38.517940423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 23:54:38.557450 containerd[2151]: time="2025-09-12T23:54:38.557380283Z" level=info msg="CreateContainer within sandbox \"15b385b68567e968f0521a77c937490d649fabed8e1350a9d33d6b4d5f2970ad\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 23:54:38.579242 containerd[2151]: time="2025-09-12T23:54:38.579035280Z" level=info msg="CreateContainer within sandbox \"15b385b68567e968f0521a77c937490d649fabed8e1350a9d33d6b4d5f2970ad\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"21ed434f4a54761a914fc1e73023ba25b10d2f163b4d83946ed024ac00cb7c4c\"" Sep 12 23:54:38.582219 containerd[2151]: time="2025-09-12T23:54:38.582155088Z" level=info msg="StartContainer for \"21ed434f4a54761a914fc1e73023ba25b10d2f163b4d83946ed024ac00cb7c4c\"" Sep 12 23:54:38.713500 containerd[2151]: time="2025-09-12T23:54:38.713424588Z" level=info msg="StartContainer for \"21ed434f4a54761a914fc1e73023ba25b10d2f163b4d83946ed024ac00cb7c4c\" returns successfully" Sep 12 23:54:39.247469 kubelet[3594]: E0912 23:54:39.247400 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vb427" podUID="e874f212-ec82-4dc1-a7f2-b6ff94f1cb99" Sep 12 23:54:39.533062 kubelet[3594]: I0912 23:54:39.532742 3594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b4d977b8f-dmwst" podStartSLOduration=2.8068778869999997 podStartE2EDuration="5.53272038s" podCreationTimestamp="2025-09-12 23:54:34 +0000 UTC" firstStartedPulling="2025-09-12 23:54:35.790778854 +0000 UTC m=+29.869581714" 
lastFinishedPulling="2025-09-12 23:54:38.516621347 +0000 UTC m=+32.595424207" observedRunningTime="2025-09-12 23:54:39.5324097 +0000 UTC m=+33.611212584" watchObservedRunningTime="2025-09-12 23:54:39.53272038 +0000 UTC m=+33.611523264" Sep 12 23:54:39.584668 kubelet[3594]: E0912 23:54:39.584593 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.584854 kubelet[3594]: W0912 23:54:39.584680 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.584854 kubelet[3594]: E0912 23:54:39.584743 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.585179 kubelet[3594]: E0912 23:54:39.585151 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.585250 kubelet[3594]: W0912 23:54:39.585179 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.585250 kubelet[3594]: E0912 23:54:39.585206 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.585536 kubelet[3594]: E0912 23:54:39.585510 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.585608 kubelet[3594]: W0912 23:54:39.585536 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.585608 kubelet[3594]: E0912 23:54:39.585558 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.585956 kubelet[3594]: E0912 23:54:39.585927 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.586025 kubelet[3594]: W0912 23:54:39.585955 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.586025 kubelet[3594]: E0912 23:54:39.585979 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:39.586399 kubelet[3594]: E0912 23:54:39.586372 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.586463 kubelet[3594]: W0912 23:54:39.586398 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.586463 kubelet[3594]: E0912 23:54:39.586422 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.586803 kubelet[3594]: E0912 23:54:39.586775 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.586872 kubelet[3594]: W0912 23:54:39.586803 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.586872 kubelet[3594]: E0912 23:54:39.586831 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.587245 kubelet[3594]: E0912 23:54:39.587218 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.587309 kubelet[3594]: W0912 23:54:39.587244 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.587309 kubelet[3594]: E0912 23:54:39.587267 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.587617 kubelet[3594]: E0912 23:54:39.587590 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.587617 kubelet[3594]: W0912 23:54:39.587617 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.587835 kubelet[3594]: E0912 23:54:39.587685 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.588100 kubelet[3594]: E0912 23:54:39.588073 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.588179 kubelet[3594]: W0912 23:54:39.588100 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.588179 kubelet[3594]: E0912 23:54:39.588125 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:39.588577 kubelet[3594]: E0912 23:54:39.588547 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.588577 kubelet[3594]: W0912 23:54:39.588576 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.588780 kubelet[3594]: E0912 23:54:39.588602 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.589099 kubelet[3594]: E0912 23:54:39.589061 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.589197 kubelet[3594]: W0912 23:54:39.589098 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.589197 kubelet[3594]: E0912 23:54:39.589130 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.589768 kubelet[3594]: E0912 23:54:39.589729 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.589902 kubelet[3594]: W0912 23:54:39.589768 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.589902 kubelet[3594]: E0912 23:54:39.589800 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.590291 kubelet[3594]: E0912 23:54:39.590255 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.590410 kubelet[3594]: W0912 23:54:39.590290 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.590410 kubelet[3594]: E0912 23:54:39.590320 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.590945 kubelet[3594]: E0912 23:54:39.590905 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.590945 kubelet[3594]: W0912 23:54:39.590942 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.591168 kubelet[3594]: E0912 23:54:39.590975 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:39.591445 kubelet[3594]: E0912 23:54:39.591412 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.591539 kubelet[3594]: W0912 23:54:39.591446 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.591539 kubelet[3594]: E0912 23:54:39.591474 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.633337 kubelet[3594]: E0912 23:54:39.633229 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.633337 kubelet[3594]: W0912 23:54:39.633267 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.633337 kubelet[3594]: E0912 23:54:39.633298 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.634210 kubelet[3594]: E0912 23:54:39.634085 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.634210 kubelet[3594]: W0912 23:54:39.634113 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.634210 kubelet[3594]: E0912 23:54:39.634155 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.634551 kubelet[3594]: E0912 23:54:39.634519 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.634888 kubelet[3594]: W0912 23:54:39.634551 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.634888 kubelet[3594]: E0912 23:54:39.634593 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.635476 kubelet[3594]: E0912 23:54:39.635316 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.635476 kubelet[3594]: W0912 23:54:39.635345 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.635476 kubelet[3594]: E0912 23:54:39.635386 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:39.636296 kubelet[3594]: E0912 23:54:39.636037 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.636296 kubelet[3594]: W0912 23:54:39.636062 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.636296 kubelet[3594]: E0912 23:54:39.636101 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.636863 kubelet[3594]: E0912 23:54:39.636710 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.636863 kubelet[3594]: W0912 23:54:39.636742 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.637052 kubelet[3594]: E0912 23:54:39.636864 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.637958 kubelet[3594]: E0912 23:54:39.637531 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.637958 kubelet[3594]: W0912 23:54:39.637566 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.637958 kubelet[3594]: E0912 23:54:39.637711 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.638363 kubelet[3594]: E0912 23:54:39.638332 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.638510 kubelet[3594]: W0912 23:54:39.638468 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.639059 kubelet[3594]: E0912 23:54:39.638860 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.639342 kubelet[3594]: E0912 23:54:39.639283 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.639342 kubelet[3594]: W0912 23:54:39.639310 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.639778 kubelet[3594]: E0912 23:54:39.639546 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:39.640276 kubelet[3594]: E0912 23:54:39.640041 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.640276 kubelet[3594]: W0912 23:54:39.640071 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.640538 kubelet[3594]: E0912 23:54:39.640505 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.640818 kubelet[3594]: E0912 23:54:39.640792 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.641055 kubelet[3594]: W0912 23:54:39.640920 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.641055 kubelet[3594]: E0912 23:54:39.640969 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.642075 kubelet[3594]: E0912 23:54:39.641850 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.642075 kubelet[3594]: W0912 23:54:39.641881 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.642075 kubelet[3594]: E0912 23:54:39.641925 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.642933 kubelet[3594]: E0912 23:54:39.642704 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.642933 kubelet[3594]: W0912 23:54:39.642736 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.642933 kubelet[3594]: E0912 23:54:39.642793 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.643617 kubelet[3594]: E0912 23:54:39.643387 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.643617 kubelet[3594]: W0912 23:54:39.643411 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.643617 kubelet[3594]: E0912 23:54:39.643451 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:39.644052 kubelet[3594]: E0912 23:54:39.644024 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.644401 kubelet[3594]: W0912 23:54:39.644132 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.644401 kubelet[3594]: E0912 23:54:39.644181 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.644679 kubelet[3594]: E0912 23:54:39.644611 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.644803 kubelet[3594]: W0912 23:54:39.644779 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.644967 kubelet[3594]: E0912 23:54:39.644928 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.645383 kubelet[3594]: E0912 23:54:39.645357 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.645736 kubelet[3594]: W0912 23:54:39.645482 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.645736 kubelet[3594]: E0912 23:54:39.645526 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:54:39.645969 kubelet[3594]: E0912 23:54:39.645946 3594 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:54:39.646071 kubelet[3594]: W0912 23:54:39.646049 3594 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:54:39.646214 kubelet[3594]: E0912 23:54:39.646190 3594 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:54:40.014242 containerd[2151]: time="2025-09-12T23:54:40.014121131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:54:40.016130 containerd[2151]: time="2025-09-12T23:54:40.015724631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 12 23:54:40.018104 containerd[2151]: time="2025-09-12T23:54:40.017356307Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:54:40.022304 containerd[2151]: time="2025-09-12T23:54:40.022222835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:54:40.024572 containerd[2151]: time="2025-09-12T23:54:40.024476051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.506438584s" Sep 12 23:54:40.024572 containerd[2151]: time="2025-09-12T23:54:40.024556991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 12 23:54:40.032598 containerd[2151]: time="2025-09-12T23:54:40.032529839Z" level=info msg="CreateContainer within sandbox \"64aa8ffdaafde1707ddfaf03fba9cc993f1718177e2833d35187de84ffd3eb22\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 23:54:40.059926 containerd[2151]: time="2025-09-12T23:54:40.057461723Z" level=info msg="CreateContainer within sandbox \"64aa8ffdaafde1707ddfaf03fba9cc993f1718177e2833d35187de84ffd3eb22\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"acf4a5278bc52727d10f4f96442db96d686f44de0b3497c34e7bcab239fca3b7\"" Sep 12 23:54:40.063718 containerd[2151]: time="2025-09-12T23:54:40.062327363Z" level=info msg="StartContainer for \"acf4a5278bc52727d10f4f96442db96d686f44de0b3497c34e7bcab239fca3b7\"" Sep 12 23:54:40.208192 containerd[2151]: time="2025-09-12T23:54:40.208072512Z" level=info msg="StartContainer for \"acf4a5278bc52727d10f4f96442db96d686f44de0b3497c34e7bcab239fca3b7\" returns successfully" Sep 12 23:54:40.524681 kubelet[3594]: I0912 23:54:40.523518 3594 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:54:40.744692 containerd[2151]: time="2025-09-12T23:54:40.744163766Z" level=info msg="shim disconnected" id=acf4a5278bc52727d10f4f96442db96d686f44de0b3497c34e7bcab239fca3b7 namespace=k8s.io Sep 12 23:54:40.744692 containerd[2151]: time="2025-09-12T23:54:40.744278054Z" level=warning msg="cleaning up after shim disconnected" id=acf4a5278bc52727d10f4f96442db96d686f44de0b3497c34e7bcab239fca3b7 namespace=k8s.io Sep 12 23:54:40.744692 containerd[2151]: time="2025-09-12T23:54:40.744325562Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:54:40.768261 containerd[2151]: time="2025-09-12T23:54:40.768151982Z" level=warning msg="cleanup 
warnings time=\"2025-09-12T23:54:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 23:54:41.051751 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acf4a5278bc52727d10f4f96442db96d686f44de0b3497c34e7bcab239fca3b7-rootfs.mount: Deactivated successfully. Sep 12 23:54:41.247310 kubelet[3594]: E0912 23:54:41.247201 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vb427" podUID="e874f212-ec82-4dc1-a7f2-b6ff94f1cb99" Sep 12 23:54:41.533663 containerd[2151]: time="2025-09-12T23:54:41.533513774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 23:54:42.545187 kubelet[3594]: I0912 23:54:42.544570 3594 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:54:43.248291 kubelet[3594]: E0912 23:54:43.247692 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vb427" podUID="e874f212-ec82-4dc1-a7f2-b6ff94f1cb99" Sep 12 23:54:45.248465 kubelet[3594]: E0912 23:54:45.248375 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vb427" podUID="e874f212-ec82-4dc1-a7f2-b6ff94f1cb99" Sep 12 23:54:45.816701 containerd[2151]: time="2025-09-12T23:54:45.816141427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:54:45.820613 containerd[2151]: time="2025-09-12T23:54:45.820502395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 12 23:54:45.824846 containerd[2151]: time="2025-09-12T23:54:45.824763787Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:54:45.831626 containerd[2151]: time="2025-09-12T23:54:45.831516788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:54:45.834271 containerd[2151]: time="2025-09-12T23:54:45.834203120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 4.300618102s" Sep 12 23:54:45.834675 containerd[2151]: time="2025-09-12T23:54:45.834446432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 12 23:54:45.840654 containerd[2151]: time="2025-09-12T23:54:45.840436100Z" level=info 
msg="CreateContainer within sandbox \"64aa8ffdaafde1707ddfaf03fba9cc993f1718177e2833d35187de84ffd3eb22\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 23:54:45.878542 containerd[2151]: time="2025-09-12T23:54:45.878314784Z" level=info msg="CreateContainer within sandbox \"64aa8ffdaafde1707ddfaf03fba9cc993f1718177e2833d35187de84ffd3eb22\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1d9ed316c109546f5675e83f83182765f792fd9224c571f262b0d360a42dfa1d\"" Sep 12 23:54:45.881447 containerd[2151]: time="2025-09-12T23:54:45.881258696Z" level=info msg="StartContainer for \"1d9ed316c109546f5675e83f83182765f792fd9224c571f262b0d360a42dfa1d\"" Sep 12 23:54:45.954787 systemd[1]: run-containerd-runc-k8s.io-1d9ed316c109546f5675e83f83182765f792fd9224c571f262b0d360a42dfa1d-runc.pcP9Wq.mount: Deactivated successfully. Sep 12 23:54:46.020447 containerd[2151]: time="2025-09-12T23:54:46.020352232Z" level=info msg="StartContainer for \"1d9ed316c109546f5675e83f83182765f792fd9224c571f262b0d360a42dfa1d\" returns successfully" Sep 12 23:54:47.168904 containerd[2151]: time="2025-09-12T23:54:47.168830406Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:54:47.223983 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d9ed316c109546f5675e83f83182765f792fd9224c571f262b0d360a42dfa1d-rootfs.mount: Deactivated successfully. Sep 12 23:54:47.249775 kubelet[3594]: E0912 23:54:47.248273 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vb427" podUID="e874f212-ec82-4dc1-a7f2-b6ff94f1cb99" Sep 12 23:54:47.260002 kubelet[3594]: I0912 23:54:47.259877 3594 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 23:54:47.422847 kubelet[3594]: I0912 23:54:47.422562 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9eeb1078-74ba-4b83-8069-cea1b65e8744-config-volume\") pod \"coredns-7c65d6cfc9-j88mc\" (UID: \"9eeb1078-74ba-4b83-8069-cea1b65e8744\") " pod="kube-system/coredns-7c65d6cfc9-j88mc" Sep 12 23:54:47.424521 kubelet[3594]: I0912 23:54:47.423471 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79l8s\" (UniqueName: \"kubernetes.io/projected/9f5b3f0c-b02e-481f-a083-c8af4d9dc294-kube-api-access-79l8s\") pod \"calico-apiserver-5c48bb7547-pbxdf\" (UID: \"9f5b3f0c-b02e-481f-a083-c8af4d9dc294\") " pod="calico-apiserver/calico-apiserver-5c48bb7547-pbxdf" Sep 12 23:54:47.426679 kubelet[3594]: I0912 23:54:47.424913 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d62a2d0-ccd7-4178-8371-f2c20fc86ca0-tigera-ca-bundle\") pod \"calico-kube-controllers-5dc46b49f4-xjvcm\" (UID: \"8d62a2d0-ccd7-4178-8371-f2c20fc86ca0\") " pod="calico-system/calico-kube-controllers-5dc46b49f4-xjvcm" Sep 12 23:54:47.428247 kubelet[3594]: I0912 23:54:47.428173 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-sgww6\" (UniqueName: \"kubernetes.io/projected/8d62a2d0-ccd7-4178-8371-f2c20fc86ca0-kube-api-access-sgww6\") pod \"calico-kube-controllers-5dc46b49f4-xjvcm\" (UID: \"8d62a2d0-ccd7-4178-8371-f2c20fc86ca0\") " pod="calico-system/calico-kube-controllers-5dc46b49f4-xjvcm" Sep 12 23:54:47.428247 kubelet[3594]: I0912 23:54:47.428262 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz7gz\" (UniqueName: \"kubernetes.io/projected/9eeb1078-74ba-4b83-8069-cea1b65e8744-kube-api-access-jz7gz\") pod \"coredns-7c65d6cfc9-j88mc\" (UID: \"9eeb1078-74ba-4b83-8069-cea1b65e8744\") " pod="kube-system/coredns-7c65d6cfc9-j88mc" Sep 12 23:54:47.428493 kubelet[3594]: I0912 23:54:47.428312 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9f5b3f0c-b02e-481f-a083-c8af4d9dc294-calico-apiserver-certs\") pod \"calico-apiserver-5c48bb7547-pbxdf\" (UID: \"9f5b3f0c-b02e-481f-a083-c8af4d9dc294\") " pod="calico-apiserver/calico-apiserver-5c48bb7547-pbxdf" Sep 12 23:54:47.529754 kubelet[3594]: I0912 23:54:47.529674 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490-whisker-backend-key-pair\") pod \"whisker-78c4b4c45-vpm9g\" (UID: \"1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490\") " pod="calico-system/whisker-78c4b4c45-vpm9g" Sep 12 23:54:47.529952 kubelet[3594]: I0912 23:54:47.529785 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490-whisker-ca-bundle\") pod \"whisker-78c4b4c45-vpm9g\" (UID: \"1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490\") " pod="calico-system/whisker-78c4b4c45-vpm9g" Sep 12 23:54:47.529952 kubelet[3594]: I0912 23:54:47.529829 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/082bf9af-912b-4ff6-8411-79fadb8bf200-goldmane-ca-bundle\") pod \"goldmane-7988f88666-bhgbj\" (UID: \"082bf9af-912b-4ff6-8411-79fadb8bf200\") " pod="calico-system/goldmane-7988f88666-bhgbj" Sep 12 23:54:47.529952 kubelet[3594]: I0912 23:54:47.529872 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fae242f-71cb-4cc8-a7fa-b06a5787570e-config-volume\") pod \"coredns-7c65d6cfc9-h78v2\" (UID: \"3fae242f-71cb-4cc8-a7fa-b06a5787570e\") " pod="kube-system/coredns-7c65d6cfc9-h78v2" Sep 12 23:54:47.529952 kubelet[3594]: I0912 23:54:47.529949 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbczp\" (UniqueName: \"kubernetes.io/projected/3fae242f-71cb-4cc8-a7fa-b06a5787570e-kube-api-access-gbczp\") pod \"coredns-7c65d6cfc9-h78v2\" (UID: \"3fae242f-71cb-4cc8-a7fa-b06a5787570e\") " pod="kube-system/coredns-7c65d6cfc9-h78v2" Sep 12 23:54:47.530221 kubelet[3594]: I0912 23:54:47.530013 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/082bf9af-912b-4ff6-8411-79fadb8bf200-goldmane-key-pair\") pod \"goldmane-7988f88666-bhgbj\" (UID: \"082bf9af-912b-4ff6-8411-79fadb8bf200\") " 
pod="calico-system/goldmane-7988f88666-bhgbj" Sep 12 23:54:47.530221 kubelet[3594]: I0912 23:54:47.530122 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwfqf\" (UniqueName: \"kubernetes.io/projected/1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490-kube-api-access-bwfqf\") pod \"whisker-78c4b4c45-vpm9g\" (UID: \"1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490\") " pod="calico-system/whisker-78c4b4c45-vpm9g" Sep 12 23:54:47.530221 kubelet[3594]: I0912 23:54:47.530163 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbzdj\" (UniqueName: \"kubernetes.io/projected/082bf9af-912b-4ff6-8411-79fadb8bf200-kube-api-access-gbzdj\") pod \"goldmane-7988f88666-bhgbj\" (UID: \"082bf9af-912b-4ff6-8411-79fadb8bf200\") " pod="calico-system/goldmane-7988f88666-bhgbj" Sep 12 23:54:47.530403 kubelet[3594]: I0912 23:54:47.530240 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/082bf9af-912b-4ff6-8411-79fadb8bf200-config\") pod \"goldmane-7988f88666-bhgbj\" (UID: \"082bf9af-912b-4ff6-8411-79fadb8bf200\") " pod="calico-system/goldmane-7988f88666-bhgbj" Sep 12 23:54:47.530403 kubelet[3594]: I0912 23:54:47.530282 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/077d1d76-d7b8-4b1c-bc6e-9119a67ba30b-calico-apiserver-certs\") pod \"calico-apiserver-5c48bb7547-2nt2f\" (UID: \"077d1d76-d7b8-4b1c-bc6e-9119a67ba30b\") " pod="calico-apiserver/calico-apiserver-5c48bb7547-2nt2f" Sep 12 23:54:47.530403 kubelet[3594]: I0912 23:54:47.530320 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9psq\" (UniqueName: \"kubernetes.io/projected/077d1d76-d7b8-4b1c-bc6e-9119a67ba30b-kube-api-access-g9psq\") pod \"calico-apiserver-5c48bb7547-2nt2f\" (UID: \"077d1d76-d7b8-4b1c-bc6e-9119a67ba30b\") " pod="calico-apiserver/calico-apiserver-5c48bb7547-2nt2f" Sep 12 23:54:47.678622 containerd[2151]: time="2025-09-12T23:54:47.676121889Z" level=info msg="shim disconnected" id=1d9ed316c109546f5675e83f83182765f792fd9224c571f262b0d360a42dfa1d namespace=k8s.io Sep 12 23:54:47.678622 containerd[2151]: time="2025-09-12T23:54:47.676206249Z" level=warning msg="cleaning up after shim disconnected" id=1d9ed316c109546f5675e83f83182765f792fd9224c571f262b0d360a42dfa1d namespace=k8s.io Sep 12 23:54:47.678622 containerd[2151]: time="2025-09-12T23:54:47.676227681Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 23:54:47.693740 containerd[2151]: time="2025-09-12T23:54:47.693184521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j88mc,Uid:9eeb1078-74ba-4b83-8069-cea1b65e8744,Namespace:kube-system,Attempt:0,}" Sep 12 23:54:47.730969 containerd[2151]: time="2025-09-12T23:54:47.730887669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dc46b49f4-xjvcm,Uid:8d62a2d0-ccd7-4178-8371-f2c20fc86ca0,Namespace:calico-system,Attempt:0,}" Sep 12 23:54:47.758372 containerd[2151]: time="2025-09-12T23:54:47.758304225Z" level=warning msg="cleanup warnings time=\"2025-09-12T23:54:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 12 23:54:47.796092 containerd[2151]: 
time="2025-09-12T23:54:47.795012141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h78v2,Uid:3fae242f-71cb-4cc8-a7fa-b06a5787570e,Namespace:kube-system,Attempt:0,}" Sep 12 23:54:47.804032 containerd[2151]: time="2025-09-12T23:54:47.803956761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c48bb7547-2nt2f,Uid:077d1d76-d7b8-4b1c-bc6e-9119a67ba30b,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:54:47.805858 containerd[2151]: time="2025-09-12T23:54:47.805798017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c48bb7547-pbxdf,Uid:9f5b3f0c-b02e-481f-a083-c8af4d9dc294,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:54:47.807497 containerd[2151]: time="2025-09-12T23:54:47.806839677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78c4b4c45-vpm9g,Uid:1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490,Namespace:calico-system,Attempt:0,}" Sep 12 23:54:47.812576 containerd[2151]: time="2025-09-12T23:54:47.811746741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-bhgbj,Uid:082bf9af-912b-4ff6-8411-79fadb8bf200,Namespace:calico-system,Attempt:0,}" Sep 12 23:54:48.179546 containerd[2151]: time="2025-09-12T23:54:48.179474023Z" level=error msg="Failed to destroy network for sandbox \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.186621 containerd[2151]: time="2025-09-12T23:54:48.186546055Z" level=error msg="encountered an error cleaning up failed sandbox \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.187770 containerd[2151]: time="2025-09-12T23:54:48.187343335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dc46b49f4-xjvcm,Uid:8d62a2d0-ccd7-4178-8371-f2c20fc86ca0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.191875 kubelet[3594]: E0912 23:54:48.191063 3594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.191875 kubelet[3594]: E0912 23:54:48.191199 3594 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dc46b49f4-xjvcm" Sep 12 23:54:48.191875 
kubelet[3594]: E0912 23:54:48.191240 3594 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5dc46b49f4-xjvcm" Sep 12 23:54:48.193319 kubelet[3594]: E0912 23:54:48.191339 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5dc46b49f4-xjvcm_calico-system(8d62a2d0-ccd7-4178-8371-f2c20fc86ca0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5dc46b49f4-xjvcm_calico-system(8d62a2d0-ccd7-4178-8371-f2c20fc86ca0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dc46b49f4-xjvcm" podUID="8d62a2d0-ccd7-4178-8371-f2c20fc86ca0" Sep 12 23:54:48.198371 containerd[2151]: time="2025-09-12T23:54:48.197822959Z" level=error msg="Failed to destroy network for sandbox \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.201824 containerd[2151]: time="2025-09-12T23:54:48.201747535Z" level=error msg="encountered an error cleaning up failed sandbox \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.205500 containerd[2151]: time="2025-09-12T23:54:48.203696839Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j88mc,Uid:9eeb1078-74ba-4b83-8069-cea1b65e8744,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.205694 kubelet[3594]: E0912 23:54:48.204055 3594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.205694 kubelet[3594]: E0912 23:54:48.204158 3594 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-j88mc" Sep 12 23:54:48.205694 kubelet[3594]: E0912 23:54:48.204198 3594 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-j88mc" Sep 12 23:54:48.205907 kubelet[3594]: E0912 23:54:48.204265 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-j88mc_kube-system(9eeb1078-74ba-4b83-8069-cea1b65e8744)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-j88mc_kube-system(9eeb1078-74ba-4b83-8069-cea1b65e8744)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-j88mc" podUID="9eeb1078-74ba-4b83-8069-cea1b65e8744" Sep 12 23:54:48.335924 containerd[2151]: time="2025-09-12T23:54:48.335386388Z" level=error msg="Failed to destroy network for sandbox \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.345506 containerd[2151]: time="2025-09-12T23:54:48.339112700Z" level=error msg="encountered an error cleaning up failed sandbox \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.345506 containerd[2151]: time="2025-09-12T23:54:48.339240680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-bhgbj,Uid:082bf9af-912b-4ff6-8411-79fadb8bf200,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.345987 kubelet[3594]: E0912 23:54:48.340916 3594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.345987 kubelet[3594]: E0912 23:54:48.341008 3594 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-bhgbj" Sep 12 23:54:48.345987 kubelet[3594]: E0912 23:54:48.341042 3594 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-bhgbj" Sep 12 23:54:48.348445 kubelet[3594]: E0912 23:54:48.341132 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-bhgbj_calico-system(082bf9af-912b-4ff6-8411-79fadb8bf200)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-bhgbj_calico-system(082bf9af-912b-4ff6-8411-79fadb8bf200)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-bhgbj" podUID="082bf9af-912b-4ff6-8411-79fadb8bf200" Sep 12 23:54:48.347874 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd-shm.mount: Deactivated successfully. Sep 12 23:54:48.370990 containerd[2151]: time="2025-09-12T23:54:48.370909604Z" level=error msg="Failed to destroy network for sandbox \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.379476 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89-shm.mount: Deactivated successfully. 
Sep 12 23:54:48.387982 containerd[2151]: time="2025-09-12T23:54:48.387890696Z" level=error msg="encountered an error cleaning up failed sandbox \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.388501 containerd[2151]: time="2025-09-12T23:54:48.388123856Z" level=error msg="Failed to destroy network for sandbox \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.389962 containerd[2151]: time="2025-09-12T23:54:48.388842260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c48bb7547-pbxdf,Uid:9f5b3f0c-b02e-481f-a083-c8af4d9dc294,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.394716 kubelet[3594]: E0912 23:54:48.394601 3594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.395386 kubelet[3594]: E0912 23:54:48.395039 3594 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c48bb7547-pbxdf" Sep 12 23:54:48.395386 kubelet[3594]: E0912 23:54:48.395117 3594 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c48bb7547-pbxdf" Sep 12 23:54:48.396023 kubelet[3594]: E0912 23:54:48.395243 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c48bb7547-pbxdf_calico-apiserver(9f5b3f0c-b02e-481f-a083-c8af4d9dc294)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c48bb7547-pbxdf_calico-apiserver(9f5b3f0c-b02e-481f-a083-c8af4d9dc294)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c48bb7547-pbxdf" podUID="9f5b3f0c-b02e-481f-a083-c8af4d9dc294" Sep 12 23:54:48.397279 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e-shm.mount: Deactivated successfully. Sep 12 23:54:48.403414 containerd[2151]: time="2025-09-12T23:54:48.402678548Z" level=error msg="encountered an error cleaning up failed sandbox \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.403414 containerd[2151]: time="2025-09-12T23:54:48.402779564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c48bb7547-2nt2f,Uid:077d1d76-d7b8-4b1c-bc6e-9119a67ba30b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.403772 kubelet[3594]: E0912 23:54:48.403099 3594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.403772 kubelet[3594]: E0912 23:54:48.403329 3594 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c48bb7547-2nt2f" Sep 12 23:54:48.404527 kubelet[3594]: E0912 23:54:48.403378 3594 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c48bb7547-2nt2f" Sep 12 23:54:48.404527 kubelet[3594]: E0912 23:54:48.404137 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c48bb7547-2nt2f_calico-apiserver(077d1d76-d7b8-4b1c-bc6e-9119a67ba30b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c48bb7547-2nt2f_calico-apiserver(077d1d76-d7b8-4b1c-bc6e-9119a67ba30b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5c48bb7547-2nt2f" podUID="077d1d76-d7b8-4b1c-bc6e-9119a67ba30b" Sep 12 23:54:48.415212 containerd[2151]: time="2025-09-12T23:54:48.415000388Z" level=error msg="Failed to destroy network for sandbox \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.423695 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00-shm.mount: Deactivated successfully. Sep 12 23:54:48.424286 containerd[2151]: time="2025-09-12T23:54:48.422711576Z" level=error msg="encountered an error cleaning up failed sandbox \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.424841 containerd[2151]: time="2025-09-12T23:54:48.424471556Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78c4b4c45-vpm9g,Uid:1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.428030 kubelet[3594]: E0912 23:54:48.426277 3594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.428030 kubelet[3594]: E0912 23:54:48.426372 3594 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78c4b4c45-vpm9g" Sep 12 23:54:48.428030 kubelet[3594]: E0912 23:54:48.426407 3594 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78c4b4c45-vpm9g" Sep 12 23:54:48.428336 kubelet[3594]: E0912 23:54:48.426481 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-78c4b4c45-vpm9g_calico-system(1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-78c4b4c45-vpm9g_calico-system(1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78c4b4c45-vpm9g" podUID="1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490" Sep 12 23:54:48.443252 containerd[2151]: time="2025-09-12T23:54:48.443039024Z" level=error msg="Failed to destroy network for sandbox \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.446028 containerd[2151]: time="2025-09-12T23:54:48.445508325Z" level=error msg="encountered an error cleaning up failed sandbox \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.446028 containerd[2151]: time="2025-09-12T23:54:48.445853613Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h78v2,Uid:3fae242f-71cb-4cc8-a7fa-b06a5787570e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.447410 kubelet[3594]: E0912 23:54:48.446690 3594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.447410 kubelet[3594]: E0912 23:54:48.446798 3594 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-h78v2" Sep 12 23:54:48.447410 kubelet[3594]: E0912 23:54:48.446855 3594 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-h78v2" Sep 12 23:54:48.447801 kubelet[3594]: E0912 23:54:48.446958 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-h78v2_kube-system(3fae242f-71cb-4cc8-a7fa-b06a5787570e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-h78v2_kube-system(3fae242f-71cb-4cc8-a7fa-b06a5787570e)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-h78v2" podUID="3fae242f-71cb-4cc8-a7fa-b06a5787570e" Sep 12 23:54:48.572298 containerd[2151]: time="2025-09-12T23:54:48.570916653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 23:54:48.580249 kubelet[3594]: I0912 23:54:48.580068 3594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Sep 12 23:54:48.584527 kubelet[3594]: I0912 23:54:48.584365 3594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Sep 12 23:54:48.589475 containerd[2151]: time="2025-09-12T23:54:48.587259333Z" level=info msg="StopPodSandbox for \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\"" Sep 12 23:54:48.589475 containerd[2151]: time="2025-09-12T23:54:48.587399385Z" level=info msg="StopPodSandbox for \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\"" Sep 12 23:54:48.589475 containerd[2151]: time="2025-09-12T23:54:48.587562897Z" level=info msg="Ensure that sandbox c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb in task-service has been cleanup successfully" Sep 12 23:54:48.589475 containerd[2151]: time="2025-09-12T23:54:48.587674725Z" level=info msg="Ensure that sandbox 75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4 in task-service has been cleanup successfully" Sep 12 23:54:48.601370 kubelet[3594]: I0912 23:54:48.601307 3594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Sep 12 23:54:48.614154 containerd[2151]: time="2025-09-12T23:54:48.613685469Z" level=info msg="StopPodSandbox for \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\"" Sep 12 23:54:48.616372 containerd[2151]: time="2025-09-12T23:54:48.616294701Z" level=info msg="Ensure that sandbox 99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c in task-service has been cleanup successfully" Sep 12 23:54:48.624675 kubelet[3594]: I0912 23:54:48.624454 3594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Sep 12 23:54:48.630669 containerd[2151]: time="2025-09-12T23:54:48.630263049Z" level=info msg="StopPodSandbox for \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\"" Sep 12 23:54:48.631751 containerd[2151]: time="2025-09-12T23:54:48.630609657Z" level=info msg="Ensure that sandbox f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00 in task-service has been cleanup successfully" Sep 12 23:54:48.644491 kubelet[3594]: I0912 23:54:48.644422 3594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Sep 12 23:54:48.653021 containerd[2151]: time="2025-09-12T23:54:48.652686202Z" level=info msg="StopPodSandbox for \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\"" Sep 12 23:54:48.654995 containerd[2151]: time="2025-09-12T23:54:48.654800278Z" level=info msg="Ensure that 
sandbox ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e in task-service has been cleanup successfully" Sep 12 23:54:48.656768 kubelet[3594]: I0912 23:54:48.656085 3594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Sep 12 23:54:48.660687 containerd[2151]: time="2025-09-12T23:54:48.660449854Z" level=info msg="StopPodSandbox for \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\"" Sep 12 23:54:48.664753 containerd[2151]: time="2025-09-12T23:54:48.663869074Z" level=info msg="Ensure that sandbox 15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89 in task-service has been cleanup successfully" Sep 12 23:54:48.677386 kubelet[3594]: I0912 23:54:48.675778 3594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Sep 12 23:54:48.678283 containerd[2151]: time="2025-09-12T23:54:48.677782174Z" level=info msg="StopPodSandbox for \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\"" Sep 12 23:54:48.684974 containerd[2151]: time="2025-09-12T23:54:48.684866758Z" level=info msg="Ensure that sandbox 559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd in task-service has been cleanup successfully" Sep 12 23:54:48.798989 containerd[2151]: time="2025-09-12T23:54:48.798895138Z" level=error msg="StopPodSandbox for \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\" failed" error="failed to destroy network for sandbox \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.799476 kubelet[3594]: E0912 23:54:48.799417 3594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Sep 12 23:54:48.799902 kubelet[3594]: E0912 23:54:48.799810 3594 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4"} Sep 12 23:54:48.800264 kubelet[3594]: E0912 23:54:48.800141 3594 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3fae242f-71cb-4cc8-a7fa-b06a5787570e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:54:48.800264 kubelet[3594]: E0912 23:54:48.800198 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3fae242f-71cb-4cc8-a7fa-b06a5787570e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-h78v2" podUID="3fae242f-71cb-4cc8-a7fa-b06a5787570e" Sep 12 23:54:48.837330 containerd[2151]: time="2025-09-12T23:54:48.837125182Z" level=error msg="StopPodSandbox for \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\" failed" error="failed to destroy network for sandbox \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.837540 kubelet[3594]: E0912 23:54:48.837450 3594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Sep 12 23:54:48.837540 kubelet[3594]: E0912 23:54:48.837518 3594 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb"} Sep 12 23:54:48.837925 kubelet[3594]: E0912 23:54:48.837574 3594 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9eeb1078-74ba-4b83-8069-cea1b65e8744\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:54:48.837925 kubelet[3594]: E0912 23:54:48.837618 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9eeb1078-74ba-4b83-8069-cea1b65e8744\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-j88mc" podUID="9eeb1078-74ba-4b83-8069-cea1b65e8744" Sep 12 23:54:48.867402 containerd[2151]: time="2025-09-12T23:54:48.866977907Z" level=error msg="StopPodSandbox for \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\" failed" error="failed to destroy network for sandbox \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.867578 kubelet[3594]: E0912 23:54:48.867411 3594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Sep 12 23:54:48.867578 kubelet[3594]: E0912 23:54:48.867485 3594 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00"} Sep 12 23:54:48.867578 kubelet[3594]: E0912 23:54:48.867543 3594 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:54:48.867948 kubelet[3594]: E0912 23:54:48.867606 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78c4b4c45-vpm9g" podUID="1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490" Sep 12 23:54:48.868671 containerd[2151]: time="2025-09-12T23:54:48.868536719Z" level=error msg="StopPodSandbox for \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\" failed" error="failed to destroy network for sandbox \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.869301 kubelet[3594]: E0912 23:54:48.868995 3594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Sep 12 23:54:48.869301 kubelet[3594]: E0912 23:54:48.869083 3594 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd"} Sep 12 23:54:48.869301 kubelet[3594]: E0912 23:54:48.869158 3594 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"082bf9af-912b-4ff6-8411-79fadb8bf200\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:54:48.869301 kubelet[3594]: E0912 23:54:48.869209 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"082bf9af-912b-4ff6-8411-79fadb8bf200\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-bhgbj" podUID="082bf9af-912b-4ff6-8411-79fadb8bf200" Sep 12 23:54:48.869979 containerd[2151]: time="2025-09-12T23:54:48.869524703Z" level=error msg="StopPodSandbox for \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\" failed" error="failed to destroy network for sandbox \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.870523 kubelet[3594]: E0912 23:54:48.870358 3594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Sep 12 23:54:48.870973 kubelet[3594]: E0912 23:54:48.870446 3594 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c"} Sep 12 23:54:48.870973 kubelet[3594]: E0912 23:54:48.870807 3594 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d62a2d0-ccd7-4178-8371-f2c20fc86ca0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:54:48.870973 kubelet[3594]: E0912 23:54:48.870857 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d62a2d0-ccd7-4178-8371-f2c20fc86ca0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5dc46b49f4-xjvcm" podUID="8d62a2d0-ccd7-4178-8371-f2c20fc86ca0" Sep 12 23:54:48.877663 containerd[2151]: time="2025-09-12T23:54:48.877548443Z" level=error msg="StopPodSandbox for \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\" failed" error="failed to destroy network for sandbox \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.877966 kubelet[3594]: E0912 23:54:48.877900 3594 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Sep 12 23:54:48.878083 kubelet[3594]: E0912 23:54:48.877985 3594 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e"} Sep 12 23:54:48.878083 kubelet[3594]: E0912 23:54:48.878044 3594 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"077d1d76-d7b8-4b1c-bc6e-9119a67ba30b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:54:48.878323 kubelet[3594]: E0912 23:54:48.878085 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"077d1d76-d7b8-4b1c-bc6e-9119a67ba30b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c48bb7547-2nt2f" podUID="077d1d76-d7b8-4b1c-bc6e-9119a67ba30b" Sep 12 23:54:48.887701 containerd[2151]: time="2025-09-12T23:54:48.887500403Z" level=error msg="StopPodSandbox for \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\" failed" error="failed to destroy network for sandbox \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:54:48.888419 kubelet[3594]: E0912 23:54:48.888017 3594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Sep 12 23:54:48.888419 kubelet[3594]: E0912 23:54:48.888114 3594 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89"} Sep 12 23:54:48.888419 kubelet[3594]: E0912 23:54:48.888171 3594 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f5b3f0c-b02e-481f-a083-c8af4d9dc294\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 12 23:54:48.888419 kubelet[3594]: E0912 23:54:48.888217 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f5b3f0c-b02e-481f-a083-c8af4d9dc294\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c48bb7547-pbxdf" podUID="9f5b3f0c-b02e-481f-a083-c8af4d9dc294"
Sep 12 23:54:49.221084 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4-shm.mount: Deactivated successfully.
Sep 12 23:54:49.255087 containerd[2151]: time="2025-09-12T23:54:49.254986893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vb427,Uid:e874f212-ec82-4dc1-a7f2-b6ff94f1cb99,Namespace:calico-system,Attempt:0,}"
Sep 12 23:54:49.398740 containerd[2151]: time="2025-09-12T23:54:49.398606133Z" level=error msg="Failed to destroy network for sandbox \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 23:54:49.402422 containerd[2151]: time="2025-09-12T23:54:49.402182841Z" level=error msg="encountered an error cleaning up failed sandbox \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 23:54:49.402422 containerd[2151]: time="2025-09-12T23:54:49.402339141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vb427,Uid:e874f212-ec82-4dc1-a7f2-b6ff94f1cb99,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 23:54:49.403148 kubelet[3594]: E0912 23:54:49.403048 3594 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 23:54:49.403897 kubelet[3594]: E0912 23:54:49.403171 3594 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vb427"
Sep 12 23:54:49.403897 kubelet[3594]: E0912 23:54:49.403209 3594 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vb427"
Sep 12 23:54:49.403897 kubelet[3594]: E0912 23:54:49.403283 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vb427_calico-system(e874f212-ec82-4dc1-a7f2-b6ff94f1cb99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vb427_calico-system(e874f212-ec82-4dc1-a7f2-b6ff94f1cb99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vb427" podUID="e874f212-ec82-4dc1-a7f2-b6ff94f1cb99"
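The burst above shows the characteristic fan-out of a single CRI failure: containerd returns one RPC error, and kubelet logs it at log.go:32, kuberuntime_sandbox.go:72, kuberuntime_manager.go:1170, and pod_workers.go:1301, each layer wrapping the previous one and re-escaping its quotes. A toy reconstruction of that wrapping, with placeholder function names rather than kubelet's real call chain:

    // Minimal sketch of why one CRI failure fans out into several log lines:
    // each layer wraps the same root error with fmt.Errorf("%w") and logs it.
    package main

    import (
        "errors"
        "fmt"
    )

    var errNodename = errors.New(
        "stat /var/lib/calico/nodename: no such file or directory")

    func runPodSandbox() error { // runtime service (containerd) level
        return fmt.Errorf("rpc error: code = Unknown desc = failed to setup network for sandbox: %w", errNodename)
    }

    func createSandbox() error { // kuberuntime_sandbox / kuberuntime_manager level
        return fmt.Errorf("CreatePodSandbox for pod failed: %w", runPodSandbox())
    }

    func syncPod() error { // pod_workers level: logs, then requeues the pod
        return fmt.Errorf("error syncing pod, skipping: %w", createSandbox())
    }

    func main() {
        err := syncPod()
        fmt.Println(err)                         // the full wrapped chain
        fmt.Println(errors.Is(err, errNodename)) // true: one root cause
    }

pod_workers then requeues the pod for a later sync attempt, which is why the identical burst recurs for each affected pod instead of looping hot.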
Sep 12 23:54:49.406159 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d-shm.mount: Deactivated successfully.
Sep 12 23:54:49.680759 kubelet[3594]: I0912 23:54:49.680704 3594 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d"
Sep 12 23:54:49.682135 containerd[2151]: time="2025-09-12T23:54:49.681969983Z" level=info msg="StopPodSandbox for \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\""
Sep 12 23:54:49.682135 containerd[2151]: time="2025-09-12T23:54:49.682457003Z" level=info msg="Ensure that sandbox c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d in task-service has been cleanup successfully"
Sep 12 23:54:49.727434 containerd[2151]: time="2025-09-12T23:54:49.727324379Z" level=error msg="StopPodSandbox for \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\" failed" error="failed to destroy network for sandbox \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 23:54:49.728014 kubelet[3594]: E0912 23:54:49.727918 3594 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d"
Sep 12 23:54:49.728144 kubelet[3594]: E0912 23:54:49.728066 3594 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d"}
Sep 12 23:54:49.728201 kubelet[3594]: E0912 23:54:49.728165 3594 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e874f212-ec82-4dc1-a7f2-b6ff94f1cb99\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 23:54:49.728333 kubelet[3594]: E0912 23:54:49.728242 3594 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e874f212-ec82-4dc1-a7f2-b6ff94f1cb99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vb427" podUID="e874f212-ec82-4dc1-a7f2-b6ff94f1cb99" Sep 12 23:54:57.173287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2373818259.mount: Deactivated successfully. Sep 12 23:54:57.231621 containerd[2151]: time="2025-09-12T23:54:57.229221928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:54:57.231621 containerd[2151]: time="2025-09-12T23:54:57.230408368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 23:54:57.232586 containerd[2151]: time="2025-09-12T23:54:57.232534912Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:54:57.238252 containerd[2151]: time="2025-09-12T23:54:57.238174720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:54:57.242328 containerd[2151]: time="2025-09-12T23:54:57.242259820Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 8.671271659s" Sep 12 23:54:57.242566 containerd[2151]: time="2025-09-12T23:54:57.242534344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 23:54:57.290994 containerd[2151]: time="2025-09-12T23:54:57.290123896Z" level=info msg="CreateContainer within sandbox \"64aa8ffdaafde1707ddfaf03fba9cc993f1718177e2833d35187de84ffd3eb22\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 23:54:57.321929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2445223418.mount: Deactivated successfully. 
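The pull that unblocks the node completes here: 151,100,457 bytes of ghcr.io/flatcar/calico/node:v3.30.3 in a reported 8.671271659s, which agrees to within about 79 microseconds with the gap between the PullImage entry above (23:54:48.570909604Z) and this Pulled entry (23:54:57.242259820Z). A quick cross-check of that arithmetic:

    // Sanity check of the reported pull duration against the two log
    // timestamps (not a containerd API, just time arithmetic).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start, _ := time.Parse(time.RFC3339Nano, "2025-09-12T23:54:48.570909604Z") // PullImage logged
        done, _ := time.Parse(time.RFC3339Nano, "2025-09-12T23:54:57.242259820Z")  // Pulled logged
        // Prints 8.671350216s; the ~79µs difference from the logged
        // 8.671271659s is just where containerd starts its timer
        // relative to emitting the log line.
        fmt.Println(done.Sub(start))
    }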
Sep 12 23:54:57.325079 containerd[2151]: time="2025-09-12T23:54:57.324892865Z" level=info msg="CreateContainer within sandbox \"64aa8ffdaafde1707ddfaf03fba9cc993f1718177e2833d35187de84ffd3eb22\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8d39eef57adf2b3ccbd669aaa80bfdccab8ea16bd5d927ad23c024045a6a83b2\""
Sep 12 23:54:57.330137 containerd[2151]: time="2025-09-12T23:54:57.330085733Z" level=info msg="StartContainer for \"8d39eef57adf2b3ccbd669aaa80bfdccab8ea16bd5d927ad23c024045a6a83b2\""
Sep 12 23:54:57.517694 containerd[2151]: time="2025-09-12T23:54:57.517470402Z" level=info msg="StartContainer for \"8d39eef57adf2b3ccbd669aaa80bfdccab8ea16bd5d927ad23c024045a6a83b2\" returns successfully"
Sep 12 23:54:57.750501 kubelet[3594]: I0912 23:54:57.749045 3594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mpfz2" podStartSLOduration=1.435899829 podStartE2EDuration="22.749021251s" podCreationTimestamp="2025-09-12 23:54:35 +0000 UTC" firstStartedPulling="2025-09-12 23:54:35.93099175 +0000 UTC m=+30.009794610" lastFinishedPulling="2025-09-12 23:54:57.244113172 +0000 UTC m=+51.322916032" observedRunningTime="2025-09-12 23:54:57.747842887 +0000 UTC m=+51.826645759" watchObservedRunningTime="2025-09-12 23:54:57.749021251 +0000 UTC m=+51.827824111"
Sep 12 23:54:58.027548 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 12 23:54:58.027770 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Sep 12 23:54:58.270100 containerd[2151]: time="2025-09-12T23:54:58.269938193Z" level=info msg="StopPodSandbox for \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\""
Sep 12 23:54:58.771763 containerd[2151]: 2025-09-12 23:54:58.540 [INFO][4806] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00"
Sep 12 23:54:58.771763 containerd[2151]: 2025-09-12 23:54:58.541 [INFO][4806] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" iface="eth0" netns="/var/run/netns/cni-17bc0c12-5d0e-fd00-1d6f-6befc081d68f"
Sep 12 23:54:58.771763 containerd[2151]: 2025-09-12 23:54:58.544 [INFO][4806] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" iface="eth0" netns="/var/run/netns/cni-17bc0c12-5d0e-fd00-1d6f-6befc081d68f"
Sep 12 23:54:58.771763 containerd[2151]: 2025-09-12 23:54:58.547 [INFO][4806] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" iface="eth0" netns="/var/run/netns/cni-17bc0c12-5d0e-fd00-1d6f-6befc081d68f"
ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" iface="eth0" netns="/var/run/netns/cni-17bc0c12-5d0e-fd00-1d6f-6befc081d68f" Sep 12 23:54:58.771763 containerd[2151]: 2025-09-12 23:54:58.548 [INFO][4806] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Sep 12 23:54:58.771763 containerd[2151]: 2025-09-12 23:54:58.548 [INFO][4806] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Sep 12 23:54:58.771763 containerd[2151]: 2025-09-12 23:54:58.684 [INFO][4814] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" HandleID="k8s-pod-network.f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Workload="ip--172--31--18--203-k8s-whisker--78c4b4c45--vpm9g-eth0" Sep 12 23:54:58.771763 containerd[2151]: 2025-09-12 23:54:58.684 [INFO][4814] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:54:58.771763 containerd[2151]: 2025-09-12 23:54:58.685 [INFO][4814] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:54:58.771763 containerd[2151]: 2025-09-12 23:54:58.706 [WARNING][4814] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" HandleID="k8s-pod-network.f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Workload="ip--172--31--18--203-k8s-whisker--78c4b4c45--vpm9g-eth0" Sep 12 23:54:58.771763 containerd[2151]: 2025-09-12 23:54:58.707 [INFO][4814] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" HandleID="k8s-pod-network.f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Workload="ip--172--31--18--203-k8s-whisker--78c4b4c45--vpm9g-eth0" Sep 12 23:54:58.771763 containerd[2151]: 2025-09-12 23:54:58.711 [INFO][4814] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:54:58.771763 containerd[2151]: 2025-09-12 23:54:58.750 [INFO][4806] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Sep 12 23:54:58.779405 containerd[2151]: time="2025-09-12T23:54:58.773674856Z" level=info msg="TearDown network for sandbox \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\" successfully" Sep 12 23:54:58.779405 containerd[2151]: time="2025-09-12T23:54:58.773729672Z" level=info msg="StopPodSandbox for \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\" returns successfully" Sep 12 23:54:58.783545 systemd[1]: run-netns-cni\x2d17bc0c12\x2d5d0e\x2dfd00\x2d1d6f\x2d6befc081d68f.mount: Deactivated successfully. Sep 12 23:54:58.879044 systemd[1]: run-containerd-runc-k8s.io-8d39eef57adf2b3ccbd669aaa80bfdccab8ea16bd5d927ad23c024045a6a83b2-runc.7wd0SV.mount: Deactivated successfully. 
Sep 12 23:54:58.972844 kubelet[3594]: I0912 23:54:58.972759 3594 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwfqf\" (UniqueName: \"kubernetes.io/projected/1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490-kube-api-access-bwfqf\") pod \"1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490\" (UID: \"1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490\") "
Sep 12 23:54:58.973538 kubelet[3594]: I0912 23:54:58.972856 3594 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490-whisker-backend-key-pair\") pod \"1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490\" (UID: \"1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490\") "
Sep 12 23:54:58.973538 kubelet[3594]: I0912 23:54:58.972911 3594 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490-whisker-ca-bundle\") pod \"1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490\" (UID: \"1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490\") "
Sep 12 23:54:58.977697 kubelet[3594]: I0912 23:54:58.976425 3594 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490" (UID: "1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 12 23:54:58.986935 systemd[1]: var-lib-kubelet-pods-1dfebf8b\x2db269\x2d4b21\x2dbcfc\x2d5b6f6c7bd490-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbwfqf.mount: Deactivated successfully.
Sep 12 23:54:58.991788 kubelet[3594]: I0912 23:54:58.990863 3594 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490" (UID: "1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 12 23:54:58.996037 kubelet[3594]: I0912 23:54:58.995939 3594 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490-kube-api-access-bwfqf" (OuterVolumeSpecName: "kube-api-access-bwfqf") pod "1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490" (UID: "1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490"). InnerVolumeSpecName "kube-api-access-bwfqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
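The mount unit names systemd logs here are kubelet's volume paths run through systemd's unit-name escaping: '/' becomes '-', and bytes outside [a-zA-Z0-9:_.] become \xNN, so the dashes in the pod UID appear as \x2d and the '~' in kubernetes.io~projected as \x7e. A simplified re-implementation of that escaping (see systemd.unit(5) and systemd-escape(1); corner cases such as a leading '.' are ignored here):

    // Simplified systemd path escaping, enough to decode the mount-unit
    // names in these log lines.
    package main

    import (
        "fmt"
        "strings"
    )

    func escapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-') // path separators become dashes
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                b.WriteByte(c) // safe bytes pass through
            default:
                fmt.Fprintf(&b, `\x%02x`, c) // everything else is hex-escaped
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(escapePath("/var/lib/kubelet/pods/1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490/volumes/kubernetes.io~projected/kube-api-access-bwfqf") + ".mount")
    }

Running it on the projected-token path prints exactly the unit name in the "Deactivated successfully" line above.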
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 23:54:59.074170 kubelet[3594]: I0912 23:54:59.073987 3594 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwfqf\" (UniqueName: \"kubernetes.io/projected/1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490-kube-api-access-bwfqf\") on node \"ip-172-31-18-203\" DevicePath \"\"" Sep 12 23:54:59.074170 kubelet[3594]: I0912 23:54:59.074060 3594 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490-whisker-backend-key-pair\") on node \"ip-172-31-18-203\" DevicePath \"\"" Sep 12 23:54:59.074170 kubelet[3594]: I0912 23:54:59.074089 3594 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490-whisker-ca-bundle\") on node \"ip-172-31-18-203\" DevicePath \"\"" Sep 12 23:54:59.174121 systemd[1]: var-lib-kubelet-pods-1dfebf8b\x2db269\x2d4b21\x2dbcfc\x2d5b6f6c7bd490-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 23:54:59.982895 kubelet[3594]: I0912 23:54:59.982822 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ba1ff068-6af0-4643-baf0-831b7f97a0c7-whisker-backend-key-pair\") pod \"whisker-6d9cf74dd-xfzvz\" (UID: \"ba1ff068-6af0-4643-baf0-831b7f97a0c7\") " pod="calico-system/whisker-6d9cf74dd-xfzvz" Sep 12 23:54:59.982895 kubelet[3594]: I0912 23:54:59.982972 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba1ff068-6af0-4643-baf0-831b7f97a0c7-whisker-ca-bundle\") pod \"whisker-6d9cf74dd-xfzvz\" (UID: \"ba1ff068-6af0-4643-baf0-831b7f97a0c7\") " pod="calico-system/whisker-6d9cf74dd-xfzvz" Sep 12 23:54:59.982895 kubelet[3594]: I0912 23:54:59.983039 3594 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grqn4\" (UniqueName: \"kubernetes.io/projected/ba1ff068-6af0-4643-baf0-831b7f97a0c7-kube-api-access-grqn4\") pod \"whisker-6d9cf74dd-xfzvz\" (UID: \"ba1ff068-6af0-4643-baf0-831b7f97a0c7\") " pod="calico-system/whisker-6d9cf74dd-xfzvz" Sep 12 23:55:00.149988 containerd[2151]: time="2025-09-12T23:55:00.145213843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d9cf74dd-xfzvz,Uid:ba1ff068-6af0-4643-baf0-831b7f97a0c7,Namespace:calico-system,Attempt:0,}" Sep 12 23:55:00.257864 containerd[2151]: time="2025-09-12T23:55:00.255671287Z" level=info msg="StopPodSandbox for \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\"" Sep 12 23:55:00.285219 kubelet[3594]: I0912 23:55:00.281557 3594 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490" path="/var/lib/kubelet/pods/1dfebf8b-b269-4b21-bcfc-5b6f6c7bd490/volumes" Sep 12 23:55:00.285526 containerd[2151]: time="2025-09-12T23:55:00.284264011Z" level=info msg="StopPodSandbox for \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\"" Sep 12 23:55:01.056044 containerd[2151]: 2025-09-12 23:55:00.632 [INFO][4914] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Sep 12 23:55:01.056044 containerd[2151]: 2025-09-12 23:55:00.634 [INFO][4914] cni-plugin/dataplane_linux.go 559: Deleting workload's 
device in netns. ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" iface="eth0" netns="/var/run/netns/cni-723a7a7a-2f5f-a437-fb69-b6de00f4fbd9" Sep 12 23:55:01.056044 containerd[2151]: 2025-09-12 23:55:00.637 [INFO][4914] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" iface="eth0" netns="/var/run/netns/cni-723a7a7a-2f5f-a437-fb69-b6de00f4fbd9" Sep 12 23:55:01.056044 containerd[2151]: 2025-09-12 23:55:00.638 [INFO][4914] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" iface="eth0" netns="/var/run/netns/cni-723a7a7a-2f5f-a437-fb69-b6de00f4fbd9" Sep 12 23:55:01.056044 containerd[2151]: 2025-09-12 23:55:00.638 [INFO][4914] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Sep 12 23:55:01.056044 containerd[2151]: 2025-09-12 23:55:00.638 [INFO][4914] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Sep 12 23:55:01.056044 containerd[2151]: 2025-09-12 23:55:00.899 [INFO][4986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" HandleID="k8s-pod-network.99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Workload="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" Sep 12 23:55:01.056044 containerd[2151]: 2025-09-12 23:55:00.903 [INFO][4986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:01.056044 containerd[2151]: 2025-09-12 23:55:01.002 [INFO][4986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:01.056044 containerd[2151]: 2025-09-12 23:55:01.025 [WARNING][4986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" HandleID="k8s-pod-network.99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Workload="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" Sep 12 23:55:01.056044 containerd[2151]: 2025-09-12 23:55:01.025 [INFO][4986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" HandleID="k8s-pod-network.99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Workload="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" Sep 12 23:55:01.056044 containerd[2151]: 2025-09-12 23:55:01.032 [INFO][4986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:01.056044 containerd[2151]: 2025-09-12 23:55:01.040 [INFO][4914] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Sep 12 23:55:01.071668 containerd[2151]: time="2025-09-12T23:55:01.067061311Z" level=info msg="TearDown network for sandbox \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\" successfully" Sep 12 23:55:01.071668 containerd[2151]: time="2025-09-12T23:55:01.067117555Z" level=info msg="StopPodSandbox for \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\" returns successfully" Sep 12 23:55:01.070551 systemd[1]: run-netns-cni\x2d723a7a7a\x2d2f5f\x2da437\x2dfb69\x2db6de00f4fbd9.mount: Deactivated successfully. Sep 12 23:55:01.076176 containerd[2151]: time="2025-09-12T23:55:01.075512635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dc46b49f4-xjvcm,Uid:8d62a2d0-ccd7-4178-8371-f2c20fc86ca0,Namespace:calico-system,Attempt:1,}" Sep 12 23:55:01.076922 systemd-networkd[1694]: calic8481f5e80c: Link UP Sep 12 23:55:01.080869 systemd-networkd[1694]: calic8481f5e80c: Gained carrier Sep 12 23:55:01.100830 containerd[2151]: 2025-09-12 23:55:00.665 [INFO][4920] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Sep 12 23:55:01.100830 containerd[2151]: 2025-09-12 23:55:00.667 [INFO][4920] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" iface="eth0" netns="/var/run/netns/cni-f5e68d5a-7a20-108e-bf6b-734bf7267d3d" Sep 12 23:55:01.100830 containerd[2151]: 2025-09-12 23:55:00.668 [INFO][4920] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" iface="eth0" netns="/var/run/netns/cni-f5e68d5a-7a20-108e-bf6b-734bf7267d3d" Sep 12 23:55:01.100830 containerd[2151]: 2025-09-12 23:55:00.669 [INFO][4920] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" iface="eth0" netns="/var/run/netns/cni-f5e68d5a-7a20-108e-bf6b-734bf7267d3d" Sep 12 23:55:01.100830 containerd[2151]: 2025-09-12 23:55:00.669 [INFO][4920] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Sep 12 23:55:01.100830 containerd[2151]: 2025-09-12 23:55:00.669 [INFO][4920] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Sep 12 23:55:01.100830 containerd[2151]: 2025-09-12 23:55:00.947 [INFO][4993] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" HandleID="k8s-pod-network.c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" Sep 12 23:55:01.100830 containerd[2151]: 2025-09-12 23:55:00.949 [INFO][4993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:01.100830 containerd[2151]: 2025-09-12 23:55:01.033 [INFO][4993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:01.100830 containerd[2151]: 2025-09-12 23:55:01.060 [WARNING][4993] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" HandleID="k8s-pod-network.c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" Sep 12 23:55:01.100830 containerd[2151]: 2025-09-12 23:55:01.060 [INFO][4993] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" HandleID="k8s-pod-network.c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" Sep 12 23:55:01.100830 containerd[2151]: 2025-09-12 23:55:01.072 [INFO][4993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:01.100830 containerd[2151]: 2025-09-12 23:55:01.086 [INFO][4920] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Sep 12 23:55:01.120589 systemd[1]: run-netns-cni\x2df5e68d5a\x2d7a20\x2d108e\x2dbf6b\x2d734bf7267d3d.mount: Deactivated successfully. Sep 12 23:55:01.127780 (udev-worker)[5014]: Network interface NamePolicy= disabled on kernel command line. Sep 12 23:55:01.147202 containerd[2151]: time="2025-09-12T23:55:01.132363176Z" level=info msg="TearDown network for sandbox \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\" successfully" Sep 12 23:55:01.147202 containerd[2151]: time="2025-09-12T23:55:01.132416252Z" level=info msg="StopPodSandbox for \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\" returns successfully" Sep 12 23:55:01.147202 containerd[2151]: time="2025-09-12T23:55:01.145127456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j88mc,Uid:9eeb1078-74ba-4b83-8069-cea1b65e8744,Namespace:kube-system,Attempt:1,}" Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:00.507 [INFO][4882] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:00.578 [INFO][4882] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--203-k8s-whisker--6d9cf74dd--xfzvz-eth0 whisker-6d9cf74dd- calico-system ba1ff068-6af0-4643-baf0-831b7f97a0c7 945 0 2025-09-12 23:54:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d9cf74dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-203 whisker-6d9cf74dd-xfzvz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic8481f5e80c [] [] }} ContainerID="c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" Namespace="calico-system" Pod="whisker-6d9cf74dd-xfzvz" WorkloadEndpoint="ip--172--31--18--203-k8s-whisker--6d9cf74dd--xfzvz-" Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:00.579 [INFO][4882] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" Namespace="calico-system" Pod="whisker-6d9cf74dd-xfzvz" WorkloadEndpoint="ip--172--31--18--203-k8s-whisker--6d9cf74dd--xfzvz-eth0" Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:00.872 [INFO][4977] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" 
HandleID="k8s-pod-network.c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" Workload="ip--172--31--18--203-k8s-whisker--6d9cf74dd--xfzvz-eth0" Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:00.878 [INFO][4977] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" HandleID="k8s-pod-network.c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" Workload="ip--172--31--18--203-k8s-whisker--6d9cf74dd--xfzvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000363710), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-203", "pod":"whisker-6d9cf74dd-xfzvz", "timestamp":"2025-09-12 23:55:00.872461462 +0000 UTC"}, Hostname:"ip-172-31-18-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:00.878 [INFO][4977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:00.878 [INFO][4977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:00.878 [INFO][4977] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-203' Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:00.928 [INFO][4977] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" host="ip-172-31-18-203" Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:00.944 [INFO][4977] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-203" Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:00.961 [INFO][4977] ipam/ipam.go 511: Trying affinity for 192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:00.967 [INFO][4977] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:00.974 [INFO][4977] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:00.975 [INFO][4977] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" host="ip-172-31-18-203" Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:00.980 [INFO][4977] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1 Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:00.987 [INFO][4977] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" host="ip-172-31-18-203" Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:01.002 [INFO][4977] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.193/26] block=192.168.50.192/26 handle="k8s-pod-network.c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" host="ip-172-31-18-203" Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:01.002 [INFO][4977] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.193/26] 
handle="k8s-pod-network.c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" host="ip-172-31-18-203" Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:01.003 [INFO][4977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:01.179478 containerd[2151]: 2025-09-12 23:55:01.004 [INFO][4977] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.193/26] IPv6=[] ContainerID="c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" HandleID="k8s-pod-network.c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" Workload="ip--172--31--18--203-k8s-whisker--6d9cf74dd--xfzvz-eth0" Sep 12 23:55:01.183974 containerd[2151]: 2025-09-12 23:55:01.018 [INFO][4882] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" Namespace="calico-system" Pod="whisker-6d9cf74dd-xfzvz" WorkloadEndpoint="ip--172--31--18--203-k8s-whisker--6d9cf74dd--xfzvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-whisker--6d9cf74dd--xfzvz-eth0", GenerateName:"whisker-6d9cf74dd-", Namespace:"calico-system", SelfLink:"", UID:"ba1ff068-6af0-4643-baf0-831b7f97a0c7", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d9cf74dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"", Pod:"whisker-6d9cf74dd-xfzvz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic8481f5e80c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:01.183974 containerd[2151]: 2025-09-12 23:55:01.018 [INFO][4882] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.193/32] ContainerID="c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" Namespace="calico-system" Pod="whisker-6d9cf74dd-xfzvz" WorkloadEndpoint="ip--172--31--18--203-k8s-whisker--6d9cf74dd--xfzvz-eth0" Sep 12 23:55:01.183974 containerd[2151]: 2025-09-12 23:55:01.018 [INFO][4882] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8481f5e80c ContainerID="c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" Namespace="calico-system" Pod="whisker-6d9cf74dd-xfzvz" WorkloadEndpoint="ip--172--31--18--203-k8s-whisker--6d9cf74dd--xfzvz-eth0" Sep 12 23:55:01.183974 containerd[2151]: 2025-09-12 23:55:01.085 [INFO][4882] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" Namespace="calico-system" Pod="whisker-6d9cf74dd-xfzvz" WorkloadEndpoint="ip--172--31--18--203-k8s-whisker--6d9cf74dd--xfzvz-eth0" Sep 12 23:55:01.183974 containerd[2151]: 2025-09-12 23:55:01.086 [INFO][4882] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" Namespace="calico-system" Pod="whisker-6d9cf74dd-xfzvz" WorkloadEndpoint="ip--172--31--18--203-k8s-whisker--6d9cf74dd--xfzvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-whisker--6d9cf74dd--xfzvz-eth0", GenerateName:"whisker-6d9cf74dd-", Namespace:"calico-system", SelfLink:"", UID:"ba1ff068-6af0-4643-baf0-831b7f97a0c7", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d9cf74dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1", Pod:"whisker-6d9cf74dd-xfzvz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic8481f5e80c", MAC:"96:72:b6:6a:7e:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:01.183974 containerd[2151]: 2025-09-12 23:55:01.132 [INFO][4882] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1" Namespace="calico-system" Pod="whisker-6d9cf74dd-xfzvz" WorkloadEndpoint="ip--172--31--18--203-k8s-whisker--6d9cf74dd--xfzvz-eth0" Sep 12 23:55:01.252872 containerd[2151]: time="2025-09-12T23:55:01.252396752Z" level=info msg="StopPodSandbox for \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\"" Sep 12 23:55:01.256078 containerd[2151]: time="2025-09-12T23:55:01.255304844Z" level=info msg="StopPodSandbox for \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\"" Sep 12 23:55:01.627766 containerd[2151]: time="2025-09-12T23:55:01.622180378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:01.627766 containerd[2151]: time="2025-09-12T23:55:01.623736058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:01.627766 containerd[2151]: time="2025-09-12T23:55:01.623796934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:01.654943 containerd[2151]: time="2025-09-12T23:55:01.639914410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:02.314332 containerd[2151]: time="2025-09-12T23:55:02.311847081Z" level=info msg="StopPodSandbox for \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\"" Sep 12 23:55:02.317409 containerd[2151]: time="2025-09-12T23:55:02.315795513Z" level=info msg="StopPodSandbox for \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\"" Sep 12 23:55:02.553184 systemd-resolved[2019]: Under memory pressure, flushing caches. Sep 12 23:55:02.559284 systemd-journald[1604]: Under memory pressure, flushing caches. Sep 12 23:55:02.553335 systemd-resolved[2019]: Flushed all caches. Sep 12 23:55:02.712590 systemd[1]: Started sshd@7-172.31.18.203:22-147.75.109.163:34776.service - OpenSSH per-connection server daemon (147.75.109.163:34776). Sep 12 23:55:02.775397 systemd-networkd[1694]: cali06c11963e8d: Link UP Sep 12 23:55:02.786705 systemd-networkd[1694]: cali06c11963e8d: Gained carrier Sep 12 23:55:02.789249 (udev-worker)[5012]: Network interface NamePolicy= disabled on kernel command line. Sep 12 23:55:02.855698 kernel: bpftool[5189]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 23:55:02.872585 systemd-networkd[1694]: calic8481f5e80c: Gained IPv6LL Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:01.785 [INFO][5021] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:01.900 [INFO][5021] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0 calico-kube-controllers-5dc46b49f4- calico-system 8d62a2d0-ccd7-4178-8371-f2c20fc86ca0 951 0 2025-09-12 23:54:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5dc46b49f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-203 calico-kube-controllers-5dc46b49f4-xjvcm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali06c11963e8d [] [] }} ContainerID="9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" Namespace="calico-system" Pod="calico-kube-controllers-5dc46b49f4-xjvcm" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-" Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:01.900 [INFO][5021] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" Namespace="calico-system" Pod="calico-kube-controllers-5dc46b49f4-xjvcm" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.246 [INFO][5108] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" HandleID="k8s-pod-network.9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" Workload="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.246 [INFO][5108] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" 
HandleID="k8s-pod-network.9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" Workload="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ca20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-203", "pod":"calico-kube-controllers-5dc46b49f4-xjvcm", "timestamp":"2025-09-12 23:55:02.246535821 +0000 UTC"}, Hostname:"ip-172-31-18-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.246 [INFO][5108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.260 [INFO][5108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.260 [INFO][5108] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-203' Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.354 [INFO][5108] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" host="ip-172-31-18-203" Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.455 [INFO][5108] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-203" Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.507 [INFO][5108] ipam/ipam.go 511: Trying affinity for 192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.538 [INFO][5108] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.561 [INFO][5108] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.568 [INFO][5108] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" host="ip-172-31-18-203" Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.584 [INFO][5108] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135 Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.607 [INFO][5108] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" host="ip-172-31-18-203" Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.650 [INFO][5108] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.194/26] block=192.168.50.192/26 handle="k8s-pod-network.9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" host="ip-172-31-18-203" Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.652 [INFO][5108] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.194/26] handle="k8s-pod-network.9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" host="ip-172-31-18-203" Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.652 [INFO][5108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:55:03.042410 containerd[2151]: 2025-09-12 23:55:02.654 [INFO][5108] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.194/26] IPv6=[] ContainerID="9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" HandleID="k8s-pod-network.9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" Workload="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" Sep 12 23:55:03.050358 containerd[2151]: 2025-09-12 23:55:02.716 [INFO][5021] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" Namespace="calico-system" Pod="calico-kube-controllers-5dc46b49f4-xjvcm" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0", GenerateName:"calico-kube-controllers-5dc46b49f4-", Namespace:"calico-system", SelfLink:"", UID:"8d62a2d0-ccd7-4178-8371-f2c20fc86ca0", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dc46b49f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"", Pod:"calico-kube-controllers-5dc46b49f4-xjvcm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali06c11963e8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:03.050358 containerd[2151]: 2025-09-12 23:55:02.716 [INFO][5021] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.194/32] ContainerID="9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" Namespace="calico-system" Pod="calico-kube-controllers-5dc46b49f4-xjvcm" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" Sep 12 23:55:03.050358 containerd[2151]: 2025-09-12 23:55:02.716 [INFO][5021] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06c11963e8d ContainerID="9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" Namespace="calico-system" Pod="calico-kube-controllers-5dc46b49f4-xjvcm" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" Sep 12 23:55:03.050358 containerd[2151]: 2025-09-12 23:55:02.845 [INFO][5021] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" Namespace="calico-system" Pod="calico-kube-controllers-5dc46b49f4-xjvcm" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" Sep 12 23:55:03.050358 
containerd[2151]: 2025-09-12 23:55:02.854 [INFO][5021] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" Namespace="calico-system" Pod="calico-kube-controllers-5dc46b49f4-xjvcm" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0", GenerateName:"calico-kube-controllers-5dc46b49f4-", Namespace:"calico-system", SelfLink:"", UID:"8d62a2d0-ccd7-4178-8371-f2c20fc86ca0", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dc46b49f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135", Pod:"calico-kube-controllers-5dc46b49f4-xjvcm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali06c11963e8d", MAC:"46:38:7c:97:f4:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:03.050358 containerd[2151]: 2025-09-12 23:55:02.949 [INFO][5021] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135" Namespace="calico-system" Pod="calico-kube-controllers-5dc46b49f4-xjvcm" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" Sep 12 23:55:03.081307 sshd[5177]: Accepted publickey for core from 147.75.109.163 port 34776 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:03.096832 sshd[5177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:03.110832 containerd[2151]: 2025-09-12 23:55:02.093 [INFO][5064] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Sep 12 23:55:03.110832 containerd[2151]: 2025-09-12 23:55:02.105 [INFO][5064] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" iface="eth0" netns="/var/run/netns/cni-87169d7d-9cbe-7632-16d3-d9742790073b" Sep 12 23:55:03.110832 containerd[2151]: 2025-09-12 23:55:02.110 [INFO][5064] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" iface="eth0" netns="/var/run/netns/cni-87169d7d-9cbe-7632-16d3-d9742790073b" Sep 12 23:55:03.110832 containerd[2151]: 2025-09-12 23:55:02.111 [INFO][5064] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" iface="eth0" netns="/var/run/netns/cni-87169d7d-9cbe-7632-16d3-d9742790073b" Sep 12 23:55:03.110832 containerd[2151]: 2025-09-12 23:55:02.112 [INFO][5064] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Sep 12 23:55:03.110832 containerd[2151]: 2025-09-12 23:55:02.113 [INFO][5064] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Sep 12 23:55:03.110832 containerd[2151]: 2025-09-12 23:55:02.684 [INFO][5123] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" HandleID="k8s-pod-network.ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" Sep 12 23:55:03.110832 containerd[2151]: 2025-09-12 23:55:02.692 [INFO][5123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:03.110832 containerd[2151]: 2025-09-12 23:55:02.697 [INFO][5123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:03.110832 containerd[2151]: 2025-09-12 23:55:02.877 [WARNING][5123] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" HandleID="k8s-pod-network.ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" Sep 12 23:55:03.110832 containerd[2151]: 2025-09-12 23:55:02.877 [INFO][5123] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" HandleID="k8s-pod-network.ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" Sep 12 23:55:03.110832 containerd[2151]: 2025-09-12 23:55:02.911 [INFO][5123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:03.110832 containerd[2151]: 2025-09-12 23:55:03.018 [INFO][5064] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Sep 12 23:55:03.116847 containerd[2151]: time="2025-09-12T23:55:03.116555841Z" level=info msg="TearDown network for sandbox \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\" successfully" Sep 12 23:55:03.118613 containerd[2151]: time="2025-09-12T23:55:03.118552713Z" level=info msg="StopPodSandbox for \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\" returns successfully" Sep 12 23:55:03.126289 systemd[1]: run-netns-cni\x2d87169d7d\x2d9cbe\x2d7632\x2d16d3\x2dd9742790073b.mount: Deactivated successfully. 
Sep 12 23:55:03.135574 containerd[2151]: time="2025-09-12T23:55:03.134680077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c48bb7547-2nt2f,Uid:077d1d76-d7b8-4b1c-bc6e-9119a67ba30b,Namespace:calico-apiserver,Attempt:1,}" Sep 12 23:55:03.147237 systemd-logind[2118]: New session 8 of user core. Sep 12 23:55:03.155335 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 23:55:03.162812 containerd[2151]: time="2025-09-12T23:55:03.161714818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d9cf74dd-xfzvz,Uid:ba1ff068-6af0-4643-baf0-831b7f97a0c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1\"" Sep 12 23:55:03.212298 containerd[2151]: time="2025-09-12T23:55:03.211898062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 23:55:03.605062 containerd[2151]: time="2025-09-12T23:55:03.602781840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:03.605062 containerd[2151]: time="2025-09-12T23:55:03.602899476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:03.605062 containerd[2151]: time="2025-09-12T23:55:03.604137084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:03.623538 containerd[2151]: time="2025-09-12T23:55:03.612758952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:03.673545 systemd-networkd[1694]: cali81adfd4b05f: Link UP Sep 12 23:55:03.685569 systemd-networkd[1694]: cali81adfd4b05f: Gained carrier Sep 12 23:55:03.711293 containerd[2151]: 2025-09-12 23:55:02.172 [INFO][5067] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Sep 12 23:55:03.711293 containerd[2151]: 2025-09-12 23:55:02.174 [INFO][5067] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" iface="eth0" netns="/var/run/netns/cni-6195ce12-b812-a248-ccbe-8b07736b279c" Sep 12 23:55:03.711293 containerd[2151]: 2025-09-12 23:55:02.179 [INFO][5067] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" iface="eth0" netns="/var/run/netns/cni-6195ce12-b812-a248-ccbe-8b07736b279c" Sep 12 23:55:03.711293 containerd[2151]: 2025-09-12 23:55:02.195 [INFO][5067] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" iface="eth0" netns="/var/run/netns/cni-6195ce12-b812-a248-ccbe-8b07736b279c" Sep 12 23:55:03.711293 containerd[2151]: 2025-09-12 23:55:02.195 [INFO][5067] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Sep 12 23:55:03.711293 containerd[2151]: 2025-09-12 23:55:02.197 [INFO][5067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Sep 12 23:55:03.711293 containerd[2151]: 2025-09-12 23:55:03.207 [INFO][5140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" HandleID="k8s-pod-network.559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Workload="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" Sep 12 23:55:03.711293 containerd[2151]: 2025-09-12 23:55:03.207 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:03.711293 containerd[2151]: 2025-09-12 23:55:03.474 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:03.711293 containerd[2151]: 2025-09-12 23:55:03.565 [WARNING][5140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" HandleID="k8s-pod-network.559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Workload="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" Sep 12 23:55:03.711293 containerd[2151]: 2025-09-12 23:55:03.565 [INFO][5140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" HandleID="k8s-pod-network.559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Workload="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" Sep 12 23:55:03.711293 containerd[2151]: 2025-09-12 23:55:03.605 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:03.711293 containerd[2151]: 2025-09-12 23:55:03.646 [INFO][5067] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Sep 12 23:55:03.717057 containerd[2151]: time="2025-09-12T23:55:03.714830832Z" level=info msg="TearDown network for sandbox \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\" successfully" Sep 12 23:55:03.717202 containerd[2151]: time="2025-09-12T23:55:03.717167760Z" level=info msg="StopPodSandbox for \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\" returns successfully" Sep 12 23:55:03.719316 containerd[2151]: time="2025-09-12T23:55:03.718235640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-bhgbj,Uid:082bf9af-912b-4ff6-8411-79fadb8bf200,Namespace:calico-system,Attempt:1,}" Sep 12 23:55:03.735542 systemd[1]: run-netns-cni\x2d6195ce12\x2db812\x2da248\x2dccbe\x2d8b07736b279c.mount: Deactivated successfully. Sep 12 23:55:03.924790 sshd[5177]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:03.952305 systemd[1]: sshd@7-172.31.18.203:22-147.75.109.163:34776.service: Deactivated successfully. 
Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:02.066 [INFO][5042] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0 coredns-7c65d6cfc9- kube-system 9eeb1078-74ba-4b83-8069-cea1b65e8744 952 0 2025-09-12 23:54:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-203 coredns-7c65d6cfc9-j88mc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali81adfd4b05f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j88mc" WorkloadEndpoint="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-" Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:02.066 [INFO][5042] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j88mc" WorkloadEndpoint="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:02.968 [INFO][5133] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" HandleID="k8s-pod-network.5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:03.014 [INFO][5133] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" HandleID="k8s-pod-network.5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ccc0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-203", "pod":"coredns-7c65d6cfc9-j88mc", "timestamp":"2025-09-12 23:55:02.968538337 +0000 UTC"}, Hostname:"ip-172-31-18-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:03.014 [INFO][5133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:03.014 [INFO][5133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:03.014 [INFO][5133] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-203' Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:03.111 [INFO][5133] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" host="ip-172-31-18-203" Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:03.238 [INFO][5133] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-203" Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:03.280 [INFO][5133] ipam/ipam.go 511: Trying affinity for 192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:03.296 [INFO][5133] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:03.335 [INFO][5133] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:03.335 [INFO][5133] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" host="ip-172-31-18-203" Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:03.364 [INFO][5133] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6 Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:03.426 [INFO][5133] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" host="ip-172-31-18-203" Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:03.467 [INFO][5133] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.195/26] block=192.168.50.192/26 handle="k8s-pod-network.5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" host="ip-172-31-18-203" Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:03.467 [INFO][5133] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.195/26] handle="k8s-pod-network.5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" host="ip-172-31-18-203" Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:03.482 [INFO][5133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:55:03.958790 containerd[2151]: 2025-09-12 23:55:03.482 [INFO][5133] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.195/26] IPv6=[] ContainerID="5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" HandleID="k8s-pod-network.5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" Sep 12 23:55:03.972062 containerd[2151]: 2025-09-12 23:55:03.545 [INFO][5042] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j88mc" WorkloadEndpoint="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9eeb1078-74ba-4b83-8069-cea1b65e8744", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"", Pod:"coredns-7c65d6cfc9-j88mc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81adfd4b05f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:03.972062 containerd[2151]: 2025-09-12 23:55:03.554 [INFO][5042] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.195/32] ContainerID="5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j88mc" WorkloadEndpoint="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" Sep 12 23:55:03.972062 containerd[2151]: 2025-09-12 23:55:03.554 [INFO][5042] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81adfd4b05f ContainerID="5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j88mc" WorkloadEndpoint="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" Sep 12 23:55:03.972062 containerd[2151]: 2025-09-12 23:55:03.735 [INFO][5042] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j88mc" 
WorkloadEndpoint="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" Sep 12 23:55:03.972062 containerd[2151]: 2025-09-12 23:55:03.795 [INFO][5042] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j88mc" WorkloadEndpoint="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9eeb1078-74ba-4b83-8069-cea1b65e8744", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6", Pod:"coredns-7c65d6cfc9-j88mc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81adfd4b05f", MAC:"46:18:7d:13:eb:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:03.972062 containerd[2151]: 2025-09-12 23:55:03.879 [INFO][5042] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6" Namespace="kube-system" Pod="coredns-7c65d6cfc9-j88mc" WorkloadEndpoint="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" Sep 12 23:55:03.959906 systemd-networkd[1694]: cali06c11963e8d: Gained IPv6LL Sep 12 23:55:03.965298 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 23:55:03.968488 systemd-logind[2118]: Session 8 logged out. Waiting for processes to exit. Sep 12 23:55:03.978003 systemd-logind[2118]: Removed session 8. 
Sep 12 23:55:04.091044 systemd-networkd[1694]: vxlan.calico: Link UP Sep 12 23:55:04.091068 systemd-networkd[1694]: vxlan.calico: Gained carrier Sep 12 23:55:04.285382 containerd[2151]: time="2025-09-12T23:55:04.285253139Z" level=info msg="StopPodSandbox for \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\"" Sep 12 23:55:04.332735 containerd[2151]: time="2025-09-12T23:55:04.332147711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5dc46b49f4-xjvcm,Uid:8d62a2d0-ccd7-4178-8371-f2c20fc86ca0,Namespace:calico-system,Attempt:1,} returns sandbox id \"9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135\"" Sep 12 23:55:04.416290 containerd[2151]: 2025-09-12 23:55:03.463 [INFO][5168] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Sep 12 23:55:04.416290 containerd[2151]: 2025-09-12 23:55:03.464 [INFO][5168] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" iface="eth0" netns="/var/run/netns/cni-599f19ed-3765-f168-01f2-b62daaf74f0c" Sep 12 23:55:04.416290 containerd[2151]: 2025-09-12 23:55:03.465 [INFO][5168] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" iface="eth0" netns="/var/run/netns/cni-599f19ed-3765-f168-01f2-b62daaf74f0c" Sep 12 23:55:04.416290 containerd[2151]: 2025-09-12 23:55:03.469 [INFO][5168] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" iface="eth0" netns="/var/run/netns/cni-599f19ed-3765-f168-01f2-b62daaf74f0c" Sep 12 23:55:04.416290 containerd[2151]: 2025-09-12 23:55:03.469 [INFO][5168] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Sep 12 23:55:04.416290 containerd[2151]: 2025-09-12 23:55:03.469 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Sep 12 23:55:04.416290 containerd[2151]: 2025-09-12 23:55:04.291 [INFO][5256] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" HandleID="k8s-pod-network.75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" Sep 12 23:55:04.416290 containerd[2151]: 2025-09-12 23:55:04.292 [INFO][5256] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:04.416290 containerd[2151]: 2025-09-12 23:55:04.292 [INFO][5256] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:04.416290 containerd[2151]: 2025-09-12 23:55:04.360 [WARNING][5256] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" HandleID="k8s-pod-network.75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" Sep 12 23:55:04.416290 containerd[2151]: 2025-09-12 23:55:04.360 [INFO][5256] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" HandleID="k8s-pod-network.75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" Sep 12 23:55:04.416290 containerd[2151]: 2025-09-12 23:55:04.367 [INFO][5256] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:04.416290 containerd[2151]: 2025-09-12 23:55:04.396 [INFO][5168] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Sep 12 23:55:04.421156 containerd[2151]: time="2025-09-12T23:55:04.416533752Z" level=info msg="TearDown network for sandbox \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\" successfully" Sep 12 23:55:04.421156 containerd[2151]: time="2025-09-12T23:55:04.416581284Z" level=info msg="StopPodSandbox for \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\" returns successfully" Sep 12 23:55:04.421156 containerd[2151]: time="2025-09-12T23:55:04.417508272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h78v2,Uid:3fae242f-71cb-4cc8-a7fa-b06a5787570e,Namespace:kube-system,Attempt:1,}" Sep 12 23:55:04.435082 systemd[1]: run-netns-cni\x2d599f19ed\x2d3765\x2df168\x2d01f2\x2db62daaf74f0c.mount: Deactivated successfully. Sep 12 23:55:04.498999 containerd[2151]: time="2025-09-12T23:55:04.493452492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:04.498999 containerd[2151]: time="2025-09-12T23:55:04.493607448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:04.519960 containerd[2151]: time="2025-09-12T23:55:04.507539856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:04.521104 containerd[2151]: time="2025-09-12T23:55:04.520870044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:04.551023 containerd[2151]: 2025-09-12 23:55:03.972 [INFO][5167] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Sep 12 23:55:04.551023 containerd[2151]: 2025-09-12 23:55:03.973 [INFO][5167] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" iface="eth0" netns="/var/run/netns/cni-058d1f4a-e871-ef1d-39df-0471efac7662" Sep 12 23:55:04.551023 containerd[2151]: 2025-09-12 23:55:03.978 [INFO][5167] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" iface="eth0" netns="/var/run/netns/cni-058d1f4a-e871-ef1d-39df-0471efac7662" Sep 12 23:55:04.551023 containerd[2151]: 2025-09-12 23:55:03.979 [INFO][5167] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" iface="eth0" netns="/var/run/netns/cni-058d1f4a-e871-ef1d-39df-0471efac7662" Sep 12 23:55:04.551023 containerd[2151]: 2025-09-12 23:55:03.980 [INFO][5167] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Sep 12 23:55:04.551023 containerd[2151]: 2025-09-12 23:55:03.980 [INFO][5167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Sep 12 23:55:04.551023 containerd[2151]: 2025-09-12 23:55:04.408 [INFO][5318] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" HandleID="k8s-pod-network.c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Workload="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" Sep 12 23:55:04.551023 containerd[2151]: 2025-09-12 23:55:04.408 [INFO][5318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:04.551023 containerd[2151]: 2025-09-12 23:55:04.408 [INFO][5318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:04.551023 containerd[2151]: 2025-09-12 23:55:04.466 [WARNING][5318] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" HandleID="k8s-pod-network.c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Workload="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" Sep 12 23:55:04.551023 containerd[2151]: 2025-09-12 23:55:04.470 [INFO][5318] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" HandleID="k8s-pod-network.c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Workload="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" Sep 12 23:55:04.551023 containerd[2151]: 2025-09-12 23:55:04.498 [INFO][5318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:04.551023 containerd[2151]: 2025-09-12 23:55:04.527 [INFO][5167] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Sep 12 23:55:04.569765 containerd[2151]: time="2025-09-12T23:55:04.558922897Z" level=info msg="TearDown network for sandbox \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\" successfully" Sep 12 23:55:04.571239 containerd[2151]: time="2025-09-12T23:55:04.571151425Z" level=info msg="StopPodSandbox for \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\" returns successfully" Sep 12 23:55:04.578736 containerd[2151]: time="2025-09-12T23:55:04.578172301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vb427,Uid:e874f212-ec82-4dc1-a7f2-b6ff94f1cb99,Namespace:calico-system,Attempt:1,}" Sep 12 23:55:04.605770 systemd-journald[1604]: Under memory pressure, flushing caches. Sep 12 23:55:04.600728 systemd-resolved[2019]: Under memory pressure, flushing caches. Sep 12 23:55:04.600761 systemd-resolved[2019]: Flushed all caches. 
Sep 12 23:55:05.044505 containerd[2151]: time="2025-09-12T23:55:05.044315807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j88mc,Uid:9eeb1078-74ba-4b83-8069-cea1b65e8744,Namespace:kube-system,Attempt:1,} returns sandbox id \"5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6\"" Sep 12 23:55:05.062752 containerd[2151]: time="2025-09-12T23:55:05.061668947Z" level=info msg="CreateContainer within sandbox \"5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:55:05.133269 systemd[1]: run-netns-cni\x2d058d1f4a\x2de871\x2def1d\x2d39df\x2d0471efac7662.mount: Deactivated successfully. Sep 12 23:55:05.161495 systemd-networkd[1694]: cali23946670777: Link UP Sep 12 23:55:05.169120 systemd-networkd[1694]: cali23946670777: Gained carrier Sep 12 23:55:05.174975 systemd-networkd[1694]: cali81adfd4b05f: Gained IPv6LL Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:04.232 [INFO][5243] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0 calico-apiserver-5c48bb7547- calico-apiserver 077d1d76-d7b8-4b1c-bc6e-9119a67ba30b 966 0 2025-09-12 23:54:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c48bb7547 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-203 calico-apiserver-5c48bb7547-2nt2f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali23946670777 [] [] }} ContainerID="769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" Namespace="calico-apiserver" Pod="calico-apiserver-5c48bb7547-2nt2f" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-" Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:04.233 [INFO][5243] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" Namespace="calico-apiserver" Pod="calico-apiserver-5c48bb7547-2nt2f" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:04.644 [INFO][5369] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" HandleID="k8s-pod-network.769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:04.655 [INFO][5369] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" HandleID="k8s-pod-network.769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034e3d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-203", "pod":"calico-apiserver-5c48bb7547-2nt2f", "timestamp":"2025-09-12 23:55:04.644241145 +0000 UTC"}, Hostname:"ip-172-31-18-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:04.657 [INFO][5369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:04.660 [INFO][5369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:04.661 [INFO][5369] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-203' Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:04.720 [INFO][5369] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" host="ip-172-31-18-203" Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:04.774 [INFO][5369] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-203" Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:04.834 [INFO][5369] ipam/ipam.go 511: Trying affinity for 192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:04.892 [INFO][5369] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:04.916 [INFO][5369] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:04.929 [INFO][5369] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" host="ip-172-31-18-203" Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:04.940 [INFO][5369] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168 Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:04.993 [INFO][5369] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" host="ip-172-31-18-203" Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:05.035 [INFO][5369] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.196/26] block=192.168.50.192/26 handle="k8s-pod-network.769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" host="ip-172-31-18-203" Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:05.041 [INFO][5369] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.196/26] handle="k8s-pod-network.769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" host="ip-172-31-18-203" Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:05.041 [INFO][5369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:55:05.227122 containerd[2151]: 2025-09-12 23:55:05.041 [INFO][5369] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.196/26] IPv6=[] ContainerID="769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" HandleID="k8s-pod-network.769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" Sep 12 23:55:05.228606 containerd[2151]: 2025-09-12 23:55:05.064 [INFO][5243] cni-plugin/k8s.go 418: Populated endpoint ContainerID="769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" Namespace="calico-apiserver" Pod="calico-apiserver-5c48bb7547-2nt2f" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0", GenerateName:"calico-apiserver-5c48bb7547-", Namespace:"calico-apiserver", SelfLink:"", UID:"077d1d76-d7b8-4b1c-bc6e-9119a67ba30b", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c48bb7547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"", Pod:"calico-apiserver-5c48bb7547-2nt2f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23946670777", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:05.228606 containerd[2151]: 2025-09-12 23:55:05.064 [INFO][5243] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.196/32] ContainerID="769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" Namespace="calico-apiserver" Pod="calico-apiserver-5c48bb7547-2nt2f" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" Sep 12 23:55:05.228606 containerd[2151]: 2025-09-12 23:55:05.064 [INFO][5243] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23946670777 ContainerID="769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" Namespace="calico-apiserver" Pod="calico-apiserver-5c48bb7547-2nt2f" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" Sep 12 23:55:05.228606 containerd[2151]: 2025-09-12 23:55:05.181 [INFO][5243] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" Namespace="calico-apiserver" Pod="calico-apiserver-5c48bb7547-2nt2f" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" Sep 12 23:55:05.228606 containerd[2151]: 2025-09-12 23:55:05.184 [INFO][5243] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" Namespace="calico-apiserver" Pod="calico-apiserver-5c48bb7547-2nt2f" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0", GenerateName:"calico-apiserver-5c48bb7547-", Namespace:"calico-apiserver", SelfLink:"", UID:"077d1d76-d7b8-4b1c-bc6e-9119a67ba30b", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c48bb7547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168", Pod:"calico-apiserver-5c48bb7547-2nt2f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23946670777", MAC:"ea:50:1b:67:6f:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:05.228606 containerd[2151]: 2025-09-12 23:55:05.206 [INFO][5243] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168" Namespace="calico-apiserver" Pod="calico-apiserver-5c48bb7547-2nt2f" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" Sep 12 23:55:05.362898 systemd-networkd[1694]: cali95460f8f173: Link UP Sep 12 23:55:05.365650 systemd-networkd[1694]: cali95460f8f173: Gained carrier Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:04.663 [INFO][5326] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0 goldmane-7988f88666- calico-system 082bf9af-912b-4ff6-8411-79fadb8bf200 971 0 2025-09-12 23:54:35 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-18-203 goldmane-7988f88666-bhgbj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali95460f8f173 [] [] }} ContainerID="1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" Namespace="calico-system" Pod="goldmane-7988f88666-bhgbj" WorkloadEndpoint="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-" Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:04.667 [INFO][5326] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" 
Namespace="calico-system" Pod="goldmane-7988f88666-bhgbj" WorkloadEndpoint="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.090 [INFO][5438] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" HandleID="k8s-pod-network.1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" Workload="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.090 [INFO][5438] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" HandleID="k8s-pod-network.1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" Workload="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c700), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-203", "pod":"goldmane-7988f88666-bhgbj", "timestamp":"2025-09-12 23:55:05.090496703 +0000 UTC"}, Hostname:"ip-172-31-18-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.091 [INFO][5438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.091 [INFO][5438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.091 [INFO][5438] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-203' Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.155 [INFO][5438] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" host="ip-172-31-18-203" Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.194 [INFO][5438] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-203" Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.223 [INFO][5438] ipam/ipam.go 511: Trying affinity for 192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.235 [INFO][5438] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.243 [INFO][5438] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.243 [INFO][5438] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" host="ip-172-31-18-203" Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.246 [INFO][5438] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226 Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.279 [INFO][5438] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" host="ip-172-31-18-203" Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.304 [INFO][5438] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.197/26] block=192.168.50.192/26 handle="k8s-pod-network.1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" host="ip-172-31-18-203" Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.304 [INFO][5438] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.197/26] handle="k8s-pod-network.1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" host="ip-172-31-18-203" Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.304 [INFO][5438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:05.474115 containerd[2151]: 2025-09-12 23:55:05.307 [INFO][5438] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.197/26] IPv6=[] ContainerID="1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" HandleID="k8s-pod-network.1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" Workload="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" Sep 12 23:55:05.476994 containerd[2151]: 2025-09-12 23:55:05.315 [INFO][5326] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" Namespace="calico-system" Pod="goldmane-7988f88666-bhgbj" WorkloadEndpoint="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"082bf9af-912b-4ff6-8411-79fadb8bf200", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"", Pod:"goldmane-7988f88666-bhgbj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali95460f8f173", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:05.476994 containerd[2151]: 2025-09-12 23:55:05.323 [INFO][5326] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.197/32] ContainerID="1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" Namespace="calico-system" Pod="goldmane-7988f88666-bhgbj" WorkloadEndpoint="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" Sep 12 23:55:05.476994 containerd[2151]: 2025-09-12 23:55:05.323 [INFO][5326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95460f8f173 ContainerID="1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" Namespace="calico-system" Pod="goldmane-7988f88666-bhgbj" WorkloadEndpoint="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" Sep 12 23:55:05.476994 containerd[2151]: 2025-09-12 
23:55:05.388 [INFO][5326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" Namespace="calico-system" Pod="goldmane-7988f88666-bhgbj" WorkloadEndpoint="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" Sep 12 23:55:05.476994 containerd[2151]: 2025-09-12 23:55:05.394 [INFO][5326] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" Namespace="calico-system" Pod="goldmane-7988f88666-bhgbj" WorkloadEndpoint="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"082bf9af-912b-4ff6-8411-79fadb8bf200", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226", Pod:"goldmane-7988f88666-bhgbj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali95460f8f173", MAC:"76:3e:cc:62:fc:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:05.476994 containerd[2151]: 2025-09-12 23:55:05.437 [INFO][5326] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226" Namespace="calico-system" Pod="goldmane-7988f88666-bhgbj" WorkloadEndpoint="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" Sep 12 23:55:05.510757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount70166217.mount: Deactivated successfully. Sep 12 23:55:05.557732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2554239596.mount: Deactivated successfully. 
Sep 12 23:55:05.597876 containerd[2151]: time="2025-09-12T23:55:05.597745994Z" level=info msg="CreateContainer within sandbox \"5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ac7f31b84ce56a9b7083aa5bdc8af7c3e820887e718f71457c63424335d2ce9\"" Sep 12 23:55:05.608467 containerd[2151]: time="2025-09-12T23:55:05.606389414Z" level=info msg="StartContainer for \"2ac7f31b84ce56a9b7083aa5bdc8af7c3e820887e718f71457c63424335d2ce9\"" Sep 12 23:55:05.628508 containerd[2151]: 2025-09-12 23:55:04.992 [INFO][5390] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Sep 12 23:55:05.628508 containerd[2151]: 2025-09-12 23:55:05.001 [INFO][5390] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" iface="eth0" netns="/var/run/netns/cni-da68f1f5-d05b-d205-5dbc-653ba3f0cb80" Sep 12 23:55:05.628508 containerd[2151]: 2025-09-12 23:55:05.004 [INFO][5390] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" iface="eth0" netns="/var/run/netns/cni-da68f1f5-d05b-d205-5dbc-653ba3f0cb80" Sep 12 23:55:05.628508 containerd[2151]: 2025-09-12 23:55:05.006 [INFO][5390] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" iface="eth0" netns="/var/run/netns/cni-da68f1f5-d05b-d205-5dbc-653ba3f0cb80" Sep 12 23:55:05.628508 containerd[2151]: 2025-09-12 23:55:05.006 [INFO][5390] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Sep 12 23:55:05.628508 containerd[2151]: 2025-09-12 23:55:05.006 [INFO][5390] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Sep 12 23:55:05.628508 containerd[2151]: 2025-09-12 23:55:05.492 [INFO][5482] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" HandleID="k8s-pod-network.15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" Sep 12 23:55:05.628508 containerd[2151]: 2025-09-12 23:55:05.515 [INFO][5482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:05.628508 containerd[2151]: 2025-09-12 23:55:05.515 [INFO][5482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:05.628508 containerd[2151]: 2025-09-12 23:55:05.553 [WARNING][5482] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" HandleID="k8s-pod-network.15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" Sep 12 23:55:05.628508 containerd[2151]: 2025-09-12 23:55:05.554 [INFO][5482] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" HandleID="k8s-pod-network.15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" Sep 12 23:55:05.628508 containerd[2151]: 2025-09-12 23:55:05.570 [INFO][5482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:05.628508 containerd[2151]: 2025-09-12 23:55:05.591 [INFO][5390] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Sep 12 23:55:05.637135 containerd[2151]: time="2025-09-12T23:55:05.636953114Z" level=info msg="TearDown network for sandbox \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\" successfully" Sep 12 23:55:05.637135 containerd[2151]: time="2025-09-12T23:55:05.637018814Z" level=info msg="StopPodSandbox for \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\" returns successfully" Sep 12 23:55:05.640733 containerd[2151]: time="2025-09-12T23:55:05.640447214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c48bb7547-pbxdf,Uid:9f5b3f0c-b02e-481f-a083-c8af4d9dc294,Namespace:calico-apiserver,Attempt:1,}" Sep 12 23:55:05.691313 containerd[2151]: time="2025-09-12T23:55:05.681031502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:05.691313 containerd[2151]: time="2025-09-12T23:55:05.681160802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:05.691313 containerd[2151]: time="2025-09-12T23:55:05.681199214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:05.691313 containerd[2151]: time="2025-09-12T23:55:05.681437690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:05.752585 systemd-networkd[1694]: vxlan.calico: Gained IPv6LL Sep 12 23:55:05.773007 containerd[2151]: time="2025-09-12T23:55:05.764174715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:05.774838 containerd[2151]: time="2025-09-12T23:55:05.774205491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:05.774838 containerd[2151]: time="2025-09-12T23:55:05.774282399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:05.781125 containerd[2151]: time="2025-09-12T23:55:05.778581207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:05.948925 systemd-networkd[1694]: cali5f0a31a1ea0: Link UP Sep 12 23:55:05.951773 systemd-networkd[1694]: cali5f0a31a1ea0: Gained carrier Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.166 [INFO][5406] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0 coredns-7c65d6cfc9- kube-system 3fae242f-71cb-4cc8-a7fa-b06a5787570e 1011 0 2025-09-12 23:54:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-203 coredns-7c65d6cfc9-h78v2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5f0a31a1ea0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h78v2" WorkloadEndpoint="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-" Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.166 [INFO][5406] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h78v2" WorkloadEndpoint="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.690 [INFO][5506] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" HandleID="k8s-pod-network.e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.692 [INFO][5506] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" HandleID="k8s-pod-network.e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000422f90), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-203", "pod":"coredns-7c65d6cfc9-h78v2", "timestamp":"2025-09-12 23:55:05.690785954 +0000 UTC"}, Hostname:"ip-172-31-18-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.692 [INFO][5506] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.692 [INFO][5506] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.692 [INFO][5506] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-203' Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.726 [INFO][5506] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" host="ip-172-31-18-203" Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.746 [INFO][5506] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-203" Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.769 [INFO][5506] ipam/ipam.go 511: Trying affinity for 192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.777 [INFO][5506] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.795 [INFO][5506] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.795 [INFO][5506] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" host="ip-172-31-18-203" Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.820 [INFO][5506] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649 Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.855 [INFO][5506] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" host="ip-172-31-18-203" Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.890 [INFO][5506] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.198/26] block=192.168.50.192/26 handle="k8s-pod-network.e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" host="ip-172-31-18-203" Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.890 [INFO][5506] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.198/26] handle="k8s-pod-network.e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" host="ip-172-31-18-203" Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.890 [INFO][5506] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:55:06.104352 containerd[2151]: 2025-09-12 23:55:05.890 [INFO][5506] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.198/26] IPv6=[] ContainerID="e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" HandleID="k8s-pod-network.e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" Sep 12 23:55:06.117465 containerd[2151]: 2025-09-12 23:55:05.920 [INFO][5406] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h78v2" WorkloadEndpoint="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3fae242f-71cb-4cc8-a7fa-b06a5787570e", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"", Pod:"coredns-7c65d6cfc9-h78v2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f0a31a1ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:06.117465 containerd[2151]: 2025-09-12 23:55:05.922 [INFO][5406] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.198/32] ContainerID="e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h78v2" WorkloadEndpoint="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" Sep 12 23:55:06.117465 containerd[2151]: 2025-09-12 23:55:05.922 [INFO][5406] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f0a31a1ea0 ContainerID="e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h78v2" WorkloadEndpoint="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" Sep 12 23:55:06.117465 containerd[2151]: 2025-09-12 23:55:05.991 [INFO][5406] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h78v2" 
WorkloadEndpoint="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" Sep 12 23:55:06.117465 containerd[2151]: 2025-09-12 23:55:06.021 [INFO][5406] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h78v2" WorkloadEndpoint="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3fae242f-71cb-4cc8-a7fa-b06a5787570e", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649", Pod:"coredns-7c65d6cfc9-h78v2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f0a31a1ea0", MAC:"62:93:e9:96:16:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:06.117465 containerd[2151]: 2025-09-12 23:55:06.059 [INFO][5406] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649" Namespace="kube-system" Pod="coredns-7c65d6cfc9-h78v2" WorkloadEndpoint="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" Sep 12 23:55:06.162985 systemd[1]: run-netns-cni\x2dda68f1f5\x2dd05b\x2dd205\x2d5dbc\x2d653ba3f0cb80.mount: Deactivated successfully. 
Sep 12 23:55:06.285666 containerd[2151]: time="2025-09-12T23:55:06.285373381Z" level=info msg="StopPodSandbox for \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\"" Sep 12 23:55:06.377541 systemd-networkd[1694]: cali9eefa4107f9: Link UP Sep 12 23:55:06.379879 systemd-networkd[1694]: cali9eefa4107f9: Gained carrier Sep 12 23:55:06.391880 systemd-networkd[1694]: cali95460f8f173: Gained IPv6LL Sep 12 23:55:06.495401 containerd[2151]: time="2025-09-12T23:55:06.495315710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-bhgbj,Uid:082bf9af-912b-4ff6-8411-79fadb8bf200,Namespace:calico-system,Attempt:1,} returns sandbox id \"1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226\"" Sep 12 23:55:06.520144 containerd[2151]: time="2025-09-12T23:55:06.519981686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:06.520999 containerd[2151]: time="2025-09-12T23:55:06.520845362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:06.521184 containerd[2151]: time="2025-09-12T23:55:06.520970642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:06.526209 containerd[2151]: time="2025-09-12T23:55:06.522089834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:05.458 [INFO][5441] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0 csi-node-driver- calico-system e874f212-ec82-4dc1-a7f2-b6ff94f1cb99 1015 0 2025-09-12 23:54:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-203 csi-node-driver-vb427 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9eefa4107f9 [] [] }} ContainerID="ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" Namespace="calico-system" Pod="csi-node-driver-vb427" WorkloadEndpoint="ip--172--31--18--203-k8s-csi--node--driver--vb427-" Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:05.461 [INFO][5441] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" Namespace="calico-system" Pod="csi-node-driver-vb427" WorkloadEndpoint="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:05.896 [INFO][5532] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" HandleID="k8s-pod-network.ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" Workload="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:05.896 [INFO][5532] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" 
HandleID="k8s-pod-network.ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" Workload="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d0f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-203", "pod":"csi-node-driver-vb427", "timestamp":"2025-09-12 23:55:05.893683383 +0000 UTC"}, Hostname:"ip-172-31-18-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:05.900 [INFO][5532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:05.901 [INFO][5532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:05.901 [INFO][5532] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-203' Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:05.957 [INFO][5532] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" host="ip-172-31-18-203" Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:05.997 [INFO][5532] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-203" Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:06.090 [INFO][5532] ipam/ipam.go 511: Trying affinity for 192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:06.100 [INFO][5532] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:06.165 [INFO][5532] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:06.193 [INFO][5532] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" host="ip-172-31-18-203" Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:06.206 [INFO][5532] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47 Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:06.260 [INFO][5532] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" host="ip-172-31-18-203" Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:06.291 [INFO][5532] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.199/26] block=192.168.50.192/26 handle="k8s-pod-network.ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" host="ip-172-31-18-203" Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:06.293 [INFO][5532] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.199/26] handle="k8s-pod-network.ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" host="ip-172-31-18-203" Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:06.293 [INFO][5532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:55:06.526443 containerd[2151]: 2025-09-12 23:55:06.293 [INFO][5532] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.199/26] IPv6=[] ContainerID="ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" HandleID="k8s-pod-network.ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" Workload="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" Sep 12 23:55:06.527842 containerd[2151]: 2025-09-12 23:55:06.336 [INFO][5441] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" Namespace="calico-system" Pod="csi-node-driver-vb427" WorkloadEndpoint="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e874f212-ec82-4dc1-a7f2-b6ff94f1cb99", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"", Pod:"csi-node-driver-vb427", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9eefa4107f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:06.527842 containerd[2151]: 2025-09-12 23:55:06.336 [INFO][5441] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.199/32] ContainerID="ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" Namespace="calico-system" Pod="csi-node-driver-vb427" WorkloadEndpoint="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" Sep 12 23:55:06.527842 containerd[2151]: 2025-09-12 23:55:06.336 [INFO][5441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9eefa4107f9 ContainerID="ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" Namespace="calico-system" Pod="csi-node-driver-vb427" WorkloadEndpoint="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" Sep 12 23:55:06.527842 containerd[2151]: 2025-09-12 23:55:06.381 [INFO][5441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" Namespace="calico-system" Pod="csi-node-driver-vb427" WorkloadEndpoint="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" Sep 12 23:55:06.527842 containerd[2151]: 2025-09-12 23:55:06.410 [INFO][5441] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" 
Namespace="calico-system" Pod="csi-node-driver-vb427" WorkloadEndpoint="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e874f212-ec82-4dc1-a7f2-b6ff94f1cb99", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47", Pod:"csi-node-driver-vb427", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9eefa4107f9", MAC:"ba:2b:21:77:c6:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:06.527842 containerd[2151]: 2025-09-12 23:55:06.479 [INFO][5441] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47" Namespace="calico-system" Pod="csi-node-driver-vb427" WorkloadEndpoint="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" Sep 12 23:55:06.556447 containerd[2151]: time="2025-09-12T23:55:06.552045938Z" level=info msg="StartContainer for \"2ac7f31b84ce56a9b7083aa5bdc8af7c3e820887e718f71457c63424335d2ce9\" returns successfully" Sep 12 23:55:06.696793 containerd[2151]: time="2025-09-12T23:55:06.696689523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c48bb7547-2nt2f,Uid:077d1d76-d7b8-4b1c-bc6e-9119a67ba30b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168\"" Sep 12 23:55:06.973968 containerd[2151]: time="2025-09-12T23:55:06.971939597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:06.973968 containerd[2151]: time="2025-09-12T23:55:06.972039629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:06.973968 containerd[2151]: time="2025-09-12T23:55:06.972086441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:06.973968 containerd[2151]: time="2025-09-12T23:55:06.972250997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:07.047246 systemd-networkd[1694]: cali23946670777: Gained IPv6LL Sep 12 23:55:07.051502 systemd-networkd[1694]: calieb598a1cdb0: Link UP Sep 12 23:55:07.065922 systemd-networkd[1694]: calieb598a1cdb0: Gained carrier Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.116 [INFO][5577] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0 calico-apiserver-5c48bb7547- calico-apiserver 9f5b3f0c-b02e-481f-a083-c8af4d9dc294 1025 0 2025-09-12 23:54:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c48bb7547 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-203 calico-apiserver-5c48bb7547-pbxdf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieb598a1cdb0 [] [] }} ContainerID="e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" Namespace="calico-apiserver" Pod="calico-apiserver-5c48bb7547-pbxdf" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-" Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.130 [INFO][5577] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" Namespace="calico-apiserver" Pod="calico-apiserver-5c48bb7547-pbxdf" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.758 [INFO][5663] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" HandleID="k8s-pod-network.e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.759 [INFO][5663] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" HandleID="k8s-pod-network.e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000120320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-203", "pod":"calico-apiserver-5c48bb7547-pbxdf", "timestamp":"2025-09-12 23:55:06.758687607 +0000 UTC"}, Hostname:"ip-172-31-18-203", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.759 [INFO][5663] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.760 [INFO][5663] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.760 [INFO][5663] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-203' Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.796 [INFO][5663] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" host="ip-172-31-18-203" Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.837 [INFO][5663] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-203" Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.882 [INFO][5663] ipam/ipam.go 511: Trying affinity for 192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.891 [INFO][5663] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.902 [INFO][5663] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.192/26 host="ip-172-31-18-203" Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.902 [INFO][5663] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.50.192/26 handle="k8s-pod-network.e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" host="ip-172-31-18-203" Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.909 [INFO][5663] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.931 [INFO][5663] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.50.192/26 handle="k8s-pod-network.e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" host="ip-172-31-18-203" Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.960 [INFO][5663] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.50.200/26] block=192.168.50.192/26 handle="k8s-pod-network.e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" host="ip-172-31-18-203" Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.960 [INFO][5663] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.200/26] handle="k8s-pod-network.e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" host="ip-172-31-18-203" Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.960 [INFO][5663] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
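[Annotation] The IPAM trace above follows one fixed order: acquire the host-wide lock, confirm the block affinity for 192.168.50.192/26 on ip-172-31-18-203, claim the next free address (192.168.50.200 here), write the block back to the datastore, release the lock. A minimal Go sketch of just the claim step, against a hypothetical in-memory allocated set rather than Calico's real datastore-backed block:

```go
package main

import (
	"fmt"
	"net/netip"
)

// claimNext returns the first address in block not yet allocated, mirroring
// the "Attempting to assign 1 addresses from block" step in the log. The
// allocated map is a stand-in for the datastore-backed block bitmap.
func claimNext(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, error) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			allocated[a] = true // persisted by "Writing block in order to claim IPs"
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", block)
}

func main() {
	block := netip.MustParsePrefix("192.168.50.192/26")
	allocated := map[netip.Addr]bool{}
	// Pretend .192-.199 are taken, as the log shows .200 being claimed next.
	stop := netip.MustParseAddr("192.168.50.200")
	for a := block.Addr(); a.Less(stop); a = a.Next() {
		allocated[a] = true
	}
	ip, _ := claimNext(block, allocated)
	fmt.Println("claimed:", ip) // claimed: 192.168.50.200
}
```

Calico's default IPAM block size is /26, which is why every claim above lands inside the per-host affine block 192.168.50.192/26.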
Sep 12 23:55:07.164437 containerd[2151]: 2025-09-12 23:55:06.960 [INFO][5663] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.200/26] IPv6=[] ContainerID="e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" HandleID="k8s-pod-network.e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" Sep 12 23:55:07.171314 containerd[2151]: 2025-09-12 23:55:06.982 [INFO][5577] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" Namespace="calico-apiserver" Pod="calico-apiserver-5c48bb7547-pbxdf" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0", GenerateName:"calico-apiserver-5c48bb7547-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f5b3f0c-b02e-481f-a083-c8af4d9dc294", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c48bb7547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"", Pod:"calico-apiserver-5c48bb7547-pbxdf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb598a1cdb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:07.171314 containerd[2151]: 2025-09-12 23:55:06.987 [INFO][5577] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.200/32] ContainerID="e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" Namespace="calico-apiserver" Pod="calico-apiserver-5c48bb7547-pbxdf" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" Sep 12 23:55:07.171314 containerd[2151]: 2025-09-12 23:55:06.993 [INFO][5577] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb598a1cdb0 ContainerID="e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" Namespace="calico-apiserver" Pod="calico-apiserver-5c48bb7547-pbxdf" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" Sep 12 23:55:07.171314 containerd[2151]: 2025-09-12 23:55:07.077 [INFO][5577] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" Namespace="calico-apiserver" Pod="calico-apiserver-5c48bb7547-pbxdf" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" Sep 12 23:55:07.171314 containerd[2151]: 2025-09-12 23:55:07.090 [INFO][5577] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" Namespace="calico-apiserver" Pod="calico-apiserver-5c48bb7547-pbxdf" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0", GenerateName:"calico-apiserver-5c48bb7547-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f5b3f0c-b02e-481f-a083-c8af4d9dc294", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c48bb7547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b", Pod:"calico-apiserver-5c48bb7547-pbxdf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb598a1cdb0", MAC:"0e:9b:6d:4b:f1:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:07.171314 containerd[2151]: 2025-09-12 23:55:07.130 [INFO][5577] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b" Namespace="calico-apiserver" Pod="calico-apiserver-5c48bb7547-pbxdf" WorkloadEndpoint="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" Sep 12 23:55:07.305365 systemd[1]: run-containerd-runc-k8s.io-ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47-runc.otLxXZ.mount: Deactivated successfully. 
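[Annotation] Each "Wrote updated endpoint to datastore" entry above embeds a full v3.WorkloadEndpoint dump, so the pod-to-network mapping is already in the journal. Nothing below is a Calico API — just throwaway regexes over one (truncated) dump line to pull out Pod, IP, interface, and MAC:

```go
package main

import (
	"fmt"
	"regexp"
)

// A truncated sample of one endpoint dump from the journal above ("..." elides fields).
var line = `Pod:"csi-node-driver-vb427", ... IPNetworks:[]string{"192.168.50.199/32"}, ... InterfaceName:"cali9eefa4107f9", MAC:"ba:2b:21:77:c6:3c"`

var (
	podRe   = regexp.MustCompile(`Pod:"([^"]+)"`)
	ipRe    = regexp.MustCompile(`IPNetworks:\[\]string\{"([^"]+)"`)
	ifaceRe = regexp.MustCompile(`InterfaceName:"([^"]+)"`)
	macRe   = regexp.MustCompile(`MAC:"([^"]*)"`)
)

func main() {
	get := func(re *regexp.Regexp) string {
		if m := re.FindStringSubmatch(line); m != nil {
			return m[1]
		}
		return ""
	}
	// pod=csi-node-driver-vb427 ip=192.168.50.199/32 iface=cali9eefa4107f9 mac=ba:2b:21:77:c6:3c
	fmt.Printf("pod=%s ip=%s iface=%s mac=%s\n", get(podRe), get(ipRe), get(ifaceRe), get(macRe))
}
```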
Sep 12 23:55:07.390873 kubelet[3594]: I0912 23:55:07.387909 3594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-j88mc" podStartSLOduration=57.387731439 podStartE2EDuration="57.387731439s" podCreationTimestamp="2025-09-12 23:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:55:07.377454543 +0000 UTC m=+61.456257403" watchObservedRunningTime="2025-09-12 23:55:07.387731439 +0000 UTC m=+61.466534791" Sep 12 23:55:07.516303 containerd[2151]: time="2025-09-12T23:55:07.515425971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-h78v2,Uid:3fae242f-71cb-4cc8-a7fa-b06a5787570e,Namespace:kube-system,Attempt:1,} returns sandbox id \"e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649\"" Sep 12 23:55:07.529816 containerd[2151]: time="2025-09-12T23:55:07.528923967Z" level=info msg="CreateContainer within sandbox \"e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:55:07.540771 containerd[2151]: time="2025-09-12T23:55:07.539120379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vb427,Uid:e874f212-ec82-4dc1-a7f2-b6ff94f1cb99,Namespace:calico-system,Attempt:1,} returns sandbox id \"ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47\"" Sep 12 23:55:07.563805 containerd[2151]: time="2025-09-12T23:55:07.561667287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 23:55:07.563805 containerd[2151]: time="2025-09-12T23:55:07.561775947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 23:55:07.563805 containerd[2151]: time="2025-09-12T23:55:07.561814623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:07.563805 containerd[2151]: time="2025-09-12T23:55:07.562043667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 23:55:07.642757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount680872311.mount: Deactivated successfully. 
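[Annotation] The pod_startup_latency_tracker entry above is arithmetically self-consistent: podStartE2EDuration="57.387731439s" is exactly watchObservedRunningTime minus podCreationTimestamp, and both pull timestamps are the zero value, so no image-pull window is involved for this coredns pod. A quick Go check using only the timestamps quoted in the entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-09-12 23:54:10 +0000 UTC")
	observed, _ := time.Parse(layout, "2025-09-12 23:55:07.387731439 +0000 UTC")
	// Matches podStartE2EDuration="57.387731439s" in the kubelet entry above.
	fmt.Println(observed.Sub(created)) // 57.387731439s
}
```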
Sep 12 23:55:07.651039 containerd[2151]: time="2025-09-12T23:55:07.650530600Z" level=info msg="CreateContainer within sandbox \"e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8b8000a811b66400567d9f2f41476fe4daec6cea6f5c93eedd6e4fa2bca21000\"" Sep 12 23:55:07.654682 containerd[2151]: time="2025-09-12T23:55:07.652618324Z" level=info msg="StartContainer for \"8b8000a811b66400567d9f2f41476fe4daec6cea6f5c93eedd6e4fa2bca21000\"" Sep 12 23:55:07.793587 containerd[2151]: 2025-09-12 23:55:07.254 [WARNING][5685] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" WorkloadEndpoint="ip--172--31--18--203-k8s-whisker--78c4b4c45--vpm9g-eth0" Sep 12 23:55:07.793587 containerd[2151]: 2025-09-12 23:55:07.254 [INFO][5685] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Sep 12 23:55:07.793587 containerd[2151]: 2025-09-12 23:55:07.254 [INFO][5685] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" iface="eth0" netns="" Sep 12 23:55:07.793587 containerd[2151]: 2025-09-12 23:55:07.254 [INFO][5685] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Sep 12 23:55:07.793587 containerd[2151]: 2025-09-12 23:55:07.254 [INFO][5685] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Sep 12 23:55:07.793587 containerd[2151]: 2025-09-12 23:55:07.706 [INFO][5797] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" HandleID="k8s-pod-network.f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Workload="ip--172--31--18--203-k8s-whisker--78c4b4c45--vpm9g-eth0" Sep 12 23:55:07.793587 containerd[2151]: 2025-09-12 23:55:07.706 [INFO][5797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:07.793587 containerd[2151]: 2025-09-12 23:55:07.706 [INFO][5797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:07.793587 containerd[2151]: 2025-09-12 23:55:07.730 [WARNING][5797] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" HandleID="k8s-pod-network.f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Workload="ip--172--31--18--203-k8s-whisker--78c4b4c45--vpm9g-eth0" Sep 12 23:55:07.793587 containerd[2151]: 2025-09-12 23:55:07.732 [INFO][5797] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" HandleID="k8s-pod-network.f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Workload="ip--172--31--18--203-k8s-whisker--78c4b4c45--vpm9g-eth0" Sep 12 23:55:07.793587 containerd[2151]: 2025-09-12 23:55:07.737 [INFO][5797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:07.793587 containerd[2151]: 2025-09-12 23:55:07.770 [INFO][5685] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Sep 12 23:55:07.803941 containerd[2151]: time="2025-09-12T23:55:07.793659713Z" level=info msg="TearDown network for sandbox \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\" successfully" Sep 12 23:55:07.803941 containerd[2151]: time="2025-09-12T23:55:07.793706285Z" level=info msg="StopPodSandbox for \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\" returns successfully" Sep 12 23:55:07.803941 containerd[2151]: time="2025-09-12T23:55:07.803458301Z" level=info msg="RemovePodSandbox for \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\"" Sep 12 23:55:07.803941 containerd[2151]: time="2025-09-12T23:55:07.803536169Z" level=info msg="Forcibly stopping sandbox \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\"" Sep 12 23:55:07.888787 containerd[2151]: time="2025-09-12T23:55:07.888059753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:07.893376 containerd[2151]: time="2025-09-12T23:55:07.893301245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 12 23:55:07.909969 containerd[2151]: time="2025-09-12T23:55:07.909879869Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:07.929202 systemd-networkd[1694]: cali5f0a31a1ea0: Gained IPv6LL Sep 12 23:55:07.943676 containerd[2151]: time="2025-09-12T23:55:07.942496961Z" level=info msg="StartContainer for \"8b8000a811b66400567d9f2f41476fe4daec6cea6f5c93eedd6e4fa2bca21000\" returns successfully" Sep 12 23:55:07.976029 containerd[2151]: time="2025-09-12T23:55:07.970491869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:07.976029 containerd[2151]: time="2025-09-12T23:55:07.974406678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c48bb7547-pbxdf,Uid:9f5b3f0c-b02e-481f-a083-c8af4d9dc294,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b\"" Sep 12 23:55:07.980700 containerd[2151]: time="2025-09-12T23:55:07.980056446Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 4.768061052s" Sep 12 23:55:07.980700 containerd[2151]: time="2025-09-12T23:55:07.980147022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 23:55:07.995087 containerd[2151]: time="2025-09-12T23:55:07.995005686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 23:55:07.999765 containerd[2151]: time="2025-09-12T23:55:07.999537030Z" level=info msg="CreateContainer within sandbox \"c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" 
Sep 12 23:55:08.073122 containerd[2151]: time="2025-09-12T23:55:08.072965702Z" level=info msg="CreateContainer within sandbox \"c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d91f4f725b50221640a7946a282891e9a243059ecefc1052a6ae8432188dda0f\"" Sep 12 23:55:08.076753 containerd[2151]: time="2025-09-12T23:55:08.076444634Z" level=info msg="StartContainer for \"d91f4f725b50221640a7946a282891e9a243059ecefc1052a6ae8432188dda0f\"" Sep 12 23:55:08.121980 systemd-networkd[1694]: cali9eefa4107f9: Gained IPv6LL Sep 12 23:55:08.340882 containerd[2151]: 2025-09-12 23:55:08.163 [WARNING][5894] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" WorkloadEndpoint="ip--172--31--18--203-k8s-whisker--78c4b4c45--vpm9g-eth0" Sep 12 23:55:08.340882 containerd[2151]: 2025-09-12 23:55:08.163 [INFO][5894] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Sep 12 23:55:08.340882 containerd[2151]: 2025-09-12 23:55:08.163 [INFO][5894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" iface="eth0" netns="" Sep 12 23:55:08.340882 containerd[2151]: 2025-09-12 23:55:08.163 [INFO][5894] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Sep 12 23:55:08.340882 containerd[2151]: 2025-09-12 23:55:08.163 [INFO][5894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Sep 12 23:55:08.340882 containerd[2151]: 2025-09-12 23:55:08.274 [INFO][5935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" HandleID="k8s-pod-network.f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Workload="ip--172--31--18--203-k8s-whisker--78c4b4c45--vpm9g-eth0" Sep 12 23:55:08.340882 containerd[2151]: 2025-09-12 23:55:08.276 [INFO][5935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:08.340882 containerd[2151]: 2025-09-12 23:55:08.277 [INFO][5935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:08.340882 containerd[2151]: 2025-09-12 23:55:08.318 [WARNING][5935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" HandleID="k8s-pod-network.f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Workload="ip--172--31--18--203-k8s-whisker--78c4b4c45--vpm9g-eth0" Sep 12 23:55:08.340882 containerd[2151]: 2025-09-12 23:55:08.318 [INFO][5935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" HandleID="k8s-pod-network.f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Workload="ip--172--31--18--203-k8s-whisker--78c4b4c45--vpm9g-eth0" Sep 12 23:55:08.340882 containerd[2151]: 2025-09-12 23:55:08.327 [INFO][5935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:08.340882 containerd[2151]: 2025-09-12 23:55:08.332 [INFO][5894] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00" Sep 12 23:55:08.345901 containerd[2151]: time="2025-09-12T23:55:08.340958019Z" level=info msg="TearDown network for sandbox \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\" successfully" Sep 12 23:55:08.358385 containerd[2151]: time="2025-09-12T23:55:08.358302987Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:55:08.358560 containerd[2151]: time="2025-09-12T23:55:08.358424259Z" level=info msg="RemovePodSandbox \"f83444b828b9877049d890783dde780accf1a99c2a75369d2dd00a28656e0c00\" returns successfully" Sep 12 23:55:08.361043 containerd[2151]: time="2025-09-12T23:55:08.360958131Z" level=info msg="StopPodSandbox for \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\"" Sep 12 23:55:08.538297 kubelet[3594]: I0912 23:55:08.538165 3594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-h78v2" podStartSLOduration=58.538137988 podStartE2EDuration="58.538137988s" podCreationTimestamp="2025-09-12 23:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:55:08.489983668 +0000 UTC m=+62.568786540" watchObservedRunningTime="2025-09-12 23:55:08.538137988 +0000 UTC m=+62.616940836" Sep 12 23:55:08.681848 containerd[2151]: time="2025-09-12T23:55:08.680861105Z" level=info msg="StartContainer for \"d91f4f725b50221640a7946a282891e9a243059ecefc1052a6ae8432188dda0f\" returns successfully" Sep 12 23:55:08.732742 containerd[2151]: 2025-09-12 23:55:08.629 [WARNING][5954] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9eeb1078-74ba-4b83-8069-cea1b65e8744", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6", Pod:"coredns-7c65d6cfc9-j88mc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81adfd4b05f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:08.732742 containerd[2151]: 2025-09-12 23:55:08.629 [INFO][5954] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Sep 12 23:55:08.732742 containerd[2151]: 2025-09-12 23:55:08.629 [INFO][5954] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" iface="eth0" netns="" Sep 12 23:55:08.732742 containerd[2151]: 2025-09-12 23:55:08.629 [INFO][5954] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Sep 12 23:55:08.732742 containerd[2151]: 2025-09-12 23:55:08.629 [INFO][5954] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Sep 12 23:55:08.732742 containerd[2151]: 2025-09-12 23:55:08.706 [INFO][5968] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" HandleID="k8s-pod-network.c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" Sep 12 23:55:08.732742 containerd[2151]: 2025-09-12 23:55:08.707 [INFO][5968] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:08.732742 containerd[2151]: 2025-09-12 23:55:08.707 [INFO][5968] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:55:08.732742 containerd[2151]: 2025-09-12 23:55:08.722 [WARNING][5968] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" HandleID="k8s-pod-network.c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" Sep 12 23:55:08.732742 containerd[2151]: 2025-09-12 23:55:08.722 [INFO][5968] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" HandleID="k8s-pod-network.c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" Sep 12 23:55:08.732742 containerd[2151]: 2025-09-12 23:55:08.725 [INFO][5968] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:08.732742 containerd[2151]: 2025-09-12 23:55:08.728 [INFO][5954] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Sep 12 23:55:08.734840 containerd[2151]: time="2025-09-12T23:55:08.732774425Z" level=info msg="TearDown network for sandbox \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\" successfully" Sep 12 23:55:08.734840 containerd[2151]: time="2025-09-12T23:55:08.732816017Z" level=info msg="StopPodSandbox for \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\" returns successfully" Sep 12 23:55:08.735905 containerd[2151]: time="2025-09-12T23:55:08.735406589Z" level=info msg="RemovePodSandbox for \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\"" Sep 12 23:55:08.735905 containerd[2151]: time="2025-09-12T23:55:08.735531869Z" level=info msg="Forcibly stopping sandbox \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\"" Sep 12 23:55:08.898256 containerd[2151]: 2025-09-12 23:55:08.816 [WARNING][5992] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9eeb1078-74ba-4b83-8069-cea1b65e8744", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"5789dc517b88b5911558648be43271564a1c88143583916cc951afb355f09ba6", Pod:"coredns-7c65d6cfc9-j88mc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81adfd4b05f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:08.898256 containerd[2151]: 2025-09-12 23:55:08.817 [INFO][5992] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Sep 12 23:55:08.898256 containerd[2151]: 2025-09-12 23:55:08.817 [INFO][5992] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" iface="eth0" netns="" Sep 12 23:55:08.898256 containerd[2151]: 2025-09-12 23:55:08.817 [INFO][5992] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Sep 12 23:55:08.898256 containerd[2151]: 2025-09-12 23:55:08.817 [INFO][5992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Sep 12 23:55:08.898256 containerd[2151]: 2025-09-12 23:55:08.864 [INFO][6001] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" HandleID="k8s-pod-network.c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" Sep 12 23:55:08.898256 containerd[2151]: 2025-09-12 23:55:08.864 [INFO][6001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:08.898256 containerd[2151]: 2025-09-12 23:55:08.864 [INFO][6001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:55:08.898256 containerd[2151]: 2025-09-12 23:55:08.888 [WARNING][6001] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" HandleID="k8s-pod-network.c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" Sep 12 23:55:08.898256 containerd[2151]: 2025-09-12 23:55:08.888 [INFO][6001] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" HandleID="k8s-pod-network.c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0" Sep 12 23:55:08.898256 containerd[2151]: 2025-09-12 23:55:08.891 [INFO][6001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:08.898256 containerd[2151]: 2025-09-12 23:55:08.894 [INFO][5992] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb" Sep 12 23:55:08.898256 containerd[2151]: time="2025-09-12T23:55:08.897619242Z" level=info msg="TearDown network for sandbox \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\" successfully" Sep 12 23:55:08.909713 containerd[2151]: time="2025-09-12T23:55:08.909219270Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:55:08.909713 containerd[2151]: time="2025-09-12T23:55:08.909381966Z" level=info msg="RemovePodSandbox \"c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb\" returns successfully" Sep 12 23:55:08.911761 containerd[2151]: time="2025-09-12T23:55:08.910462110Z" level=info msg="StopPodSandbox for \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\"" Sep 12 23:55:08.960331 systemd[1]: Started sshd@8-172.31.18.203:22-147.75.109.163:34786.service - OpenSSH per-connection server daemon (147.75.109.163:34786). Sep 12 23:55:09.084943 systemd-networkd[1694]: calieb598a1cdb0: Gained IPv6LL Sep 12 23:55:09.098727 containerd[2151]: 2025-09-12 23:55:09.010 [WARNING][6015] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0", GenerateName:"calico-kube-controllers-5dc46b49f4-", Namespace:"calico-system", SelfLink:"", UID:"8d62a2d0-ccd7-4178-8371-f2c20fc86ca0", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dc46b49f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135", Pod:"calico-kube-controllers-5dc46b49f4-xjvcm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali06c11963e8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:09.098727 containerd[2151]: 2025-09-12 23:55:09.010 [INFO][6015] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Sep 12 23:55:09.098727 containerd[2151]: 2025-09-12 23:55:09.010 [INFO][6015] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" iface="eth0" netns="" Sep 12 23:55:09.098727 containerd[2151]: 2025-09-12 23:55:09.010 [INFO][6015] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Sep 12 23:55:09.098727 containerd[2151]: 2025-09-12 23:55:09.010 [INFO][6015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Sep 12 23:55:09.098727 containerd[2151]: 2025-09-12 23:55:09.056 [INFO][6023] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" HandleID="k8s-pod-network.99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Workload="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" Sep 12 23:55:09.098727 containerd[2151]: 2025-09-12 23:55:09.056 [INFO][6023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:09.098727 containerd[2151]: 2025-09-12 23:55:09.056 [INFO][6023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:09.098727 containerd[2151]: 2025-09-12 23:55:09.088 [WARNING][6023] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" HandleID="k8s-pod-network.99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Workload="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" Sep 12 23:55:09.098727 containerd[2151]: 2025-09-12 23:55:09.088 [INFO][6023] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" HandleID="k8s-pod-network.99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Workload="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" Sep 12 23:55:09.098727 containerd[2151]: 2025-09-12 23:55:09.092 [INFO][6023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:09.098727 containerd[2151]: 2025-09-12 23:55:09.095 [INFO][6015] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Sep 12 23:55:09.100857 containerd[2151]: time="2025-09-12T23:55:09.098779851Z" level=info msg="TearDown network for sandbox \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\" successfully" Sep 12 23:55:09.100857 containerd[2151]: time="2025-09-12T23:55:09.098821191Z" level=info msg="StopPodSandbox for \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\" returns successfully" Sep 12 23:55:09.100857 containerd[2151]: time="2025-09-12T23:55:09.099597615Z" level=info msg="RemovePodSandbox for \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\"" Sep 12 23:55:09.100857 containerd[2151]: time="2025-09-12T23:55:09.100134687Z" level=info msg="Forcibly stopping sandbox \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\"" Sep 12 23:55:09.185175 sshd[6019]: Accepted publickey for core from 147.75.109.163 port 34786 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:09.190396 sshd[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:09.212509 systemd-logind[2118]: New session 9 of user core. Sep 12 23:55:09.221279 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 23:55:09.301713 containerd[2151]: 2025-09-12 23:55:09.184 [WARNING][6038] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0", GenerateName:"calico-kube-controllers-5dc46b49f4-", Namespace:"calico-system", SelfLink:"", UID:"8d62a2d0-ccd7-4178-8371-f2c20fc86ca0", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5dc46b49f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135", Pod:"calico-kube-controllers-5dc46b49f4-xjvcm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali06c11963e8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:55:09.301713 containerd[2151]: 2025-09-12 23:55:09.184 [INFO][6038] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Sep 12 23:55:09.301713 containerd[2151]: 2025-09-12 23:55:09.185 [INFO][6038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" iface="eth0" netns="" Sep 12 23:55:09.301713 containerd[2151]: 2025-09-12 23:55:09.185 [INFO][6038] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Sep 12 23:55:09.301713 containerd[2151]: 2025-09-12 23:55:09.185 [INFO][6038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Sep 12 23:55:09.301713 containerd[2151]: 2025-09-12 23:55:09.275 [INFO][6045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" HandleID="k8s-pod-network.99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Workload="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" Sep 12 23:55:09.301713 containerd[2151]: 2025-09-12 23:55:09.276 [INFO][6045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:55:09.301713 containerd[2151]: 2025-09-12 23:55:09.276 [INFO][6045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:55:09.301713 containerd[2151]: 2025-09-12 23:55:09.290 [WARNING][6045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" HandleID="k8s-pod-network.99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Workload="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" Sep 12 23:55:09.301713 containerd[2151]: 2025-09-12 23:55:09.290 [INFO][6045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" HandleID="k8s-pod-network.99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Workload="ip--172--31--18--203-k8s-calico--kube--controllers--5dc46b49f4--xjvcm-eth0" Sep 12 23:55:09.301713 containerd[2151]: 2025-09-12 23:55:09.293 [INFO][6045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:55:09.301713 containerd[2151]: 2025-09-12 23:55:09.297 [INFO][6038] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c" Sep 12 23:55:09.301713 containerd[2151]: time="2025-09-12T23:55:09.301017040Z" level=info msg="TearDown network for sandbox \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\" successfully" Sep 12 23:55:09.309867 containerd[2151]: time="2025-09-12T23:55:09.309576112Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:55:09.310048 containerd[2151]: time="2025-09-12T23:55:09.309919312Z" level=info msg="RemovePodSandbox \"99b1651e7b49e679b7891d30c59c0d4511358118e33c24266723f280e9474b5c\" returns successfully" Sep 12 23:55:09.586293 sshd[6019]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:09.602988 systemd[1]: sshd@8-172.31.18.203:22-147.75.109.163:34786.service: Deactivated successfully. Sep 12 23:55:09.611500 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 23:55:09.621842 systemd-logind[2118]: Session 9 logged out. Waiting for processes to exit. Sep 12 23:55:09.630544 systemd-logind[2118]: Removed session 9. 
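[Annotation] All of the StopPodSandbox/RemovePodSandbox sequences above share one shape: leave the WorkloadEndpoint alone when CNI_CONTAINERID does not match, then release the IP by handle ID and, when the handle is already gone ("Asked to release address but it doesn't exist. Ignoring"), retry by workload ID, so a repeated or forced teardown still succeeds. A Go sketch of that idempotent release order; releaseByHandle and releaseByWorkload are made-up stand-ins, not Calico's API:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// Stand-ins for the two datastore lookups tried in sequence above.
func releaseByHandle(handleID string) error     { return errNotFound }
func releaseByWorkload(workloadID string) error { return nil }

// releaseIP mirrors the teardown order in the log: handle ID first, then
// workload ID, treating "not found" as success so teardown stays idempotent.
func releaseIP(handleID, workloadID string) error {
	err := releaseByHandle(handleID)
	if err == nil {
		return nil
	}
	if !errors.Is(err, errNotFound) {
		return err
	}
	// Handle already gone: the WARNING above; fall back to the workload ID.
	if err := releaseByWorkload(workloadID); err != nil && !errors.Is(err, errNotFound) {
		return err
	}
	return nil
}

func main() {
	fmt.Println(releaseIP(
		"k8s-pod-network.c06401fc912e7c1dea7ba61ebbe27b75d9b357b655992a90f95636469eac60cb",
		"ip--172--31--18--203-k8s-coredns--7c65d6cfc9--j88mc-eth0",
	)) // <nil>
}
```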
Sep 12 23:55:11.541975 containerd[2151]: time="2025-09-12T23:55:11.541864099Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:11.544474 containerd[2151]: time="2025-09-12T23:55:11.544070611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 12 23:55:11.548697 containerd[2151]: time="2025-09-12T23:55:11.548192395Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:11.558515 containerd[2151]: time="2025-09-12T23:55:11.558445363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:11.560742 containerd[2151]: time="2025-09-12T23:55:11.560604031Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 3.565504553s" Sep 12 23:55:11.561293 containerd[2151]: time="2025-09-12T23:55:11.560971483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 12 23:55:11.566159 containerd[2151]: time="2025-09-12T23:55:11.566056183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 23:55:11.614062 containerd[2151]: time="2025-09-12T23:55:11.613760012Z" level=info msg="CreateContainer within sandbox \"9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 23:55:11.636741 containerd[2151]: time="2025-09-12T23:55:11.636576008Z" level=info msg="CreateContainer within sandbox \"9ac03a4b21b5cc80a771985e40acdcf014df4bdbe2465b3dc0252a80c0796135\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ceb8b328450b4344c3a8e17f2b52a244348a71ab7705f45817366479ab445d61\"" Sep 12 23:55:11.644741 containerd[2151]: time="2025-09-12T23:55:11.644622884Z" level=info msg="StartContainer for \"ceb8b328450b4344c3a8e17f2b52a244348a71ab7705f45817366479ab445d61\"" Sep 12 23:55:11.814667 containerd[2151]: time="2025-09-12T23:55:11.814433169Z" level=info msg="StartContainer for \"ceb8b328450b4344c3a8e17f2b52a244348a71ab7705f45817366479ab445d61\" returns successfully"
Sep 12 23:55:12.020126 ntpd[2101]: Listen normally on 6 vxlan.calico 192.168.50.192:123 Sep 12 23:55:12.020269 ntpd[2101]: Listen normally on 7 calic8481f5e80c [fe80::ecee:eeff:feee:eeee%4]:123 Sep 12 23:55:12.020362 ntpd[2101]: Listen normally on 8 cali06c11963e8d [fe80::ecee:eeff:feee:eeee%5]:123 Sep 12 23:55:12.020439 ntpd[2101]: Listen normally on 9 cali81adfd4b05f [fe80::ecee:eeff:feee:eeee%6]:123 Sep 12 23:55:12.020522 ntpd[2101]: Listen normally on 10 vxlan.calico [fe80::648f:7aff:fe18:6458%7]:123 Sep 12 23:55:12.020606 ntpd[2101]: Listen normally on 11 cali23946670777 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 12 23:55:12.020720 ntpd[2101]: Listen normally on 12 cali95460f8f173 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 12 23:55:12.020804 ntpd[2101]: Listen normally on 13 cali5f0a31a1ea0 [fe80::ecee:eeff:feee:eeee%12]:123 Sep 12 23:55:12.020883 ntpd[2101]: Listen normally on 14 cali9eefa4107f9 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 12 23:55:12.020978 ntpd[2101]: Listen normally on 15 calieb598a1cdb0 [fe80::ecee:eeff:feee:eeee%14]:123
Sep 12 23:55:13.670889 kubelet[3594]: I0912 23:55:13.669485 3594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5dc46b49f4-xjvcm" podStartSLOduration=31.451629026 podStartE2EDuration="38.66945727s" podCreationTimestamp="2025-09-12 23:54:35 +0000 UTC" firstStartedPulling="2025-09-12 23:55:04.347069531 +0000 UTC m=+58.425872379" lastFinishedPulling="2025-09-12 23:55:11.564897775 +0000 UTC m=+65.643700623" observedRunningTime="2025-09-12 23:55:12.531729164 +0000 UTC m=+66.610532060" watchObservedRunningTime="2025-09-12 23:55:13.66945727 +0000 UTC m=+67.748260142" Sep 12 23:55:14.631245 systemd[1]: Started sshd@9-172.31.18.203:22-147.75.109.163:41576.service - OpenSSH per-connection server daemon (147.75.109.163:41576). Sep 12 23:55:14.849700 sshd[6180]: Accepted publickey for core from 147.75.109.163 port 41576 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:14.859175 sshd[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:14.880874 systemd-logind[2118]: New session 10 of user core. Sep 12 23:55:14.893940 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 23:55:15.006472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212643900.mount: Deactivated successfully. Sep 12 23:55:15.309480 sshd[6180]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:15.323612 systemd-logind[2118]: Session 10 logged out. Waiting for processes to exit. Sep 12 23:55:15.324283 systemd[1]: sshd@9-172.31.18.203:22-147.75.109.163:41576.service: Deactivated successfully. Sep 12 23:55:15.335932 systemd[1]: session-10.scope: Deactivated successfully.
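[Annotation] The ntpd burst above (sockets 6 through 15) shows a new port-123 listener opening as each cali* veth gains a link-local address. A throwaway Go filter for those journal lines, fed two sample lines copied from above:

```go
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// Captures socket number, interface, and address from "Listen normally on" lines.
var listenRe = regexp.MustCompile(`Listen normally on (\d+) (\S+) \[?([^\]\s]+)\]?:123`)

func main() {
	journal := `Sep 12 23:55:12.020126 ntpd[2101]: Listen normally on 6 vxlan.calico 192.168.50.192:123
Sep 12 23:55:12.020269 ntpd[2101]: Listen normally on 7 calic8481f5e80c [fe80::ecee:eeff:feee:eeee%4]:123`
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		if m := listenRe.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("socket %s iface %s addr %s\n", m[1], m[2], m[3])
		}
	}
}
```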
Sep 12 23:55:15.378841 systemd[1]: Started sshd@10-172.31.18.203:22-147.75.109.163:41580.service - OpenSSH per-connection server daemon (147.75.109.163:41580). Sep 12 23:55:15.381452 systemd-logind[2118]: Removed session 10. Sep 12 23:55:15.608004 sshd[6204]: Accepted publickey for core from 147.75.109.163 port 41580 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:15.613204 sshd[6204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:15.633856 systemd-logind[2118]: New session 11 of user core. Sep 12 23:55:15.639013 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 23:55:16.129846 sshd[6204]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:16.154098 systemd[1]: sshd@10-172.31.18.203:22-147.75.109.163:41580.service: Deactivated successfully. Sep 12 23:55:16.171270 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 23:55:16.191848 systemd-logind[2118]: Session 11 logged out. Waiting for processes to exit. Sep 12 23:55:16.204316 systemd[1]: Started sshd@11-172.31.18.203:22-147.75.109.163:41590.service - OpenSSH per-connection server daemon (147.75.109.163:41590). Sep 12 23:55:16.213121 systemd-logind[2118]: Removed session 11. Sep 12 23:55:16.440465 sshd[6217]: Accepted publickey for core from 147.75.109.163 port 41590 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:16.444558 sshd[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:16.464186 systemd-logind[2118]: New session 12 of user core. Sep 12 23:55:16.470040 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 23:55:16.629660 containerd[2151]: time="2025-09-12T23:55:16.626953860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:16.630355 containerd[2151]: time="2025-09-12T23:55:16.629678629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 12 23:55:16.632752 containerd[2151]: time="2025-09-12T23:55:16.632673181Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:16.642159 containerd[2151]: time="2025-09-12T23:55:16.642078385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:16.645511 containerd[2151]: time="2025-09-12T23:55:16.645435073Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 5.07930917s" Sep 12 23:55:16.645511 containerd[2151]: time="2025-09-12T23:55:16.645507229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 12 23:55:16.649658 containerd[2151]: time="2025-09-12T23:55:16.647951425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 23:55:16.651647 containerd[2151]: 
time="2025-09-12T23:55:16.651556705Z" level=info msg="CreateContainer within sandbox \"1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 23:55:16.693679 containerd[2151]: time="2025-09-12T23:55:16.691081417Z" level=info msg="CreateContainer within sandbox \"1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fbc9289343a3da298acc84831f0a533b56238c643225725c5553713b76a4878c\"" Sep 12 23:55:16.694772 containerd[2151]: time="2025-09-12T23:55:16.694411645Z" level=info msg="StartContainer for \"fbc9289343a3da298acc84831f0a533b56238c643225725c5553713b76a4878c\"" Sep 12 23:55:16.834041 sshd[6217]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:16.854513 systemd[1]: sshd@11-172.31.18.203:22-147.75.109.163:41590.service: Deactivated successfully. Sep 12 23:55:16.864649 systemd-logind[2118]: Session 12 logged out. Waiting for processes to exit. Sep 12 23:55:16.865970 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 23:55:16.871214 systemd-logind[2118]: Removed session 12. Sep 12 23:55:16.951979 containerd[2151]: time="2025-09-12T23:55:16.950805926Z" level=info msg="StartContainer for \"fbc9289343a3da298acc84831f0a533b56238c643225725c5553713b76a4878c\" returns successfully" Sep 12 23:55:19.728713 containerd[2151]: time="2025-09-12T23:55:19.728582008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:19.732121 containerd[2151]: time="2025-09-12T23:55:19.731936704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 12 23:55:19.736061 containerd[2151]: time="2025-09-12T23:55:19.735704344Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:19.744156 containerd[2151]: time="2025-09-12T23:55:19.744080668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:19.748365 containerd[2151]: time="2025-09-12T23:55:19.748140652Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 3.100123899s" Sep 12 23:55:19.748365 containerd[2151]: time="2025-09-12T23:55:19.748222312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 23:55:19.755221 containerd[2151]: time="2025-09-12T23:55:19.754867852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 23:55:19.760323 containerd[2151]: time="2025-09-12T23:55:19.759906964Z" level=info msg="CreateContainer within sandbox \"769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 23:55:19.797042 containerd[2151]: time="2025-09-12T23:55:19.796966324Z" level=info 
msg="CreateContainer within sandbox \"769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"455a1e04998267b6e07ca22802c7b701e97c99e3d9f0c3db1c05edba06aeb879\"" Sep 12 23:55:19.805706 containerd[2151]: time="2025-09-12T23:55:19.803941612Z" level=info msg="StartContainer for \"455a1e04998267b6e07ca22802c7b701e97c99e3d9f0c3db1c05edba06aeb879\"" Sep 12 23:55:19.984798 containerd[2151]: time="2025-09-12T23:55:19.984589865Z" level=info msg="StartContainer for \"455a1e04998267b6e07ca22802c7b701e97c99e3d9f0c3db1c05edba06aeb879\" returns successfully" Sep 12 23:55:20.592748 kubelet[3594]: I0912 23:55:20.592529 3594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-bhgbj" podStartSLOduration=35.453288533 podStartE2EDuration="45.592503544s" podCreationTimestamp="2025-09-12 23:54:35 +0000 UTC" firstStartedPulling="2025-09-12 23:55:06.508074002 +0000 UTC m=+60.586876850" lastFinishedPulling="2025-09-12 23:55:16.647288989 +0000 UTC m=+70.726091861" observedRunningTime="2025-09-12 23:55:17.573493993 +0000 UTC m=+71.652296877" watchObservedRunningTime="2025-09-12 23:55:20.592503544 +0000 UTC m=+74.671306404" Sep 12 23:55:21.445408 containerd[2151]: time="2025-09-12T23:55:21.445295392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:21.448252 containerd[2151]: time="2025-09-12T23:55:21.448182148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 12 23:55:21.450661 containerd[2151]: time="2025-09-12T23:55:21.450571684Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:21.456668 containerd[2151]: time="2025-09-12T23:55:21.456585064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:21.458970 containerd[2151]: time="2025-09-12T23:55:21.458902168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.703968628s" Sep 12 23:55:21.459769 containerd[2151]: time="2025-09-12T23:55:21.458971288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 12 23:55:21.462247 containerd[2151]: time="2025-09-12T23:55:21.462079313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 23:55:21.470144 containerd[2151]: time="2025-09-12T23:55:21.469942145Z" level=info msg="CreateContainer within sandbox \"ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 23:55:21.507170 containerd[2151]: time="2025-09-12T23:55:21.507080189Z" level=info msg="CreateContainer within sandbox \"ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container 
id \"0373f455d1847ec204ddc3ff477a38309da3116a125de5ffa014eca77ffa2c95\"" Sep 12 23:55:21.508659 containerd[2151]: time="2025-09-12T23:55:21.508320353Z" level=info msg="StartContainer for \"0373f455d1847ec204ddc3ff477a38309da3116a125de5ffa014eca77ffa2c95\"" Sep 12 23:55:21.664527 containerd[2151]: time="2025-09-12T23:55:21.664218774Z" level=info msg="StartContainer for \"0373f455d1847ec204ddc3ff477a38309da3116a125de5ffa014eca77ffa2c95\" returns successfully" Sep 12 23:55:21.794956 containerd[2151]: time="2025-09-12T23:55:21.794881350Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:21.797681 containerd[2151]: time="2025-09-12T23:55:21.796834914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 23:55:21.801682 containerd[2151]: time="2025-09-12T23:55:21.801573318Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 339.428197ms" Sep 12 23:55:21.801904 containerd[2151]: time="2025-09-12T23:55:21.801664110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 23:55:21.806396 containerd[2151]: time="2025-09-12T23:55:21.804967722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 23:55:21.807549 containerd[2151]: time="2025-09-12T23:55:21.807483774Z" level=info msg="CreateContainer within sandbox \"e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 23:55:21.838801 containerd[2151]: time="2025-09-12T23:55:21.838499274Z" level=info msg="CreateContainer within sandbox \"e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a1dc431b425042c6ec0c82bf4da53023a000c9e169fb7a3e8ba06e4f3e42a7ba\"" Sep 12 23:55:21.842456 containerd[2151]: time="2025-09-12T23:55:21.842194962Z" level=info msg="StartContainer for \"a1dc431b425042c6ec0c82bf4da53023a000c9e169fb7a3e8ba06e4f3e42a7ba\"" Sep 12 23:55:21.867162 systemd[1]: Started sshd@12-172.31.18.203:22-147.75.109.163:51662.service - OpenSSH per-connection server daemon (147.75.109.163:51662). Sep 12 23:55:22.053480 containerd[2151]: time="2025-09-12T23:55:22.053177607Z" level=info msg="StartContainer for \"a1dc431b425042c6ec0c82bf4da53023a000c9e169fb7a3e8ba06e4f3e42a7ba\" returns successfully" Sep 12 23:55:22.121216 sshd[6468]: Accepted publickey for core from 147.75.109.163 port 51662 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:22.127509 sshd[6468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:22.153021 systemd-logind[2118]: New session 13 of user core. Sep 12 23:55:22.163758 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 12 23:55:22.635995 kubelet[3594]: I0912 23:55:22.635719 3594 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:55:22.671938 kubelet[3594]: I0912 23:55:22.666853 3594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c48bb7547-pbxdf" podStartSLOduration=44.843388974 podStartE2EDuration="58.666823926s" podCreationTimestamp="2025-09-12 23:54:24 +0000 UTC" firstStartedPulling="2025-09-12 23:55:07.979494294 +0000 UTC m=+62.058297154" lastFinishedPulling="2025-09-12 23:55:21.802929246 +0000 UTC m=+75.881732106" observedRunningTime="2025-09-12 23:55:22.658880046 +0000 UTC m=+76.737682918" watchObservedRunningTime="2025-09-12 23:55:22.666823926 +0000 UTC m=+76.745627062" Sep 12 23:55:22.671938 kubelet[3594]: I0912 23:55:22.667789 3594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5c48bb7547-2nt2f" podStartSLOduration=45.630677693 podStartE2EDuration="58.667598406s" podCreationTimestamp="2025-09-12 23:54:24 +0000 UTC" firstStartedPulling="2025-09-12 23:55:06.717136587 +0000 UTC m=+60.795939447" lastFinishedPulling="2025-09-12 23:55:19.754057276 +0000 UTC m=+73.832860160" observedRunningTime="2025-09-12 23:55:20.590368 +0000 UTC m=+74.669170872" watchObservedRunningTime="2025-09-12 23:55:22.667598406 +0000 UTC m=+76.746401566" Sep 12 23:55:22.718421 sshd[6468]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:22.733789 systemd-logind[2118]: Session 13 logged out. Waiting for processes to exit. Sep 12 23:55:22.740342 systemd[1]: sshd@12-172.31.18.203:22-147.75.109.163:51662.service: Deactivated successfully. Sep 12 23:55:22.753536 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 23:55:22.758747 systemd-logind[2118]: Removed session 13. Sep 12 23:55:25.095880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1261549829.mount: Deactivated successfully. 
Sep 12 23:55:25.124824 containerd[2151]: time="2025-09-12T23:55:25.124756027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:25.129410 containerd[2151]: time="2025-09-12T23:55:25.129340279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 12 23:55:25.132764 containerd[2151]: time="2025-09-12T23:55:25.131476747Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:25.139221 containerd[2151]: time="2025-09-12T23:55:25.139158535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:25.140359 containerd[2151]: time="2025-09-12T23:55:25.139547647Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 3.334510049s" Sep 12 23:55:25.140613 containerd[2151]: time="2025-09-12T23:55:25.140572135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 12 23:55:25.145265 containerd[2151]: time="2025-09-12T23:55:25.145197199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 23:55:25.151336 containerd[2151]: time="2025-09-12T23:55:25.151111435Z" level=info msg="CreateContainer within sandbox \"c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 23:55:25.187180 containerd[2151]: time="2025-09-12T23:55:25.187114327Z" level=info msg="CreateContainer within sandbox \"c7c2fa4625da9a26429a361fcdf303472d2cef1672c9c6a484ac84bd2fa54dc1\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"a3e3908789484ae378ac39dcb3e087b66932d9bc725f9b2e5e1aba3d7f37c136\"" Sep 12 23:55:25.191959 containerd[2151]: time="2025-09-12T23:55:25.188460247Z" level=info msg="StartContainer for \"a3e3908789484ae378ac39dcb3e087b66932d9bc725f9b2e5e1aba3d7f37c136\"" Sep 12 23:55:25.633457 containerd[2151]: time="2025-09-12T23:55:25.633346881Z" level=info msg="StartContainer for \"a3e3908789484ae378ac39dcb3e087b66932d9bc725f9b2e5e1aba3d7f37c136\" returns successfully" Sep 12 23:55:25.706776 kubelet[3594]: I0912 23:55:25.702209 3594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6d9cf74dd-xfzvz" podStartSLOduration=4.765898501 podStartE2EDuration="26.702184846s" podCreationTimestamp="2025-09-12 23:54:59 +0000 UTC" firstStartedPulling="2025-09-12 23:55:03.207223294 +0000 UTC m=+57.286026142" lastFinishedPulling="2025-09-12 23:55:25.143509639 +0000 UTC m=+79.222312487" observedRunningTime="2025-09-12 23:55:25.701352034 +0000 UTC m=+79.780154906" watchObservedRunningTime="2025-09-12 23:55:25.702184846 +0000 UTC m=+79.780987706" Sep 12 23:55:27.757655 systemd[1]: Started 
sshd@13-172.31.18.203:22-147.75.109.163:51674.service - OpenSSH per-connection server daemon (147.75.109.163:51674). Sep 12 23:55:27.793762 containerd[2151]: time="2025-09-12T23:55:27.793241520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:27.804687 containerd[2151]: time="2025-09-12T23:55:27.803127144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 12 23:55:27.817680 containerd[2151]: time="2025-09-12T23:55:27.815170476Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:27.872542 containerd[2151]: time="2025-09-12T23:55:27.869313948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:55:27.878387 containerd[2151]: time="2025-09-12T23:55:27.877133028Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 2.731426513s" Sep 12 23:55:27.878387 containerd[2151]: time="2025-09-12T23:55:27.877212588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 12 23:55:27.896774 containerd[2151]: time="2025-09-12T23:55:27.896295948Z" level=info msg="CreateContainer within sandbox \"ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 23:55:27.985989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1272541626.mount: Deactivated successfully. Sep 12 23:55:27.998069 containerd[2151]: time="2025-09-12T23:55:27.997970689Z" level=info msg="CreateContainer within sandbox \"ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2c939068c82020832530dfe54d3f0948a7db8b6b0f11f15477a3a61d182d990c\"" Sep 12 23:55:28.004904 containerd[2151]: time="2025-09-12T23:55:28.000935085Z" level=info msg="StartContainer for \"2c939068c82020832530dfe54d3f0948a7db8b6b0f11f15477a3a61d182d990c\"" Sep 12 23:55:28.033816 sshd[6600]: Accepted publickey for core from 147.75.109.163 port 51674 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:28.040873 sshd[6600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:28.070815 systemd-logind[2118]: New session 14 of user core. Sep 12 23:55:28.077519 systemd[1]: Started session-14.scope - Session 14 of User core. 
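Note: every workload in this stretch follows the same containerd sequence: PullImage returns an image reference, CreateContainer within a sandbox returns a 64-hex container id, then StartContainer for that id eventually "returns successfully". A minimal sketch pairing the create and start events by container id when scanning a saved copy of this log (Python; log_lines is a stand-in for however the captured text is read in):

    import re

    # quotes in the captured log are backslash-escaped, so \" is matched optionally
    CREATE = re.compile(r'returns container id \\?"([0-9a-f]{64})')
    START = re.compile(r'StartContainer for \\?"([0-9a-f]{64})\\?" returns successfully')

    def pair_container_events(log_lines):
        created, started = set(), set()
        for line in log_lines:
            for cid in CREATE.findall(line):
                created.add(cid)
            for cid in START.findall(line):
                started.add(cid)
        # ids created but never confirmed running are the ones worth chasing
        return created & started, created - started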
Sep 12 23:55:28.400261 containerd[2151]: time="2025-09-12T23:55:28.400023083Z" level=info msg="StartContainer for \"2c939068c82020832530dfe54d3f0948a7db8b6b0f11f15477a3a61d182d990c\" returns successfully" Sep 12 23:55:28.571954 sshd[6600]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:28.585555 systemd[1]: sshd@13-172.31.18.203:22-147.75.109.163:51674.service: Deactivated successfully. Sep 12 23:55:28.606188 systemd-logind[2118]: Session 14 logged out. Waiting for processes to exit. Sep 12 23:55:28.608109 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 23:55:28.618312 systemd-logind[2118]: Removed session 14. Sep 12 23:55:28.674321 systemd[1]: run-containerd-runc-k8s.io-2c939068c82020832530dfe54d3f0948a7db8b6b0f11f15477a3a61d182d990c-runc.QJ4tY5.mount: Deactivated successfully. Sep 12 23:55:28.738072 kubelet[3594]: I0912 23:55:28.737575 3594 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vb427" podStartSLOduration=33.395698780000004 podStartE2EDuration="53.737550409s" podCreationTimestamp="2025-09-12 23:54:35 +0000 UTC" firstStartedPulling="2025-09-12 23:55:07.543600051 +0000 UTC m=+61.622402899" lastFinishedPulling="2025-09-12 23:55:27.88545168 +0000 UTC m=+81.964254528" observedRunningTime="2025-09-12 23:55:28.736861309 +0000 UTC m=+82.815664229" watchObservedRunningTime="2025-09-12 23:55:28.737550409 +0000 UTC m=+82.816353269" Sep 12 23:55:29.466380 kubelet[3594]: I0912 23:55:29.466310 3594 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 23:55:29.466720 kubelet[3594]: I0912 23:55:29.466395 3594 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 23:55:33.610230 systemd[1]: Started sshd@14-172.31.18.203:22-147.75.109.163:47764.service - OpenSSH per-connection server daemon (147.75.109.163:47764). Sep 12 23:55:33.833216 sshd[6657]: Accepted publickey for core from 147.75.109.163 port 47764 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:33.836928 sshd[6657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:33.852073 systemd-logind[2118]: New session 15 of user core. Sep 12 23:55:33.861268 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 23:55:34.190805 sshd[6657]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:34.200716 systemd[1]: sshd@14-172.31.18.203:22-147.75.109.163:47764.service: Deactivated successfully. Sep 12 23:55:34.212220 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 23:55:34.218079 systemd-logind[2118]: Session 15 logged out. Waiting for processes to exit. Sep 12 23:55:34.223301 systemd-logind[2118]: Removed session 15. Sep 12 23:55:39.222333 systemd[1]: Started sshd@15-172.31.18.203:22-147.75.109.163:47780.service - OpenSSH per-connection server daemon (147.75.109.163:47780). Sep 12 23:55:39.408538 sshd[6691]: Accepted publickey for core from 147.75.109.163 port 47780 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:39.413198 sshd[6691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:39.422775 systemd-logind[2118]: New session 16 of user core. Sep 12 23:55:39.430354 systemd[1]: Started session-16.scope - Session 16 of User core. 
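Note: the csi_plugin.go lines above are the kubelet discovering the Tigera CSI driver: the driver exposes a unix socket under /var/lib/kubelet/plugins/, and the kubelet validates the advertised name and versions (csi.tigera.io, 1.0.0) before registering it. A quick way to list candidate plugin endpoints on a node laid out like this one (Python; a minimal sketch using the path logged above, not the kubelet's actual discovery code):

    import glob, os, stat

    # unix sockets under the kubelet plugin directory, e.g.
    # /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
    for path in glob.glob("/var/lib/kubelet/plugins/**/*.sock", recursive=True):
        if stat.S_ISSOCK(os.stat(path).st_mode):
            print(path)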
Sep 12 23:55:39.704791 sshd[6691]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:39.717958 systemd[1]: sshd@15-172.31.18.203:22-147.75.109.163:47780.service: Deactivated successfully. Sep 12 23:55:39.726152 systemd-logind[2118]: Session 16 logged out. Waiting for processes to exit. Sep 12 23:55:39.732246 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 23:55:39.748196 systemd[1]: Started sshd@16-172.31.18.203:22-147.75.109.163:47792.service - OpenSSH per-connection server daemon (147.75.109.163:47792). Sep 12 23:55:39.750087 systemd-logind[2118]: Removed session 16. Sep 12 23:55:39.955490 sshd[6704]: Accepted publickey for core from 147.75.109.163 port 47792 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:39.958520 sshd[6704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:39.967738 systemd-logind[2118]: New session 17 of user core. Sep 12 23:55:39.979382 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 23:55:40.615382 sshd[6704]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:40.636193 systemd[1]: sshd@16-172.31.18.203:22-147.75.109.163:47792.service: Deactivated successfully. Sep 12 23:55:40.657300 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 23:55:40.658208 systemd-logind[2118]: Session 17 logged out. Waiting for processes to exit. Sep 12 23:55:40.681825 systemd[1]: Started sshd@17-172.31.18.203:22-147.75.109.163:46636.service - OpenSSH per-connection server daemon (147.75.109.163:46636). Sep 12 23:55:40.683779 systemd-logind[2118]: Removed session 17. Sep 12 23:55:40.907128 sshd[6717]: Accepted publickey for core from 147.75.109.163 port 46636 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:40.910485 sshd[6717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:40.932516 systemd-logind[2118]: New session 18 of user core. Sep 12 23:55:40.940986 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 23:55:44.777683 sshd[6717]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:44.799137 systemd[1]: sshd@17-172.31.18.203:22-147.75.109.163:46636.service: Deactivated successfully. Sep 12 23:55:44.816723 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 23:55:44.824571 systemd-logind[2118]: Session 18 logged out. Waiting for processes to exit. Sep 12 23:55:44.840807 systemd[1]: Started sshd@18-172.31.18.203:22-147.75.109.163:46652.service - OpenSSH per-connection server daemon (147.75.109.163:46652). Sep 12 23:55:44.846944 systemd-logind[2118]: Removed session 18. Sep 12 23:55:45.036477 sshd[6759]: Accepted publickey for core from 147.75.109.163 port 46652 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:45.039942 sshd[6759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:45.050122 systemd-logind[2118]: New session 19 of user core. Sep 12 23:55:45.061517 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 23:55:45.677833 sshd[6759]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:45.690802 systemd[1]: sshd@18-172.31.18.203:22-147.75.109.163:46652.service: Deactivated successfully. Sep 12 23:55:45.696940 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 23:55:45.698983 systemd-logind[2118]: Session 19 logged out. Waiting for processes to exit. 
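Note: the sshd/systemd-logind traffic here is a fixed lifecycle per connection (Accepted publickey, pam_unix session opened, New session N, session closed, Removed session N), so per-session wall time falls out of pairing the logind events by session number. A small sketch under the same log_lines stand-in as above (Python; date handling simplified to the single Sep 12 day seen in this log):

    import re
    from datetime import datetime

    LOGIND = re.compile(
        r"(\d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: "
        r"(New|Removed) session (\d+)"
    )

    def session_durations(log_lines):
        opened, durations = {}, {}
        for line in log_lines:
            for ts, kind, sid in LOGIND.findall(line):
                t = datetime.strptime(ts, "%H:%M:%S.%f")
                if kind == "New":
                    opened[sid] = t
                elif sid in opened:
                    durations[sid] = (t - opened.pop(sid)).total_seconds()
        return durations  # e.g. session 18 spans 23:55:40.93 -> 23:55:44.85, ~3.9s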
Sep 12 23:55:45.712161 systemd[1]: Started sshd@19-172.31.18.203:22-147.75.109.163:46654.service - OpenSSH per-connection server daemon (147.75.109.163:46654). Sep 12 23:55:45.714397 systemd-logind[2118]: Removed session 19. Sep 12 23:55:45.903754 sshd[6778]: Accepted publickey for core from 147.75.109.163 port 46654 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:45.906757 sshd[6778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:45.916913 systemd-logind[2118]: New session 20 of user core. Sep 12 23:55:45.922521 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 23:55:46.188060 sshd[6778]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:46.196874 systemd[1]: sshd@19-172.31.18.203:22-147.75.109.163:46654.service: Deactivated successfully. Sep 12 23:55:46.203756 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 23:55:46.206609 systemd-logind[2118]: Session 20 logged out. Waiting for processes to exit. Sep 12 23:55:46.209272 systemd-logind[2118]: Removed session 20. Sep 12 23:55:51.224795 systemd[1]: Started sshd@20-172.31.18.203:22-147.75.109.163:43054.service - OpenSSH per-connection server daemon (147.75.109.163:43054). Sep 12 23:55:51.412073 sshd[6835]: Accepted publickey for core from 147.75.109.163 port 43054 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:51.415671 sshd[6835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:51.429190 systemd-logind[2118]: New session 21 of user core. Sep 12 23:55:51.434261 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 23:55:51.692770 sshd[6835]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:51.700863 systemd[1]: sshd@20-172.31.18.203:22-147.75.109.163:43054.service: Deactivated successfully. Sep 12 23:55:51.707459 systemd-logind[2118]: Session 21 logged out. Waiting for processes to exit. Sep 12 23:55:51.709190 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 23:55:51.714004 systemd-logind[2118]: Removed session 21. Sep 12 23:55:56.725163 systemd[1]: Started sshd@21-172.31.18.203:22-147.75.109.163:43064.service - OpenSSH per-connection server daemon (147.75.109.163:43064). Sep 12 23:55:56.903488 sshd[6852]: Accepted publickey for core from 147.75.109.163 port 43064 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:55:56.906191 sshd[6852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:55:56.915975 systemd-logind[2118]: New session 22 of user core. Sep 12 23:55:56.924396 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 23:55:57.194820 sshd[6852]: pam_unix(sshd:session): session closed for user core Sep 12 23:55:57.202553 systemd-logind[2118]: Session 22 logged out. Waiting for processes to exit. Sep 12 23:55:57.204227 systemd[1]: sshd@21-172.31.18.203:22-147.75.109.163:43064.service: Deactivated successfully. Sep 12 23:55:57.212189 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 23:55:57.215100 systemd-logind[2118]: Removed session 22. Sep 12 23:56:02.236419 systemd[1]: Started sshd@22-172.31.18.203:22-147.75.109.163:43544.service - OpenSSH per-connection server daemon (147.75.109.163:43544). 
Sep 12 23:56:02.436229 sshd[6868]: Accepted publickey for core from 147.75.109.163 port 43544 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:56:02.439532 sshd[6868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:56:02.449297 systemd-logind[2118]: New session 23 of user core. Sep 12 23:56:02.456247 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 23:56:02.775693 sshd[6868]: pam_unix(sshd:session): session closed for user core Sep 12 23:56:02.784933 systemd[1]: sshd@22-172.31.18.203:22-147.75.109.163:43544.service: Deactivated successfully. Sep 12 23:56:02.800415 systemd-logind[2118]: Session 23 logged out. Waiting for processes to exit. Sep 12 23:56:02.809574 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 23:56:02.815999 systemd-logind[2118]: Removed session 23. Sep 12 23:56:07.811285 systemd[1]: Started sshd@23-172.31.18.203:22-147.75.109.163:43550.service - OpenSSH per-connection server daemon (147.75.109.163:43550). Sep 12 23:56:08.016715 sshd[6885]: Accepted publickey for core from 147.75.109.163 port 43550 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:56:08.023878 sshd[6885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:56:08.046835 systemd-logind[2118]: New session 24 of user core. Sep 12 23:56:08.057657 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 23:56:08.389906 sshd[6885]: pam_unix(sshd:session): session closed for user core Sep 12 23:56:08.398334 systemd[1]: sshd@23-172.31.18.203:22-147.75.109.163:43550.service: Deactivated successfully. Sep 12 23:56:08.408131 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 23:56:08.415260 systemd-logind[2118]: Session 24 logged out. Waiting for processes to exit. Sep 12 23:56:08.419258 systemd-logind[2118]: Removed session 24. Sep 12 23:56:09.326800 containerd[2151]: time="2025-09-12T23:56:09.325707650Z" level=info msg="StopPodSandbox for \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\"" Sep 12 23:56:09.536397 containerd[2151]: 2025-09-12 23:56:09.453 [WARNING][6907] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3fae242f-71cb-4cc8-a7fa-b06a5787570e", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649", Pod:"coredns-7c65d6cfc9-h78v2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f0a31a1ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:09.536397 containerd[2151]: 2025-09-12 23:56:09.453 [INFO][6907] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Sep 12 23:56:09.536397 containerd[2151]: 2025-09-12 23:56:09.453 [INFO][6907] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" iface="eth0" netns="" Sep 12 23:56:09.536397 containerd[2151]: 2025-09-12 23:56:09.453 [INFO][6907] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Sep 12 23:56:09.536397 containerd[2151]: 2025-09-12 23:56:09.453 [INFO][6907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Sep 12 23:56:09.536397 containerd[2151]: 2025-09-12 23:56:09.506 [INFO][6914] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" HandleID="k8s-pod-network.75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" Sep 12 23:56:09.536397 containerd[2151]: 2025-09-12 23:56:09.507 [INFO][6914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:09.536397 containerd[2151]: 2025-09-12 23:56:09.507 [INFO][6914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:09.536397 containerd[2151]: 2025-09-12 23:56:09.524 [WARNING][6914] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" HandleID="k8s-pod-network.75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" Sep 12 23:56:09.536397 containerd[2151]: 2025-09-12 23:56:09.524 [INFO][6914] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" HandleID="k8s-pod-network.75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" Sep 12 23:56:09.536397 containerd[2151]: 2025-09-12 23:56:09.527 [INFO][6914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:09.536397 containerd[2151]: 2025-09-12 23:56:09.530 [INFO][6907] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Sep 12 23:56:09.536397 containerd[2151]: time="2025-09-12T23:56:09.535256847Z" level=info msg="TearDown network for sandbox \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\" successfully" Sep 12 23:56:09.536397 containerd[2151]: time="2025-09-12T23:56:09.535307475Z" level=info msg="StopPodSandbox for \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\" returns successfully" Sep 12 23:56:09.538931 containerd[2151]: time="2025-09-12T23:56:09.536608995Z" level=info msg="RemovePodSandbox for \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\"" Sep 12 23:56:09.538931 containerd[2151]: time="2025-09-12T23:56:09.537057579Z" level=info msg="Forcibly stopping sandbox \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\"" Sep 12 23:56:09.742607 containerd[2151]: 2025-09-12 23:56:09.638 [WARNING][6929] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3fae242f-71cb-4cc8-a7fa-b06a5787570e", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"e1b0b2681c8696a710ada509712019d80c693e27308ddc27f48be9d6c7a27649", Pod:"coredns-7c65d6cfc9-h78v2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5f0a31a1ea0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:09.742607 containerd[2151]: 2025-09-12 23:56:09.640 [INFO][6929] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Sep 12 23:56:09.742607 containerd[2151]: 2025-09-12 23:56:09.640 [INFO][6929] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" iface="eth0" netns="" Sep 12 23:56:09.742607 containerd[2151]: 2025-09-12 23:56:09.640 [INFO][6929] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Sep 12 23:56:09.742607 containerd[2151]: 2025-09-12 23:56:09.640 [INFO][6929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Sep 12 23:56:09.742607 containerd[2151]: 2025-09-12 23:56:09.700 [INFO][6937] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" HandleID="k8s-pod-network.75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" Sep 12 23:56:09.742607 containerd[2151]: 2025-09-12 23:56:09.700 [INFO][6937] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:09.742607 containerd[2151]: 2025-09-12 23:56:09.700 [INFO][6937] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:56:09.742607 containerd[2151]: 2025-09-12 23:56:09.723 [WARNING][6937] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" HandleID="k8s-pod-network.75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" Sep 12 23:56:09.742607 containerd[2151]: 2025-09-12 23:56:09.724 [INFO][6937] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" HandleID="k8s-pod-network.75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Workload="ip--172--31--18--203-k8s-coredns--7c65d6cfc9--h78v2-eth0" Sep 12 23:56:09.742607 containerd[2151]: 2025-09-12 23:56:09.727 [INFO][6937] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:09.742607 containerd[2151]: 2025-09-12 23:56:09.734 [INFO][6929] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4" Sep 12 23:56:09.742607 containerd[2151]: time="2025-09-12T23:56:09.740783812Z" level=info msg="TearDown network for sandbox \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\" successfully" Sep 12 23:56:09.753979 containerd[2151]: time="2025-09-12T23:56:09.751805668Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:56:09.753979 containerd[2151]: time="2025-09-12T23:56:09.751922416Z" level=info msg="RemovePodSandbox \"75266b39a8a48d0667d957f16f4ae1b32c096cad3417626bcabab2fb53b9eba4\" returns successfully" Sep 12 23:56:09.755889 containerd[2151]: time="2025-09-12T23:56:09.754458232Z" level=info msg="StopPodSandbox for \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\"" Sep 12 23:56:10.001101 containerd[2151]: 2025-09-12 23:56:09.870 [WARNING][6952] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0", GenerateName:"calico-apiserver-5c48bb7547-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f5b3f0c-b02e-481f-a083-c8af4d9dc294", ResourceVersion:"1243", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c48bb7547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b", Pod:"calico-apiserver-5c48bb7547-pbxdf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb598a1cdb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:10.001101 containerd[2151]: 2025-09-12 23:56:09.872 [INFO][6952] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Sep 12 23:56:10.001101 containerd[2151]: 2025-09-12 23:56:09.872 [INFO][6952] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" iface="eth0" netns="" Sep 12 23:56:10.001101 containerd[2151]: 2025-09-12 23:56:09.872 [INFO][6952] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Sep 12 23:56:10.001101 containerd[2151]: 2025-09-12 23:56:09.872 [INFO][6952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Sep 12 23:56:10.001101 containerd[2151]: 2025-09-12 23:56:09.967 [INFO][6960] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" HandleID="k8s-pod-network.15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" Sep 12 23:56:10.001101 containerd[2151]: 2025-09-12 23:56:09.967 [INFO][6960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:10.001101 containerd[2151]: 2025-09-12 23:56:09.967 [INFO][6960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:10.001101 containerd[2151]: 2025-09-12 23:56:09.983 [WARNING][6960] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" HandleID="k8s-pod-network.15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" Sep 12 23:56:10.001101 containerd[2151]: 2025-09-12 23:56:09.983 [INFO][6960] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" HandleID="k8s-pod-network.15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" Sep 12 23:56:10.001101 containerd[2151]: 2025-09-12 23:56:09.990 [INFO][6960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:10.001101 containerd[2151]: 2025-09-12 23:56:09.995 [INFO][6952] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Sep 12 23:56:10.006471 containerd[2151]: time="2025-09-12T23:56:10.002476934Z" level=info msg="TearDown network for sandbox \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\" successfully" Sep 12 23:56:10.006471 containerd[2151]: time="2025-09-12T23:56:10.002778662Z" level=info msg="StopPodSandbox for \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\" returns successfully" Sep 12 23:56:10.009498 containerd[2151]: time="2025-09-12T23:56:10.008827346Z" level=info msg="RemovePodSandbox for \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\"" Sep 12 23:56:10.009498 containerd[2151]: time="2025-09-12T23:56:10.009003794Z" level=info msg="Forcibly stopping sandbox \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\"" Sep 12 23:56:10.220769 containerd[2151]: 2025-09-12 23:56:10.107 [WARNING][6974] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0", GenerateName:"calico-apiserver-5c48bb7547-", Namespace:"calico-apiserver", SelfLink:"", UID:"9f5b3f0c-b02e-481f-a083-c8af4d9dc294", ResourceVersion:"1243", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c48bb7547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"e70994a593e64b191797f3cc1ce5934089422b8049c0221cbfc1a87dbb8ee24b", Pod:"calico-apiserver-5c48bb7547-pbxdf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb598a1cdb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:10.220769 containerd[2151]: 2025-09-12 23:56:10.108 [INFO][6974] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Sep 12 23:56:10.220769 containerd[2151]: 2025-09-12 23:56:10.108 [INFO][6974] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" iface="eth0" netns="" Sep 12 23:56:10.220769 containerd[2151]: 2025-09-12 23:56:10.108 [INFO][6974] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Sep 12 23:56:10.220769 containerd[2151]: 2025-09-12 23:56:10.108 [INFO][6974] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Sep 12 23:56:10.220769 containerd[2151]: 2025-09-12 23:56:10.192 [INFO][6981] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" HandleID="k8s-pod-network.15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" Sep 12 23:56:10.220769 containerd[2151]: 2025-09-12 23:56:10.193 [INFO][6981] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:10.220769 containerd[2151]: 2025-09-12 23:56:10.194 [INFO][6981] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:10.220769 containerd[2151]: 2025-09-12 23:56:10.210 [WARNING][6981] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" HandleID="k8s-pod-network.15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" Sep 12 23:56:10.220769 containerd[2151]: 2025-09-12 23:56:10.210 [INFO][6981] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" HandleID="k8s-pod-network.15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--pbxdf-eth0" Sep 12 23:56:10.220769 containerd[2151]: 2025-09-12 23:56:10.214 [INFO][6981] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:10.220769 containerd[2151]: 2025-09-12 23:56:10.217 [INFO][6974] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89" Sep 12 23:56:10.223295 containerd[2151]: time="2025-09-12T23:56:10.221880291Z" level=info msg="TearDown network for sandbox \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\" successfully" Sep 12 23:56:10.232911 containerd[2151]: time="2025-09-12T23:56:10.231928767Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:56:10.232911 containerd[2151]: time="2025-09-12T23:56:10.232228323Z" level=info msg="RemovePodSandbox \"15ab2109485911f3e6ed2550dc76c16ef11c432ec63aa3bee577ee6767722e89\" returns successfully" Sep 12 23:56:10.235196 containerd[2151]: time="2025-09-12T23:56:10.234523263Z" level=info msg="StopPodSandbox for \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\"" Sep 12 23:56:10.485715 containerd[2151]: 2025-09-12 23:56:10.346 [WARNING][6996] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"082bf9af-912b-4ff6-8411-79fadb8bf200", ResourceVersion:"1359", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226", Pod:"goldmane-7988f88666-bhgbj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali95460f8f173", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:10.485715 containerd[2151]: 2025-09-12 23:56:10.346 [INFO][6996] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Sep 12 23:56:10.485715 containerd[2151]: 2025-09-12 23:56:10.346 [INFO][6996] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" iface="eth0" netns="" Sep 12 23:56:10.485715 containerd[2151]: 2025-09-12 23:56:10.346 [INFO][6996] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Sep 12 23:56:10.485715 containerd[2151]: 2025-09-12 23:56:10.346 [INFO][6996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Sep 12 23:56:10.485715 containerd[2151]: 2025-09-12 23:56:10.440 [INFO][7003] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" HandleID="k8s-pod-network.559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Workload="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" Sep 12 23:56:10.485715 containerd[2151]: 2025-09-12 23:56:10.441 [INFO][7003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:10.485715 containerd[2151]: 2025-09-12 23:56:10.441 [INFO][7003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:10.485715 containerd[2151]: 2025-09-12 23:56:10.463 [WARNING][7003] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" HandleID="k8s-pod-network.559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Workload="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" Sep 12 23:56:10.485715 containerd[2151]: 2025-09-12 23:56:10.465 [INFO][7003] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" HandleID="k8s-pod-network.559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Workload="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" Sep 12 23:56:10.485715 containerd[2151]: 2025-09-12 23:56:10.470 [INFO][7003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:10.485715 containerd[2151]: 2025-09-12 23:56:10.478 [INFO][6996] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Sep 12 23:56:10.493022 containerd[2151]: time="2025-09-12T23:56:10.488837152Z" level=info msg="TearDown network for sandbox \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\" successfully" Sep 12 23:56:10.493022 containerd[2151]: time="2025-09-12T23:56:10.488913412Z" level=info msg="StopPodSandbox for \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\" returns successfully" Sep 12 23:56:10.493022 containerd[2151]: time="2025-09-12T23:56:10.489782368Z" level=info msg="RemovePodSandbox for \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\"" Sep 12 23:56:10.493022 containerd[2151]: time="2025-09-12T23:56:10.489841780Z" level=info msg="Forcibly stopping sandbox \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\"" Sep 12 23:56:10.745772 containerd[2151]: 2025-09-12 23:56:10.657 [WARNING][7017] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"082bf9af-912b-4ff6-8411-79fadb8bf200", ResourceVersion:"1359", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"1c18a913cc4f58f85fe9a19f5dcc18dca04ba609e716597e0cbec52fcd78b226", Pod:"goldmane-7988f88666-bhgbj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali95460f8f173", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:10.745772 containerd[2151]: 2025-09-12 23:56:10.658 [INFO][7017] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Sep 12 23:56:10.745772 containerd[2151]: 2025-09-12 23:56:10.658 [INFO][7017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" iface="eth0" netns="" Sep 12 23:56:10.745772 containerd[2151]: 2025-09-12 23:56:10.658 [INFO][7017] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Sep 12 23:56:10.745772 containerd[2151]: 2025-09-12 23:56:10.658 [INFO][7017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Sep 12 23:56:10.745772 containerd[2151]: 2025-09-12 23:56:10.707 [INFO][7025] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" HandleID="k8s-pod-network.559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Workload="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" Sep 12 23:56:10.745772 containerd[2151]: 2025-09-12 23:56:10.708 [INFO][7025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:10.745772 containerd[2151]: 2025-09-12 23:56:10.708 [INFO][7025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:10.745772 containerd[2151]: 2025-09-12 23:56:10.731 [WARNING][7025] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" HandleID="k8s-pod-network.559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Workload="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" Sep 12 23:56:10.745772 containerd[2151]: 2025-09-12 23:56:10.731 [INFO][7025] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" HandleID="k8s-pod-network.559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Workload="ip--172--31--18--203-k8s-goldmane--7988f88666--bhgbj-eth0" Sep 12 23:56:10.745772 containerd[2151]: 2025-09-12 23:56:10.735 [INFO][7025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:10.745772 containerd[2151]: 2025-09-12 23:56:10.739 [INFO][7017] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd" Sep 12 23:56:10.748935 containerd[2151]: time="2025-09-12T23:56:10.748276529Z" level=info msg="TearDown network for sandbox \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\" successfully" Sep 12 23:56:10.759827 containerd[2151]: time="2025-09-12T23:56:10.759481205Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:56:10.759827 containerd[2151]: time="2025-09-12T23:56:10.759610553Z" level=info msg="RemovePodSandbox \"559d8edcde47076caaf1f3f861c2bb798a464d13645b40d932e5000c7e10f0fd\" returns successfully" Sep 12 23:56:10.761665 containerd[2151]: time="2025-09-12T23:56:10.761412485Z" level=info msg="StopPodSandbox for \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\"" Sep 12 23:56:10.977694 containerd[2151]: 2025-09-12 23:56:10.876 [WARNING][7038] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e874f212-ec82-4dc1-a7f2-b6ff94f1cb99", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47", Pod:"csi-node-driver-vb427", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9eefa4107f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:10.977694 containerd[2151]: 2025-09-12 23:56:10.877 [INFO][7038] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Sep 12 23:56:10.977694 containerd[2151]: 2025-09-12 23:56:10.877 [INFO][7038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" iface="eth0" netns="" Sep 12 23:56:10.977694 containerd[2151]: 2025-09-12 23:56:10.877 [INFO][7038] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Sep 12 23:56:10.977694 containerd[2151]: 2025-09-12 23:56:10.877 [INFO][7038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Sep 12 23:56:10.977694 containerd[2151]: 2025-09-12 23:56:10.946 [INFO][7046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" HandleID="k8s-pod-network.c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Workload="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" Sep 12 23:56:10.977694 containerd[2151]: 2025-09-12 23:56:10.947 [INFO][7046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:10.977694 containerd[2151]: 2025-09-12 23:56:10.947 [INFO][7046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:10.977694 containerd[2151]: 2025-09-12 23:56:10.965 [WARNING][7046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" HandleID="k8s-pod-network.c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Workload="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" Sep 12 23:56:10.977694 containerd[2151]: 2025-09-12 23:56:10.965 [INFO][7046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" HandleID="k8s-pod-network.c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Workload="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" Sep 12 23:56:10.977694 containerd[2151]: 2025-09-12 23:56:10.968 [INFO][7046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:10.977694 containerd[2151]: 2025-09-12 23:56:10.973 [INFO][7038] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Sep 12 23:56:10.977694 containerd[2151]: time="2025-09-12T23:56:10.977599386Z" level=info msg="TearDown network for sandbox \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\" successfully" Sep 12 23:56:10.979926 containerd[2151]: time="2025-09-12T23:56:10.977700186Z" level=info msg="StopPodSandbox for \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\" returns successfully" Sep 12 23:56:10.980448 containerd[2151]: time="2025-09-12T23:56:10.980376318Z" level=info msg="RemovePodSandbox for \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\"" Sep 12 23:56:10.980562 containerd[2151]: time="2025-09-12T23:56:10.980446914Z" level=info msg="Forcibly stopping sandbox \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\"" Sep 12 23:56:11.187861 containerd[2151]: 2025-09-12 23:56:11.090 [WARNING][7060] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e874f212-ec82-4dc1-a7f2-b6ff94f1cb99", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"ffb69b66abf440cab31c0981b2747476b759e5e0aafd0ec21fd6aa9f93574e47", Pod:"csi-node-driver-vb427", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9eefa4107f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:11.187861 containerd[2151]: 2025-09-12 23:56:11.090 [INFO][7060] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Sep 12 23:56:11.187861 containerd[2151]: 2025-09-12 23:56:11.090 [INFO][7060] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" iface="eth0" netns="" Sep 12 23:56:11.187861 containerd[2151]: 2025-09-12 23:56:11.090 [INFO][7060] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Sep 12 23:56:11.187861 containerd[2151]: 2025-09-12 23:56:11.091 [INFO][7060] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Sep 12 23:56:11.187861 containerd[2151]: 2025-09-12 23:56:11.150 [INFO][7067] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" HandleID="k8s-pod-network.c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Workload="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" Sep 12 23:56:11.187861 containerd[2151]: 2025-09-12 23:56:11.151 [INFO][7067] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:11.187861 containerd[2151]: 2025-09-12 23:56:11.151 [INFO][7067] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:11.187861 containerd[2151]: 2025-09-12 23:56:11.175 [WARNING][7067] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" HandleID="k8s-pod-network.c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Workload="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" Sep 12 23:56:11.187861 containerd[2151]: 2025-09-12 23:56:11.176 [INFO][7067] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" HandleID="k8s-pod-network.c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Workload="ip--172--31--18--203-k8s-csi--node--driver--vb427-eth0" Sep 12 23:56:11.187861 containerd[2151]: 2025-09-12 23:56:11.181 [INFO][7067] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:11.187861 containerd[2151]: 2025-09-12 23:56:11.184 [INFO][7060] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d" Sep 12 23:56:11.194345 containerd[2151]: time="2025-09-12T23:56:11.190857736Z" level=info msg="TearDown network for sandbox \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\" successfully" Sep 12 23:56:11.202387 containerd[2151]: time="2025-09-12T23:56:11.201821788Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:56:11.202387 containerd[2151]: time="2025-09-12T23:56:11.201944620Z" level=info msg="RemovePodSandbox \"c0c70f0719cea18788beb792243b5a7923312b6f24163d32cc04b1c4eb4a169d\" returns successfully" Sep 12 23:56:11.203690 containerd[2151]: time="2025-09-12T23:56:11.202983592Z" level=info msg="StopPodSandbox for \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\"" Sep 12 23:56:11.412745 containerd[2151]: 2025-09-12 23:56:11.293 [WARNING][7081] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0", GenerateName:"calico-apiserver-5c48bb7547-", Namespace:"calico-apiserver", SelfLink:"", UID:"077d1d76-d7b8-4b1c-bc6e-9119a67ba30b", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c48bb7547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168", Pod:"calico-apiserver-5c48bb7547-2nt2f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23946670777", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:11.412745 containerd[2151]: 2025-09-12 23:56:11.293 [INFO][7081] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Sep 12 23:56:11.412745 containerd[2151]: 2025-09-12 23:56:11.294 [INFO][7081] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" iface="eth0" netns="" Sep 12 23:56:11.412745 containerd[2151]: 2025-09-12 23:56:11.294 [INFO][7081] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Sep 12 23:56:11.412745 containerd[2151]: 2025-09-12 23:56:11.294 [INFO][7081] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Sep 12 23:56:11.412745 containerd[2151]: 2025-09-12 23:56:11.379 [INFO][7088] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" HandleID="k8s-pod-network.ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" Sep 12 23:56:11.412745 containerd[2151]: 2025-09-12 23:56:11.379 [INFO][7088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:11.412745 containerd[2151]: 2025-09-12 23:56:11.380 [INFO][7088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:11.412745 containerd[2151]: 2025-09-12 23:56:11.400 [WARNING][7088] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" HandleID="k8s-pod-network.ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" Sep 12 23:56:11.412745 containerd[2151]: 2025-09-12 23:56:11.400 [INFO][7088] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" HandleID="k8s-pod-network.ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" Sep 12 23:56:11.412745 containerd[2151]: 2025-09-12 23:56:11.403 [INFO][7088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:11.412745 containerd[2151]: 2025-09-12 23:56:11.408 [INFO][7081] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Sep 12 23:56:11.414308 containerd[2151]: time="2025-09-12T23:56:11.413888765Z" level=info msg="TearDown network for sandbox \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\" successfully" Sep 12 23:56:11.414308 containerd[2151]: time="2025-09-12T23:56:11.413937953Z" level=info msg="StopPodSandbox for \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\" returns successfully" Sep 12 23:56:11.415073 containerd[2151]: time="2025-09-12T23:56:11.414715109Z" level=info msg="RemovePodSandbox for \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\"" Sep 12 23:56:11.415073 containerd[2151]: time="2025-09-12T23:56:11.414771293Z" level=info msg="Forcibly stopping sandbox \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\"" Sep 12 23:56:11.592042 containerd[2151]: 2025-09-12 23:56:11.500 [WARNING][7102] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0", GenerateName:"calico-apiserver-5c48bb7547-", Namespace:"calico-apiserver", SelfLink:"", UID:"077d1d76-d7b8-4b1c-bc6e-9119a67ba30b", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c48bb7547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-203", ContainerID:"769330a917e76f3a10253645fd40423c87d0759e48b68e42c891d8d36636c168", Pod:"calico-apiserver-5c48bb7547-2nt2f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali23946670777", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:56:11.592042 containerd[2151]: 2025-09-12 23:56:11.500 [INFO][7102] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Sep 12 23:56:11.592042 containerd[2151]: 2025-09-12 23:56:11.500 [INFO][7102] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" iface="eth0" netns="" Sep 12 23:56:11.592042 containerd[2151]: 2025-09-12 23:56:11.500 [INFO][7102] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Sep 12 23:56:11.592042 containerd[2151]: 2025-09-12 23:56:11.500 [INFO][7102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Sep 12 23:56:11.592042 containerd[2151]: 2025-09-12 23:56:11.560 [INFO][7109] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" HandleID="k8s-pod-network.ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" Sep 12 23:56:11.592042 containerd[2151]: 2025-09-12 23:56:11.561 [INFO][7109] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:56:11.592042 containerd[2151]: 2025-09-12 23:56:11.561 [INFO][7109] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:56:11.592042 containerd[2151]: 2025-09-12 23:56:11.579 [WARNING][7109] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" HandleID="k8s-pod-network.ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" Sep 12 23:56:11.592042 containerd[2151]: 2025-09-12 23:56:11.580 [INFO][7109] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" HandleID="k8s-pod-network.ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Workload="ip--172--31--18--203-k8s-calico--apiserver--5c48bb7547--2nt2f-eth0" Sep 12 23:56:11.592042 containerd[2151]: 2025-09-12 23:56:11.584 [INFO][7109] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:56:11.592042 containerd[2151]: 2025-09-12 23:56:11.588 [INFO][7102] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e" Sep 12 23:56:11.592042 containerd[2151]: time="2025-09-12T23:56:11.591910698Z" level=info msg="TearDown network for sandbox \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\" successfully" Sep 12 23:56:11.599384 containerd[2151]: time="2025-09-12T23:56:11.599275998Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 23:56:11.599595 containerd[2151]: time="2025-09-12T23:56:11.599440830Z" level=info msg="RemovePodSandbox \"ec75ee4f9fee91586d3701e8bb254142b2954f8a54c8a7b0b76b256c5f83d39e\" returns successfully" Sep 12 23:56:13.427560 systemd[1]: Started sshd@24-172.31.18.203:22-147.75.109.163:60604.service - OpenSSH per-connection server daemon (147.75.109.163:60604). Sep 12 23:56:13.625858 sshd[7118]: Accepted publickey for core from 147.75.109.163 port 60604 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:56:13.630281 sshd[7118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:56:13.661039 systemd-logind[2118]: New session 25 of user core. Sep 12 23:56:13.668255 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 12 23:56:14.063070 sshd[7118]: pam_unix(sshd:session): session closed for user core Sep 12 23:56:14.075941 systemd[1]: sshd@24-172.31.18.203:22-147.75.109.163:60604.service: Deactivated successfully. Sep 12 23:56:14.089462 systemd[1]: session-25.scope: Deactivated successfully. Sep 12 23:56:14.096390 systemd-logind[2118]: Session 25 logged out. Waiting for processes to exit. Sep 12 23:56:14.101249 systemd-logind[2118]: Removed session 25. Sep 12 23:56:19.095135 systemd[1]: Started sshd@25-172.31.18.203:22-147.75.109.163:60616.service - OpenSSH per-connection server daemon (147.75.109.163:60616). Sep 12 23:56:19.299662 sshd[7191]: Accepted publickey for core from 147.75.109.163 port 60616 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc Sep 12 23:56:19.304774 sshd[7191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:56:19.324288 systemd-logind[2118]: New session 26 of user core. Sep 12 23:56:19.331107 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 12 23:56:13.427560 systemd[1]: Started sshd@24-172.31.18.203:22-147.75.109.163:60604.service - OpenSSH per-connection server daemon (147.75.109.163:60604).
Sep 12 23:56:13.625858 sshd[7118]: Accepted publickey for core from 147.75.109.163 port 60604 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:56:13.630281 sshd[7118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:56:13.661039 systemd-logind[2118]: New session 25 of user core.
Sep 12 23:56:13.668255 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 23:56:14.063070 sshd[7118]: pam_unix(sshd:session): session closed for user core
Sep 12 23:56:14.075941 systemd[1]: sshd@24-172.31.18.203:22-147.75.109.163:60604.service: Deactivated successfully.
Sep 12 23:56:14.089462 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 23:56:14.096390 systemd-logind[2118]: Session 25 logged out. Waiting for processes to exit.
Sep 12 23:56:14.101249 systemd-logind[2118]: Removed session 25.
Sep 12 23:56:19.095135 systemd[1]: Started sshd@25-172.31.18.203:22-147.75.109.163:60616.service - OpenSSH per-connection server daemon (147.75.109.163:60616).
Sep 12 23:56:19.299662 sshd[7191]: Accepted publickey for core from 147.75.109.163 port 60616 ssh2: RSA SHA256:hzqoQUQMDNGIX4spfLoTi9cnhX+EaAcejntAjTQoGoc
Sep 12 23:56:19.304774 sshd[7191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:56:19.324288 systemd-logind[2118]: New session 26 of user core.
Sep 12 23:56:19.331107 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 23:56:19.664052 sshd[7191]: pam_unix(sshd:session): session closed for user core
Sep 12 23:56:19.678188 systemd[1]: sshd@25-172.31.18.203:22-147.75.109.163:60616.service: Deactivated successfully.
Sep 12 23:56:19.686587 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 23:56:19.692555 systemd-logind[2118]: Session 26 logged out. Waiting for processes to exit.
Sep 12 23:56:19.697116 systemd-logind[2118]: Removed session 26.
Sep 12 23:56:33.405825 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf60cc61a7421260f240e6795443ada83c058bce28976e1b52befe86b5a04383-rootfs.mount: Deactivated successfully.
Sep 12 23:56:33.439976 containerd[2151]: time="2025-09-12T23:56:33.402828002Z" level=info msg="shim disconnected" id=cf60cc61a7421260f240e6795443ada83c058bce28976e1b52befe86b5a04383 namespace=k8s.io
Sep 12 23:56:33.440834 containerd[2151]: time="2025-09-12T23:56:33.439969406Z" level=warning msg="cleaning up after shim disconnected" id=cf60cc61a7421260f240e6795443ada83c058bce28976e1b52befe86b5a04383 namespace=k8s.io
Sep 12 23:56:33.440834 containerd[2151]: time="2025-09-12T23:56:33.440005298Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:56:34.040955 containerd[2151]: time="2025-09-12T23:56:34.040186201Z" level=info msg="shim disconnected" id=825e4105ba939c180acf363d17a7e00594a4bc255ec91cecfb59209fdaf32c33 namespace=k8s.io
Sep 12 23:56:34.044702 containerd[2151]: time="2025-09-12T23:56:34.041946853Z" level=warning msg="cleaning up after shim disconnected" id=825e4105ba939c180acf363d17a7e00594a4bc255ec91cecfb59209fdaf32c33 namespace=k8s.io
Sep 12 23:56:34.044702 containerd[2151]: time="2025-09-12T23:56:34.042986821Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:56:34.050191 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-825e4105ba939c180acf363d17a7e00594a4bc255ec91cecfb59209fdaf32c33-rootfs.mount: Deactivated successfully.
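The log now changes character: containerd reports "shim disconnected" for two task IDs in quick succession (a third follows below), and each one is followed by the kubelet removing and recreating that container. Pulling those pairs out of a fused journal by hand is tedious; a small stand-alone Go filter, with regexes written against the exact message shapes in this log and nothing more general:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    var (
        // containerd's task-exit event, as it appears in this journal.
        shimRE = regexp.MustCompile(`msg="shim disconnected" id=([0-9a-f]+)`)
        // kubelet's decision to remove the dead container before recreating it.
        removeRE = regexp.MustCompile(`"RemoveContainer" containerID="([0-9a-f]+)"`)
    )

    func main() {
        disconnected := map[string]bool{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            line := sc.Text()
            if m := shimRE.FindStringSubmatch(line); m != nil {
                disconnected[m[1]] = true
            }
            if m := removeRE.FindStringSubmatch(line); m != nil && disconnected[m[1]] {
                fmt.Printf("container %.12s: shim exit followed by kubelet restart\n", m[1])
            }
        }
    }

Fed this journal on stdin, it would print one line each for the tigera-operator, kube-controller-manager, and kube-scheduler containers restarted below.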
Sep 12 23:56:34.076916 kubelet[3594]: I0912 23:56:34.076734 3594 scope.go:117] "RemoveContainer" containerID="cf60cc61a7421260f240e6795443ada83c058bce28976e1b52befe86b5a04383"
Sep 12 23:56:34.081768 containerd[2151]: time="2025-09-12T23:56:34.081708037Z" level=info msg="CreateContainer within sandbox \"ad2c23730bddc1a225e5e2a4d2fdfc1414ab9521b47858d8fd3f5ae7e38fe019\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 12 23:56:34.111105 containerd[2151]: time="2025-09-12T23:56:34.110951761Z" level=info msg="CreateContainer within sandbox \"ad2c23730bddc1a225e5e2a4d2fdfc1414ab9521b47858d8fd3f5ae7e38fe019\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"7b9f3cca0f238001a678c68a280ed3f4a4fe5e824424225250bd5f730aa48fd8\""
Sep 12 23:56:34.112753 containerd[2151]: time="2025-09-12T23:56:34.111825985Z" level=info msg="StartContainer for \"7b9f3cca0f238001a678c68a280ed3f4a4fe5e824424225250bd5f730aa48fd8\""
Sep 12 23:56:34.235586 containerd[2151]: time="2025-09-12T23:56:34.235412726Z" level=info msg="StartContainer for \"7b9f3cca0f238001a678c68a280ed3f4a4fe5e824424225250bd5f730aa48fd8\" returns successfully"
Sep 12 23:56:35.086045 kubelet[3594]: I0912 23:56:35.085721 3594 scope.go:117] "RemoveContainer" containerID="825e4105ba939c180acf363d17a7e00594a4bc255ec91cecfb59209fdaf32c33"
Sep 12 23:56:35.091269 containerd[2151]: time="2025-09-12T23:56:35.091204166Z" level=info msg="CreateContainer within sandbox \"6d6c2850d6d1f0b680c8a693f505db8f9c33be273f87c0fb48a8753a7aecb059\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 12 23:56:35.119215 containerd[2151]: time="2025-09-12T23:56:35.119123966Z" level=info msg="CreateContainer within sandbox \"6d6c2850d6d1f0b680c8a693f505db8f9c33be273f87c0fb48a8753a7aecb059\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"5966dbab63ecc7187906a7ab0359a5dbf6c61eb6fbd81b0c46ebb6dfe3384aff\""
Sep 12 23:56:35.121664 containerd[2151]: time="2025-09-12T23:56:35.120042674Z" level=info msg="StartContainer for \"5966dbab63ecc7187906a7ab0359a5dbf6c61eb6fbd81b0c46ebb6dfe3384aff\""
Sep 12 23:56:35.282287 containerd[2151]: time="2025-09-12T23:56:35.282228927Z" level=info msg="StartContainer for \"5966dbab63ecc7187906a7ab0359a5dbf6c61eb6fbd81b0c46ebb6dfe3384aff\" returns successfully"
Sep 12 23:56:36.110262 systemd[1]: run-containerd-runc-k8s.io-fbc9289343a3da298acc84831f0a533b56238c643225725c5553713b76a4878c-runc.uvqyRM.mount: Deactivated successfully.
Sep 12 23:56:38.851583 containerd[2151]: time="2025-09-12T23:56:38.851260809Z" level=info msg="shim disconnected" id=6999d02a0e37514bddf01cf6b7be62ffc7e2fd5a884249c40fdca1334a5283f5 namespace=k8s.io
Sep 12 23:56:38.851583 containerd[2151]: time="2025-09-12T23:56:38.851532225Z" level=warning msg="cleaning up after shim disconnected" id=6999d02a0e37514bddf01cf6b7be62ffc7e2fd5a884249c40fdca1334a5283f5 namespace=k8s.io
Sep 12 23:56:38.852440 containerd[2151]: time="2025-09-12T23:56:38.851556273Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 23:56:38.865030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6999d02a0e37514bddf01cf6b7be62ffc7e2fd5a884249c40fdca1334a5283f5-rootfs.mount: Deactivated successfully.
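Both recreations above carry Attempt:1 in the ContainerMetadata: the kubelet reuses the existing sandbox and bumps a per-container attempt counter, which (together with exit timestamps) feeds its crash-loop backoff. A toy version of that bookkeeping, assuming the well-known 10s base and 5m cap of kubelet's CrashLoopBackOff but otherwise invented:

    package main

    import (
        "fmt"
        "time"
    )

    // restartTracker is a toy version of per-container restart bookkeeping:
    // the delay doubles with each consecutive failure up to a ceiling, the
    // general shape of kubelet's CrashLoopBackOff.
    type restartTracker struct {
        attempt int
    }

    func (r *restartTracker) nextDelay() time.Duration {
        const base = 10 * time.Second
        const ceiling = 5 * time.Minute
        d := base << r.attempt // 10s, 20s, 40s, ...
        if d <= 0 || d > ceiling {
            d = ceiling // also guards against shift overflow
        }
        r.attempt++
        return d
    }

    func main() {
        var r restartTracker
        for i := 0; i < 6; i++ {
            fmt.Printf("Attempt:%d -> wait %v before CreateContainer\n", r.attempt, r.nextDelay())
        }
    }

The first restart in the log happens almost immediately, consistent with a counter still at its first step rather than a pod already deep in backoff.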
Sep 12 23:56:38.879558 kubelet[3594]: E0912 23:56:38.879356 3594 request.go:1255] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
Sep 12 23:56:38.879558 kubelet[3594]: E0912 23:56:38.879468 3594 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)"
Sep 12 23:56:39.108115 kubelet[3594]: I0912 23:56:39.107533 3594 scope.go:117] "RemoveContainer" containerID="6999d02a0e37514bddf01cf6b7be62ffc7e2fd5a884249c40fdca1334a5283f5"
Sep 12 23:56:39.110918 containerd[2151]: time="2025-09-12T23:56:39.110856966Z" level=info msg="CreateContainer within sandbox \"54238bfcc484f0401ae0797d4769b387b8277bb253f47f76270e0daab5250b08\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 12 23:56:39.134823 containerd[2151]: time="2025-09-12T23:56:39.134735142Z" level=info msg="CreateContainer within sandbox \"54238bfcc484f0401ae0797d4769b387b8277bb253f47f76270e0daab5250b08\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"04182395157c1d3372c0393d4b8aa8e4fa81019fc3f6488554a20fdc1da149b9\""
Sep 12 23:56:39.136328 containerd[2151]: time="2025-09-12T23:56:39.135987606Z" level=info msg="StartContainer for \"04182395157c1d3372c0393d4b8aa8e4fa81019fc3f6488554a20fdc1da149b9\""
Sep 12 23:56:39.307671 containerd[2151]: time="2025-09-12T23:56:39.306407671Z" level=info msg="StartContainer for \"04182395157c1d3372c0393d4b8aa8e4fa81019fc3f6488554a20fdc1da149b9\" returns successfully"
Sep 12 23:56:39.861571 systemd[1]: run-containerd-runc-k8s.io-04182395157c1d3372c0393d4b8aa8e4fa81019fc3f6488554a20fdc1da149b9-runc.umbtZp.mount: Deactivated successfully.
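The two kubelet errors at the head of this block show the node-lease renewal timing out while the control-plane containers were being restarted, and the message itself says what the client does about it: "Please retry." Lease renewal is a periodic call with a per-attempt deadline, so one missed beat is harmless as long as a later attempt lands before the apiserver's lease-duration window expires. The shape of that loop, with a hypothetical renewLease standing in for the client-go call (intervals shortened for the demo):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // renewLease stands in for the API call that updates the node's Lease
    // object; here it fails twice to mimic the timeouts in the log above.
    func renewLease(ctx context.Context, failuresLeft *int) error {
        _ = ctx // a real client would honor the per-attempt deadline
        if *failuresLeft > 0 {
            *failuresLeft--
            return errors.New("net/http: request canceled (Client.Timeout or context cancellation while reading body)")
        }
        return nil
    }

    func main() {
        failures := 2
        ticker := time.NewTicker(200 * time.Millisecond) // stand-in for the ~10s renew interval
        defer ticker.Stop()
        for range ticker.C {
            // Each attempt gets its own short deadline, so one slow apiserver
            // response costs a beat, not the whole renew loop.
            ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
            err := renewLease(ctx, &failures)
            cancel()
            if err != nil {
                fmt.Println("Failed to update lease, will retry:", err)
                continue
            }
            fmt.Println("lease renewed")
            return
        }
    }

In the log the very next renewal evidently succeeded: the node kept running, and kube-scheduler came back up within a second of being recreated.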