Sep 5 23:52:55.283888 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 5 23:52:55.283943 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025
Sep 5 23:52:55.283971 kernel: KASLR disabled due to lack of seed
Sep 5 23:52:55.283989 kernel: efi: EFI v2.7 by EDK II
Sep 5 23:52:55.284007 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18
Sep 5 23:52:55.284023 kernel: ACPI: Early table checksum verification disabled
Sep 5 23:52:55.284041 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 5 23:52:55.284056 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 5 23:52:55.284072 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 5 23:52:55.284089 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 5 23:52:55.284110 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 5 23:52:55.284127 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 5 23:52:55.284143 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 5 23:52:55.284159 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 5 23:52:55.284178 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 5 23:52:55.284198 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 5 23:52:55.284216 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 5 23:52:55.284232 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 5 23:52:55.284249 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 5 23:52:55.284266 kernel: printk: bootconsole [uart0] enabled
Sep 5 23:52:55.284283 kernel: NUMA: Failed to initialise from firmware
Sep 5 23:52:55.284300 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 5 23:52:55.284317 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 5 23:52:55.284334 kernel: Zone ranges:
Sep 5 23:52:55.284351 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 5 23:52:55.284367 kernel: DMA32 empty
Sep 5 23:52:55.284388 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 5 23:52:55.284405 kernel: Movable zone start for each node
Sep 5 23:52:55.284422 kernel: Early memory node ranges
Sep 5 23:52:55.284439 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 5 23:52:55.284456 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 5 23:52:55.284473 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 5 23:52:55.284489 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 5 23:52:55.284506 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 5 23:52:55.284522 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 5 23:52:55.284539 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 5 23:52:55.284556 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 5 23:52:55.284572 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 5 23:52:55.284593 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 5 23:52:55.285680 kernel: psci: probing for conduit method from ACPI.
Sep 5 23:52:55.285712 kernel: psci: PSCIv1.0 detected in firmware.
Sep 5 23:52:55.285731 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 5 23:52:55.285750 kernel: psci: Trusted OS migration not required
Sep 5 23:52:55.285773 kernel: psci: SMC Calling Convention v1.1
Sep 5 23:52:55.285794 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Sep 5 23:52:55.285813 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 5 23:52:55.285831 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 5 23:52:55.285850 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 5 23:52:55.285869 kernel: Detected PIPT I-cache on CPU0
Sep 5 23:52:55.285886 kernel: CPU features: detected: GIC system register CPU interface
Sep 5 23:52:55.285927 kernel: CPU features: detected: Spectre-v2
Sep 5 23:52:55.285968 kernel: CPU features: detected: Spectre-v3a
Sep 5 23:52:55.286000 kernel: CPU features: detected: Spectre-BHB
Sep 5 23:52:55.286020 kernel: CPU features: detected: ARM erratum 1742098
Sep 5 23:52:55.286046 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 5 23:52:55.286065 kernel: alternatives: applying boot alternatives
Sep 5 23:52:55.286085 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:52:55.286105 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 23:52:55.286123 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 23:52:55.286141 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 23:52:55.286159 kernel: Fallback order for Node 0: 0
Sep 5 23:52:55.286177 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 5 23:52:55.286195 kernel: Policy zone: Normal
Sep 5 23:52:55.286212 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 23:52:55.286230 kernel: software IO TLB: area num 2.
Sep 5 23:52:55.286253 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 5 23:52:55.286272 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved)
Sep 5 23:52:55.286290 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 5 23:52:55.286307 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 23:52:55.286326 kernel: rcu: RCU event tracing is enabled.
Sep 5 23:52:55.286344 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 5 23:52:55.286362 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 23:52:55.286380 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 23:52:55.286398 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
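Note: the kernel command line above carries Flatcar's boot configuration: root=LABEL=ROOT selects the root filesystem, mount.usr and verity.usr point the initrd at the dm-verity-protected /usr partition, and verity.usrhash pins its expected root hash. A minimal sketch of splitting such a command line into flags and key=value pairs (the field names come from the log; the helper itself is illustrative, not Flatcar's parser):

    # Illustrative parser for a kernel command line like the one logged above.
    def parse_cmdline(cmdline: str) -> dict:
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            # Bare flags without '=' (e.g. "earlycon") are stored as True.
            # Repeated keys (e.g. the two console= entries) keep the last value.
            params[key] = value if sep else True
        return params

    if __name__ == "__main__":
        with open("/proc/cmdline") as f:
            params = parse_cmdline(f.read())
        print(params.get("verity.usrhash"))  # expected /usr root hash
        print(params.get("mount.usr"))       # /dev/mapper/usr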
Sep 5 23:52:55.286416 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 5 23:52:55.286433 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 5 23:52:55.286455 kernel: GICv3: 96 SPIs implemented
Sep 5 23:52:55.286473 kernel: GICv3: 0 Extended SPIs implemented
Sep 5 23:52:55.286491 kernel: Root IRQ handler: gic_handle_irq
Sep 5 23:52:55.286509 kernel: GICv3: GICv3 features: 16 PPIs
Sep 5 23:52:55.286526 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 5 23:52:55.286544 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 5 23:52:55.286561 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 5 23:52:55.286580 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Sep 5 23:52:55.286623 kernel: GICv3: using LPI property table @0x00000004000d0000
Sep 5 23:52:55.286681 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 5 23:52:55.286700 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Sep 5 23:52:55.286719 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 23:52:55.286746 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 5 23:52:55.286764 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 5 23:52:55.286783 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 5 23:52:55.286801 kernel: Console: colour dummy device 80x25
Sep 5 23:52:55.286820 kernel: printk: console [tty1] enabled
Sep 5 23:52:55.286838 kernel: ACPI: Core revision 20230628
Sep 5 23:52:55.286856 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 5 23:52:55.286874 kernel: pid_max: default: 32768 minimum: 301
Sep 5 23:52:55.286892 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 23:52:55.286915 kernel: landlock: Up and running.
Sep 5 23:52:55.286934 kernel: SELinux: Initializing.
Sep 5 23:52:55.286952 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:52:55.286970 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 23:52:55.286988 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 5 23:52:55.287006 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 5 23:52:55.287025 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 23:52:55.287045 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 23:52:55.287064 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 5 23:52:55.287088 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 5 23:52:55.287107 kernel: Remapping and enabling EFI services.
Sep 5 23:52:55.287125 kernel: smp: Bringing up secondary CPUs ...
Sep 5 23:52:55.287143 kernel: Detected PIPT I-cache on CPU1
Sep 5 23:52:55.287161 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 5 23:52:55.287178 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Sep 5 23:52:55.287196 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 5 23:52:55.287214 kernel: smp: Brought up 1 node, 2 CPUs
Sep 5 23:52:55.287231 kernel: SMP: Total of 2 processors activated.
Sep 5 23:52:55.287253 kernel: CPU features: detected: 32-bit EL0 Support
Sep 5 23:52:55.287271 kernel: CPU features: detected: 32-bit EL1 Support
Sep 5 23:52:55.287289 kernel: CPU features: detected: CRC32 instructions
Sep 5 23:52:55.287319 kernel: CPU: All CPU(s) started at EL1
Sep 5 23:52:55.287342 kernel: alternatives: applying system-wide alternatives
Sep 5 23:52:55.287360 kernel: devtmpfs: initialized
Sep 5 23:52:55.287380 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 23:52:55.287399 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 5 23:52:55.287447 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 23:52:55.287499 kernel: SMBIOS 3.0.0 present.
Sep 5 23:52:55.287532 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 5 23:52:55.287552 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 23:52:55.287573 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 5 23:52:55.287593 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 5 23:52:55.288717 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 5 23:52:55.288740 kernel: audit: initializing netlink subsys (disabled)
Sep 5 23:52:55.288759 kernel: audit: type=2000 audit(0.288:1): state=initialized audit_enabled=0 res=1
Sep 5 23:52:55.288788 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 23:52:55.288807 kernel: cpuidle: using governor menu
Sep 5 23:52:55.288826 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 5 23:52:55.288844 kernel: ASID allocator initialised with 65536 entries
Sep 5 23:52:55.288863 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 23:52:55.288882 kernel: Serial: AMBA PL011 UART driver
Sep 5 23:52:55.288900 kernel: Modules: 17488 pages in range for non-PLT usage
Sep 5 23:52:55.288919 kernel: Modules: 509008 pages in range for PLT usage
Sep 5 23:52:55.288938 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 23:52:55.288961 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 23:52:55.288980 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 5 23:52:55.288998 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 5 23:52:55.289017 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 23:52:55.289035 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 23:52:55.289054 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 5 23:52:55.289073 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 5 23:52:55.289091 kernel: ACPI: Added _OSI(Module Device)
Sep 5 23:52:55.289110 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 23:52:55.289133 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 23:52:55.289152 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 23:52:55.289171 kernel: ACPI: Interpreter enabled
Sep 5 23:52:55.289189 kernel: ACPI: Using GIC for interrupt routing
Sep 5 23:52:55.289208 kernel: ACPI: MCFG table detected, 1 entries
Sep 5 23:52:55.289226 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 5 23:52:55.289549 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 23:52:55.289913 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 5 23:52:55.292686 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 5 23:52:55.292967 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 5 23:52:55.293175 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 5 23:52:55.293202 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 5 23:52:55.293223 kernel: acpiphp: Slot [1] registered
Sep 5 23:52:55.293242 kernel: acpiphp: Slot [2] registered
Sep 5 23:52:55.293261 kernel: acpiphp: Slot [3] registered
Sep 5 23:52:55.293279 kernel: acpiphp: Slot [4] registered
Sep 5 23:52:55.293308 kernel: acpiphp: Slot [5] registered
Sep 5 23:52:55.293327 kernel: acpiphp: Slot [6] registered
Sep 5 23:52:55.293345 kernel: acpiphp: Slot [7] registered
Sep 5 23:52:55.293363 kernel: acpiphp: Slot [8] registered
Sep 5 23:52:55.293382 kernel: acpiphp: Slot [9] registered
Sep 5 23:52:55.293401 kernel: acpiphp: Slot [10] registered
Sep 5 23:52:55.293419 kernel: acpiphp: Slot [11] registered
Sep 5 23:52:55.293437 kernel: acpiphp: Slot [12] registered
Sep 5 23:52:55.293456 kernel: acpiphp: Slot [13] registered
Sep 5 23:52:55.293474 kernel: acpiphp: Slot [14] registered
Sep 5 23:52:55.293498 kernel: acpiphp: Slot [15] registered
Sep 5 23:52:55.293516 kernel: acpiphp: Slot [16] registered
Sep 5 23:52:55.293535 kernel: acpiphp: Slot [17] registered
Sep 5 23:52:55.293553 kernel: acpiphp: Slot [18] registered
Sep 5 23:52:55.293572 kernel: acpiphp: Slot [19] registered
Sep 5 23:52:55.293638 kernel: acpiphp: Slot [20] registered
Sep 5 23:52:55.293665 kernel: acpiphp: Slot [21] registered
Sep 5 23:52:55.293685 kernel: acpiphp: Slot [22] registered
Sep 5 23:52:55.293704 kernel: acpiphp: Slot [23] registered
Sep 5 23:52:55.293729 kernel: acpiphp: Slot [24] registered
Sep 5 23:52:55.293748 kernel: acpiphp: Slot [25] registered
Sep 5 23:52:55.293766 kernel: acpiphp: Slot [26] registered
Sep 5 23:52:55.293785 kernel: acpiphp: Slot [27] registered
Sep 5 23:52:55.293804 kernel: acpiphp: Slot [28] registered
Sep 5 23:52:55.293822 kernel: acpiphp: Slot [29] registered
Sep 5 23:52:55.293841 kernel: acpiphp: Slot [30] registered
Sep 5 23:52:55.293859 kernel: acpiphp: Slot [31] registered
Sep 5 23:52:55.293878 kernel: PCI host bridge to bus 0000:00
Sep 5 23:52:55.294105 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 5 23:52:55.294303 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 5 23:52:55.294490 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 5 23:52:55.299342 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 5 23:52:55.299664 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 5 23:52:55.299917 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 5 23:52:55.300127 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 5 23:52:55.300356 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 5 23:52:55.300560 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 5 23:52:55.302950 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 5 23:52:55.303213 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 5 23:52:55.303441 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 5 23:52:55.303709 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 5 23:52:55.303932 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 5 23:52:55.304135 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 5 23:52:55.304337 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 5 23:52:55.304564 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 5 23:52:55.307920 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 5 23:52:55.308156 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 5 23:52:55.308379 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 5 23:52:55.308576 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 5 23:52:55.308826 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 5 23:52:55.309032 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 5 23:52:55.309059 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 5 23:52:55.309079 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 5 23:52:55.309098 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 5 23:52:55.309117 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 5 23:52:55.309136 kernel: iommu: Default domain type: Translated
Sep 5 23:52:55.309155 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 5 23:52:55.309185 kernel: efivars: Registered efivars operations
Sep 5 23:52:55.309204 kernel: vgaarb: loaded
Sep 5 23:52:55.309223 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 5 23:52:55.309242 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 23:52:55.309261 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 23:52:55.309280 kernel: pnp: PnP ACPI init
Sep 5 23:52:55.309524 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 5 23:52:55.309555 kernel: pnp: PnP ACPI: found 1 devices
Sep 5 23:52:55.309582 kernel: NET: Registered PF_INET protocol family
Sep 5 23:52:55.311727 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 23:52:55.311759 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 23:52:55.311779 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 23:52:55.311799 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 23:52:55.311820 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 23:52:55.311840 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 23:52:55.311859 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:52:55.311880 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 23:52:55.311910 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 23:52:55.311929 kernel: PCI: CLS 0 bytes, default 64
Sep 5 23:52:55.311948 kernel: kvm [1]: HYP mode not available
Sep 5 23:52:55.311966 kernel: Initialise system trusted keyrings
Sep 5 23:52:55.311986 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 23:52:55.312004 kernel: Key type asymmetric registered
Sep 5 23:52:55.312023 kernel: Asymmetric key parser 'x509' registered
Sep 5 23:52:55.312043 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 5 23:52:55.312061 kernel: io scheduler mq-deadline registered
Sep 5 23:52:55.312084 kernel: io scheduler kyber registered
Sep 5 23:52:55.312103 kernel: io scheduler bfq registered
Sep 5 23:52:55.312383 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 5 23:52:55.312414 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 5 23:52:55.312433 kernel: ACPI: button: Power Button [PWRB]
Sep 5 23:52:55.312452 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 5 23:52:55.312471 kernel: ACPI: button: Sleep Button [SLPB]
Sep 5 23:52:55.312490 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 23:52:55.312516 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 5 23:52:55.314847 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 5 23:52:55.314896 kernel: printk: console [ttyS0] disabled
Sep 5 23:52:55.314917 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 5 23:52:55.314937 kernel: printk: console [ttyS0] enabled
Sep 5 23:52:55.314957 kernel: printk: bootconsole [uart0] disabled
Sep 5 23:52:55.314976 kernel: thunder_xcv, ver 1.0
Sep 5 23:52:55.314995 kernel: thunder_bgx, ver 1.0
Sep 5 23:52:55.315015 kernel: nicpf, ver 1.0
Sep 5 23:52:55.315046 kernel: nicvf, ver 1.0
Sep 5 23:52:55.315305 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 5 23:52:55.315514 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T23:52:54 UTC (1757116374)
Sep 5 23:52:55.315544 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 5 23:52:55.315565 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 5 23:52:55.315586 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 5 23:52:55.317345 kernel: watchdog: Hard watchdog permanently disabled
Sep 5 23:52:55.317379 kernel: NET: Registered PF_INET6 protocol family
Sep 5 23:52:55.317411 kernel: Segment Routing with IPv6
Sep 5 23:52:55.317431 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 23:52:55.317452 kernel: NET: Registered PF_PACKET protocol family
Sep 5 23:52:55.317471 kernel: Key type dns_resolver registered
Sep 5 23:52:55.317491 kernel: registered taskstats version 1
Sep 5 23:52:55.317510 kernel: Loading compiled-in X.509 certificates
Sep 5 23:52:55.317530 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20'
Sep 5 23:52:55.317550 kernel: Key type .fscrypt registered
Sep 5 23:52:55.317569 kernel: Key type fscrypt-provisioning registered
Sep 5 23:52:55.318065 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 23:52:55.318432 kernel: ima: Allocated hash algorithm: sha1
Sep 5 23:52:55.319072 kernel: ima: No architecture policies found
Sep 5 23:52:55.319099 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 5 23:52:55.319118 kernel: clk: Disabling unused clocks
Sep 5 23:52:55.319137 kernel: Freeing unused kernel memory: 39424K
Sep 5 23:52:55.319156 kernel: Run /init as init process
Sep 5 23:52:55.319174 kernel: with arguments:
Sep 5 23:52:55.319193 kernel: /init
Sep 5 23:52:55.319211 kernel: with environment:
Sep 5 23:52:55.319239 kernel: HOME=/
Sep 5 23:52:55.319258 kernel: TERM=linux
Sep 5 23:52:55.319276 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 23:52:55.319301 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 23:52:55.319325 systemd[1]: Detected virtualization amazon.
Sep 5 23:52:55.319346 systemd[1]: Detected architecture arm64.
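Note: the rtc-efi line above logs the same instant twice, once as an ISO timestamp and once as a Unix epoch (1757116374). A quick standard-library check that the two agree:

    from datetime import datetime, timezone

    # The epoch value printed by rtc-efi in the log above.
    rtc_epoch = 1757116374
    print(datetime.fromtimestamp(rtc_epoch, tz=timezone.utc).isoformat())
    # -> 2025-09-05T23:52:54+00:00, matching "setting system clock to 2025-09-05T23:52:54 UTC"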
Sep 5 23:52:55.319366 systemd[1]: Running in initrd.
Sep 5 23:52:55.319390 systemd[1]: No hostname configured, using default hostname.
Sep 5 23:52:55.319410 systemd[1]: Hostname set to .
Sep 5 23:52:55.319431 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 23:52:55.319451 systemd[1]: Queued start job for default target initrd.target.
Sep 5 23:52:55.319471 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 23:52:55.319492 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 23:52:55.319513 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 23:52:55.319534 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 23:52:55.319559 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 23:52:55.319581 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 23:52:55.319624 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 23:52:55.319649 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 23:52:55.319670 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 23:52:55.319690 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 23:52:55.319710 systemd[1]: Reached target paths.target - Path Units.
Sep 5 23:52:55.319737 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 23:52:55.319757 systemd[1]: Reached target swap.target - Swaps.
Sep 5 23:52:55.319777 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 23:52:55.319797 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 23:52:55.319817 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 23:52:55.319838 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 23:52:55.319858 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 23:52:55.319879 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 23:52:55.319899 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 23:52:55.319924 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 23:52:55.319944 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 23:52:55.319965 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 23:52:55.319986 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 23:52:55.320006 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 23:52:55.320027 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 23:52:55.320047 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 23:52:55.320068 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 23:52:55.320094 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:52:55.320115 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 23:52:55.320135 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 23:52:55.320155 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 23:52:55.320177 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 23:52:55.320203 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:52:55.320281 systemd-journald[251]: Collecting audit messages is disabled.
Sep 5 23:52:55.320330 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:52:55.320351 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 23:52:55.320376 kernel: Bridge firewalling registered
Sep 5 23:52:55.320397 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 23:52:55.320418 systemd-journald[251]: Journal started
Sep 5 23:52:55.320455 systemd-journald[251]: Runtime Journal (/run/log/journal/ec210d88bc9698a12093c482747d761a) is 8.0M, max 75.3M, 67.3M free.
Sep 5 23:52:55.283129 systemd-modules-load[252]: Inserted module 'overlay'
Sep 5 23:52:55.314014 systemd-modules-load[252]: Inserted module 'br_netfilter'
Sep 5 23:52:55.332951 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 23:52:55.333900 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 23:52:55.349046 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 23:52:55.354870 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 23:52:55.372905 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 23:52:55.388444 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:52:55.402068 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 23:52:55.410722 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 23:52:55.421512 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 23:52:55.441670 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 23:52:55.455025 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 23:52:55.479643 dracut-cmdline[279]: dracut-dracut-053
Sep 5 23:52:55.486675 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 5 23:52:55.534492 systemd-resolved[288]: Positive Trust Anchors:
Sep 5 23:52:55.536558 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 23:52:55.536655 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 23:52:55.660633 kernel: SCSI subsystem initialized
Sep 5 23:52:55.667641 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 23:52:55.681647 kernel: iscsi: registered transport (tcp)
Sep 5 23:52:55.704101 kernel: iscsi: registered transport (qla4xxx)
Sep 5 23:52:55.704176 kernel: QLogic iSCSI HBA Driver
Sep 5 23:52:55.777646 kernel: random: crng init done
Sep 5 23:52:55.778260 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 5 23:52:55.782513 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 23:52:55.785290 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 23:52:55.820693 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 23:52:55.831894 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 23:52:55.869644 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 23:52:55.872980 kernel: device-mapper: uevent: version 1.0.3
Sep 5 23:52:55.873041 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 5 23:52:55.941687 kernel: raid6: neonx8 gen() 6635 MB/s
Sep 5 23:52:55.958660 kernel: raid6: neonx4 gen() 6430 MB/s
Sep 5 23:52:55.975645 kernel: raid6: neonx2 gen() 5361 MB/s
Sep 5 23:52:55.992654 kernel: raid6: neonx1 gen() 3919 MB/s
Sep 5 23:52:56.009646 kernel: raid6: int64x8 gen() 3793 MB/s
Sep 5 23:52:56.026652 kernel: raid6: int64x4 gen() 3681 MB/s
Sep 5 23:52:56.043649 kernel: raid6: int64x2 gen() 3552 MB/s
Sep 5 23:52:56.061645 kernel: raid6: int64x1 gen() 2768 MB/s
Sep 5 23:52:56.061723 kernel: raid6: using algorithm neonx8 gen() 6635 MB/s
Sep 5 23:52:56.080648 kernel: raid6: .... xor() 4918 MB/s, rmw enabled
Sep 5 23:52:56.080724 kernel: raid6: using neon recovery algorithm
Sep 5 23:52:56.088646 kernel: xor: measuring software checksum speed
Sep 5 23:52:56.090959 kernel: 8regs : 10257 MB/sec
Sep 5 23:52:56.091001 kernel: 32regs : 11952 MB/sec
Sep 5 23:52:56.092224 kernel: arm64_neon : 9564 MB/sec
Sep 5 23:52:56.092258 kernel: xor: using function: 32regs (11952 MB/sec)
Sep 5 23:52:56.178658 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 23:52:56.199327 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 23:52:56.215981 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 23:52:56.252985 systemd-udevd[469]: Using default interface naming scheme 'v255'.
Sep 5 23:52:56.261287 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 23:52:56.291011 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 23:52:56.315903 dracut-pre-trigger[483]: rd.md=0: removing MD RAID activation
Sep 5 23:52:56.376899 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 23:52:56.389941 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 23:52:56.508994 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 23:52:56.531014 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 23:52:56.599916 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 23:52:56.619187 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 23:52:56.627039 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 23:52:56.629941 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 23:52:56.646949 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 23:52:56.691291 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 23:52:56.746427 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 5 23:52:56.746490 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 5 23:52:56.771038 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 5 23:52:56.771105 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 5 23:52:56.771427 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 5 23:52:56.774633 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 5 23:52:56.778542 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 23:52:56.778731 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 23:52:56.786688 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:52:56.800875 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:0e:55:f5:8e:f3
Sep 5 23:52:56.801209 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 5 23:52:56.793806 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 23:52:56.793935 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:52:56.794144 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:52:56.816310 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 23:52:56.816349 kernel: GPT:9289727 != 16777215
Sep 5 23:52:56.816374 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 23:52:56.817196 kernel: GPT:9289727 != 16777215
Sep 5 23:52:56.818380 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 23:52:56.819383 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 5 23:52:56.819953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 23:52:56.830570 (udev-worker)[521]: Network interface NamePolicy= disabled on kernel command line.
Sep 5 23:52:56.856003 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 23:52:56.867973 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 23:52:56.917523 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (518)
Sep 5 23:52:56.931936 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
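Note: the GPT warnings above are expected on first boot: the disk image was built for a 9289727-sector disk, but the EBS volume is 16777215 sectors, so the backup GPT header is no longer at the true end of the device. The initrd repairs this itself before resizing (see the "Primary Header is updated" lines from disk-uuid below). A hedged sketch of the equivalent manual repair with sgdisk; the device name is an assumption taken from this log, and this should only ever be run against a disk you intend to modify:

    import subprocess

    DISK = "/dev/nvme0n1"  # assumption: the EBS root disk seen in the log

    # Relocate the backup GPT header and partition entries to the end of
    # the (now larger) disk, which is the repair the warnings ask for.
    subprocess.run(["sgdisk", "--move-second-header", DISK], check=True)
    # Re-verify the table; prints any remaining GPT problems.
    subprocess.run(["sgdisk", "--verify", DISK], check=True)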
Sep 5 23:52:56.975708 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (524)
Sep 5 23:52:57.015585 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 5 23:52:57.036484 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 5 23:52:57.098347 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 5 23:52:57.115463 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 5 23:52:57.118224 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 5 23:52:57.138983 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 23:52:57.148772 disk-uuid[663]: Primary Header is updated.
Sep 5 23:52:57.148772 disk-uuid[663]: Secondary Entries is updated.
Sep 5 23:52:57.148772 disk-uuid[663]: Secondary Header is updated.
Sep 5 23:52:57.161765 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 5 23:52:57.168740 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 5 23:52:57.173626 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 5 23:52:58.181635 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 5 23:52:58.187462 disk-uuid[665]: The operation has completed successfully.
Sep 5 23:52:58.398862 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 23:52:58.399093 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 23:52:58.425956 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 23:52:58.449460 sh[1007]: Success
Sep 5 23:52:58.476646 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 5 23:52:58.600756 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 23:52:58.605762 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 23:52:58.619003 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 23:52:58.648739 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e
Sep 5 23:52:58.648821 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:52:58.648850 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 5 23:52:58.652055 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 23:52:58.652132 kernel: BTRFS info (device dm-0): using free space tree
Sep 5 23:52:58.762653 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 5 23:52:58.796291 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 23:52:58.800970 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 23:52:58.811922 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 23:52:58.817888 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
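Note: verity-setup.service is what produces /dev/mapper/usr above: the kernel verifies every block of the read-only /usr partition against a Merkle tree whose root hash was pinned on the kernel command line (verity.usrhash=ac831c...). A minimal sketch of opening a dm-verity mapping with veritysetup; the device names come from this log, but the hash-device layout (and any offsets Flatcar's unit actually passes) is an assumption here:

    import subprocess

    # USR-A partition, identified by the PARTUUID from the kernel command line.
    data_dev = "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"
    hash_dev = data_dev  # assumption: hash tree stored on the same partition
    root_hash = "ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3"

    # Map the verified device as /dev/mapper/usr; any block whose hash does
    # not chain up to root_hash will fail to read.
    subprocess.run(
        ["veritysetup", "open", data_dev, "usr", hash_dev, root_hash],
        check=True,
    )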
Sep 5 23:52:58.855883 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:52:58.855966 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:52:58.858324 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 5 23:52:58.863642 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 5 23:52:58.882888 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 23:52:58.887644 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:52:58.898010 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 23:52:58.911985 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 23:52:59.013451 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 23:52:59.025935 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 23:52:59.094743 systemd-networkd[1199]: lo: Link UP
Sep 5 23:52:59.094759 systemd-networkd[1199]: lo: Gained carrier
Sep 5 23:52:59.099616 systemd-networkd[1199]: Enumeration completed
Sep 5 23:52:59.101360 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:52:59.101367 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 23:52:59.111539 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 23:52:59.114219 systemd[1]: Reached target network.target - Network.
Sep 5 23:52:59.117577 systemd-networkd[1199]: eth0: Link UP
Sep 5 23:52:59.117587 systemd-networkd[1199]: eth0: Gained carrier
Sep 5 23:52:59.117626 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 23:52:59.143729 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.23.98/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 5 23:52:59.335188 ignition[1124]: Ignition 2.19.0
Sep 5 23:52:59.335751 ignition[1124]: Stage: fetch-offline
Sep 5 23:52:59.338157 ignition[1124]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:52:59.338196 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 5 23:52:59.341242 ignition[1124]: Ignition finished successfully
Sep 5 23:52:59.347851 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 23:52:59.358035 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 5 23:52:59.386985 ignition[1209]: Ignition 2.19.0
Sep 5 23:52:59.387513 ignition[1209]: Stage: fetch
Sep 5 23:52:59.388634 ignition[1209]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:52:59.388666 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 5 23:52:59.388831 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 5 23:52:59.407491 ignition[1209]: PUT result: OK
Sep 5 23:52:59.410900 ignition[1209]: parsed url from cmdline: ""
Sep 5 23:52:59.410918 ignition[1209]: no config URL provided
Sep 5 23:52:59.410936 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 23:52:59.410965 ignition[1209]: no config at "/usr/lib/ignition/user.ign"
Sep 5 23:52:59.410998 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 5 23:52:59.415443 ignition[1209]: PUT result: OK
Sep 5 23:52:59.415561 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 5 23:52:59.420840 ignition[1209]: GET result: OK
Sep 5 23:52:59.421017 ignition[1209]: parsing config with SHA512: 0fb52c59b0ef8fe139546f67271c89496584d63409d513f2197c8a43c5c85c8700ccbf897192dbe375edd66d43be42700d268ec304a40fcc69a3bd870b03bcd6
Sep 5 23:52:59.432569 unknown[1209]: fetched base config from "system"
Sep 5 23:52:59.432860 unknown[1209]: fetched base config from "system"
Sep 5 23:52:59.433541 ignition[1209]: fetch: fetch complete
Sep 5 23:52:59.432875 unknown[1209]: fetched user config from "aws"
Sep 5 23:52:59.434287 ignition[1209]: fetch: fetch passed
Sep 5 23:52:59.439849 ignition[1209]: Ignition finished successfully
Sep 5 23:52:59.450693 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 5 23:52:59.462071 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 23:52:59.502388 ignition[1215]: Ignition 2.19.0
Sep 5 23:52:59.502428 ignition[1215]: Stage: kargs
Sep 5 23:52:59.504453 ignition[1215]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:52:59.504487 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 5 23:52:59.504725 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 5 23:52:59.513942 ignition[1215]: PUT result: OK
Sep 5 23:52:59.521084 ignition[1215]: kargs: kargs passed
Sep 5 23:52:59.521294 ignition[1215]: Ignition finished successfully
Sep 5 23:52:59.525941 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 23:52:59.548894 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 23:52:59.576095 ignition[1221]: Ignition 2.19.0
Sep 5 23:52:59.576133 ignition[1221]: Stage: disks
Sep 5 23:52:59.576993 ignition[1221]: no configs at "/usr/lib/ignition/base.d"
Sep 5 23:52:59.577026 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 5 23:52:59.577225 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 5 23:52:59.586036 ignition[1221]: PUT result: OK
Sep 5 23:52:59.593575 ignition[1221]: disks: disks passed
Sep 5 23:52:59.593765 ignition[1221]: Ignition finished successfully
Sep 5 23:52:59.601374 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 23:52:59.605731 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 23:52:59.612118 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 23:52:59.615069 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 23:52:59.617502 systemd[1]: Reached target sysinit.target - System Initialization.
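Note: each Ignition stage above talks to EC2 IMDSv2: it first PUTs to /latest/api/token to obtain a session token, then sends that token as a header on the user-data GET. A minimal standalone sketch of that flow using only the standard library (the endpoints are the ones in the log; the 21600-second TTL is just a common choice):

    import urllib.request

    IMDS = "http://169.254.169.254"

    # Step 1: PUT /latest/api/token with a TTL header to get a session token.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req, timeout=5).read().decode()

    # Step 2: GET the user data (the Ignition config) using the token.
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    user_data = urllib.request.urlopen(req, timeout=5).read()
    print(len(user_data), "bytes of user data")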
Sep 5 23:52:59.620268 systemd[1]: Reached target basic.target - Basic System.
Sep 5 23:52:59.638816 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 23:52:59.697738 systemd-fsck[1229]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 5 23:52:59.707262 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 23:52:59.721949 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 23:52:59.806641 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none.
Sep 5 23:52:59.808199 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 23:52:59.812450 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 23:52:59.828974 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:52:59.836839 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 23:52:59.841656 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 23:52:59.841786 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 23:52:59.841846 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 23:52:59.866656 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1248)
Sep 5 23:52:59.871481 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:52:59.871565 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:52:59.872888 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 5 23:52:59.880566 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 23:52:59.882807 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 5 23:52:59.893933 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 23:52:59.901212 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:53:00.226532 initrd-setup-root[1272]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 23:53:00.246518 initrd-setup-root[1279]: cut: /sysroot/etc/group: No such file or directory
Sep 5 23:53:00.255330 initrd-setup-root[1286]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 23:53:00.265046 initrd-setup-root[1293]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 23:53:00.575764 systemd-networkd[1199]: eth0: Gained IPv6LL
Sep 5 23:53:00.638482 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 23:53:00.651837 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 23:53:00.658979 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 23:53:00.678231 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 23:53:00.680836 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:53:00.732425 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 23:53:00.733347 ignition[1361]: INFO : Ignition 2.19.0
Sep 5 23:53:00.741567 ignition[1361]: INFO : Stage: mount
Sep 5 23:53:00.741567 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:53:00.741567 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 5 23:53:00.741567 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 5 23:53:00.756012 ignition[1361]: INFO : PUT result: OK
Sep 5 23:53:00.759930 ignition[1361]: INFO : mount: mount passed
Sep 5 23:53:00.766833 ignition[1361]: INFO : Ignition finished successfully
Sep 5 23:53:00.769324 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 23:53:00.783870 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 23:53:00.820043 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 23:53:00.841694 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1372)
Sep 5 23:53:00.845575 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 5 23:53:00.845680 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 23:53:00.845709 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 5 23:53:00.851643 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 5 23:53:00.856571 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 23:53:00.897794 ignition[1389]: INFO : Ignition 2.19.0
Sep 5 23:53:00.897794 ignition[1389]: INFO : Stage: files
Sep 5 23:53:00.902181 ignition[1389]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 23:53:00.902181 ignition[1389]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 5 23:53:00.902181 ignition[1389]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 5 23:53:00.911197 ignition[1389]: INFO : PUT result: OK
Sep 5 23:53:00.915377 ignition[1389]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 23:53:00.918727 ignition[1389]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 23:53:00.918727 ignition[1389]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 23:53:00.953339 ignition[1389]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 23:53:00.956931 ignition[1389]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 23:53:00.960896 unknown[1389]: wrote ssh authorized keys file for user: core
Sep 5 23:53:00.963911 ignition[1389]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 23:53:00.968327 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 5 23:53:00.968327 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 5 23:53:01.074638 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 23:53:01.228486 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 5 23:53:01.228486 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 23:53:01.236678 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 23:53:01.236678 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:53:01.236678 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 23:53:01.236678 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 23:53:01.236678 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 23:53:01.236678 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 23:53:01.236678 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 23:53:01.236678 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:53:01.236678 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 23:53:01.236678 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 23:53:01.276538 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 23:53:01.276538 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 23:53:01.276538 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 5 23:53:01.592681 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 5 23:53:02.004001 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 23:53:02.004001 ignition[1389]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 5 23:53:02.012094 ignition[1389]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 23:53:02.012094 ignition[1389]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 23:53:02.012094 ignition[1389]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 5 23:53:02.012094 ignition[1389]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 23:53:02.012094 ignition[1389]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 23:53:02.012094 ignition[1389]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:53:02.012094 ignition[1389]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 23:53:02.012094 ignition[1389]: INFO : files: files passed
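Note: the files stage above is driven by the user-provided Ignition config fetched earlier from IMDS. A hedged sketch of what a config producing some of these operations could look like, built as Ignition spec v3 JSON in Python; the paths mirror the log, while the SSH key and file contents are placeholders:

    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "passwd": {
            "users": [
                # Placeholder key; the real config carried the operator's key.
                {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
            ]
        },
        "storage": {
            "files": [
                {
                    "path": "/home/core/install.sh",
                    "mode": 0o755,  # serialized as decimal 493, per the spec
                    "contents": {"source": "data:,echo%20hello"},
                }
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw",
                }
            ],
        },
        "systemd": {
            "units": [{"name": "prepare-helm.service", "enabled": True}]
        },
    }
    print(json.dumps(config, indent=2))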
23:53:02.012094 ignition[1389]: INFO : Ignition finished successfully Sep 5 23:53:02.042210 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 5 23:53:02.052926 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 5 23:53:02.056876 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 23:53:02.075292 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 5 23:53:02.079503 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 23:53:02.094275 initrd-setup-root-after-ignition[1417]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:53:02.094275 initrd-setup-root-after-ignition[1417]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:53:02.101769 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 23:53:02.107572 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 23:53:02.110897 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 5 23:53:02.127022 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 23:53:02.178722 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 23:53:02.178929 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 23:53:02.182230 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 23:53:02.184729 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 23:53:02.187291 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 23:53:02.207627 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 23:53:02.240204 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:53:02.252953 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 23:53:02.294544 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:53:02.299975 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:53:02.303099 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 23:53:02.307864 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 23:53:02.308190 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 23:53:02.316003 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 23:53:02.316309 systemd[1]: Stopped target basic.target - Basic System. Sep 5 23:53:02.320543 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 23:53:02.321341 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 23:53:02.351580 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 23:53:02.357363 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 23:53:02.370029 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 23:53:02.376488 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 23:53:02.379483 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 23:53:02.388091 systemd[1]: Stopped target swap.target - Swaps. 
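Each Ignition stage above (mount, files, and the umount stage that follows) begins with a PUT to http://169.254.169.254/latest/api/token before touching any metadata; that is the IMDSv2 session-token handshake. A minimal sketch of the same exchange, assuming it runs on an EC2 instance; the 21600-second TTL is an illustrative choice, not a value taken from the log:

    # Minimal IMDSv2 handshake mirroring the "PUT .../latest/api/token:
    # attempt #1 ... PUT result: OK" entries in the Ignition log above.
    # Assumes an EC2 instance; the TTL value is an arbitrary example.
    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl: int = 21600) -> str:
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path: str, token: str) -> str:
        req = urllib.request.Request(
            f"{IMDS}{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        token = imds_token()
        # Same endpoint family coreos-metadata polls later in this log.
        print(imds_get("/2021-01-03/meta-data/instance-id", token))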
Sep 5 23:53:02.391847 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 23:53:02.394220 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 23:53:02.397590 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:53:02.408311 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:53:02.416688 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 23:53:02.420895 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:53:02.427375 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 23:53:02.427691 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 23:53:02.432413 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 23:53:02.432998 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 23:53:02.444697 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 23:53:02.445311 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 23:53:02.464160 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 23:53:02.469082 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 23:53:02.474813 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 23:53:02.475144 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:53:02.479693 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 23:53:02.479958 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 23:53:02.501644 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 23:53:02.503878 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 23:53:02.533262 ignition[1442]: INFO : Ignition 2.19.0 Sep 5 23:53:02.533262 ignition[1442]: INFO : Stage: umount Sep 5 23:53:02.533262 ignition[1442]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 23:53:02.533262 ignition[1442]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 5 23:53:02.533262 ignition[1442]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 5 23:53:02.545645 ignition[1442]: INFO : PUT result: OK Sep 5 23:53:02.552802 ignition[1442]: INFO : umount: umount passed Sep 5 23:53:02.554810 ignition[1442]: INFO : Ignition finished successfully Sep 5 23:53:02.557437 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 23:53:02.563455 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 23:53:02.563794 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 23:53:02.568561 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 23:53:02.568729 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 5 23:53:02.568948 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 23:53:02.569052 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 23:53:02.569198 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 5 23:53:02.569284 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 5 23:53:02.569407 systemd[1]: Stopped target network.target - Network. Sep 5 23:53:02.569475 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 23:53:02.569702 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Sep 5 23:53:02.596674 systemd[1]: Stopped target paths.target - Path Units. Sep 5 23:53:02.605618 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 23:53:02.610116 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:53:02.617015 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 23:53:02.619240 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 23:53:02.621833 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 23:53:02.621942 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 23:53:02.625988 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 23:53:02.626088 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 23:53:02.628528 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 23:53:02.628678 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 23:53:02.633696 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 23:53:02.633810 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 23:53:02.644219 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 23:53:02.647897 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 23:53:02.650763 systemd-networkd[1199]: eth0: DHCPv6 lease lost Sep 5 23:53:02.660430 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 23:53:02.660728 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 23:53:02.664747 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 23:53:02.664943 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 23:53:02.679359 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 23:53:02.679539 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:53:02.692046 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 23:53:02.692822 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 23:53:02.707811 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 23:53:02.713297 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 23:53:02.713437 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 23:53:02.737508 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:53:02.745149 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 23:53:02.749087 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 23:53:02.765313 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 23:53:02.765490 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:53:02.770818 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 23:53:02.770941 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 23:53:02.776132 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 23:53:02.776258 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:53:02.784488 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 23:53:02.785378 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:53:02.799520 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Sep 5 23:53:02.799715 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 23:53:02.803377 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 23:53:02.803460 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:53:02.809476 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 23:53:02.809666 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 23:53:02.814637 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 23:53:02.814761 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 23:53:02.817428 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 23:53:02.817565 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 23:53:02.838067 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 23:53:02.847000 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 23:53:02.847150 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:53:02.853841 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 5 23:53:02.853955 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 23:53:02.856996 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 23:53:02.857146 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:53:02.860257 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 23:53:02.860378 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:53:02.867976 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 23:53:02.868255 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 23:53:02.873145 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 23:53:02.873349 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 23:53:02.881914 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 23:53:02.903448 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 23:53:02.952169 systemd[1]: Switching root. Sep 5 23:53:03.000755 systemd-journald[251]: Journal stopped Sep 5 23:53:05.156651 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Sep 5 23:53:05.156791 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 23:53:05.156836 kernel: SELinux: policy capability open_perms=1 Sep 5 23:53:05.156877 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 23:53:05.156914 kernel: SELinux: policy capability always_check_network=0 Sep 5 23:53:05.156944 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 23:53:05.156974 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 23:53:05.157006 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 23:53:05.157035 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 23:53:05.157063 kernel: audit: type=1403 audit(1757116383.371:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 23:53:05.157097 systemd[1]: Successfully loaded SELinux policy in 66.067ms. Sep 5 23:53:05.157144 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.983ms. 
Sep 5 23:53:05.157180 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 23:53:05.157216 systemd[1]: Detected virtualization amazon. Sep 5 23:53:05.157248 systemd[1]: Detected architecture arm64. Sep 5 23:53:05.157277 systemd[1]: Detected first boot. Sep 5 23:53:05.157309 systemd[1]: Initializing machine ID from VM UUID. Sep 5 23:53:05.157342 zram_generator::config[1484]: No configuration found. Sep 5 23:53:05.157377 systemd[1]: Populated /etc with preset unit settings. Sep 5 23:53:05.157411 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 23:53:05.157441 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 5 23:53:05.157478 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 23:53:05.157513 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 23:53:05.157573 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 23:53:05.159696 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 23:53:05.159758 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 23:53:05.159796 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 23:53:05.159830 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 23:53:05.159877 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 23:53:05.159918 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 23:53:05.159953 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 23:53:05.159986 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 23:53:05.160019 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 23:53:05.160049 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 23:53:05.160084 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 23:53:05.160120 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 23:53:05.160151 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 5 23:53:05.160182 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 23:53:05.160215 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 23:53:05.160252 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 23:53:05.160285 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 23:53:05.160317 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 23:53:05.160347 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 23:53:05.160378 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 23:53:05.160410 systemd[1]: Reached target slices.target - Slice Units. Sep 5 23:53:05.160444 systemd[1]: Reached target swap.target - Swaps. 
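The "Detected virtualization amazon" line above comes from systemd's detector, which among other probes inspects the SMBIOS/DMI strings the firmware exposes. A rough sketch of the DMI portion only, assuming the usual sysfs paths; the real detector layers further checks (CPUID, device tree, hypervisor nodes) that are omitted here:

    # DMI-only approximation of systemd's "Detected virtualization amazon".
    # Assumes /sys/class/dmi/id is populated, as it is on EC2 instances
    # whose firmware provides SMBIOS tables (including arm64 guests).
    from pathlib import Path

    DMI_KEYS = ("sys_vendor", "board_vendor", "bios_vendor", "product_version")

    def dmi_virt() -> str:
        for key in DMI_KEYS:
            try:
                value = (Path("/sys/class/dmi/id") / key).read_text().strip()
            except OSError:
                continue
            if value.startswith("Amazon") or "amazon" in value.lower():
                return "amazon"
            if "QEMU" in value or "KVM" in value:
                return "qemu/kvm"
        return "none"

    print(dmi_virt())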
Sep 5 23:53:05.160480 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 23:53:05.160510 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 23:53:05.160539 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 23:53:05.160570 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 23:53:05.160657 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 23:53:05.160694 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 23:53:05.160724 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 23:53:05.160758 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 23:53:05.160791 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 23:53:05.160831 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 23:53:05.160865 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 23:53:05.160895 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 23:53:05.160928 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 23:53:05.160958 systemd[1]: Reached target machines.target - Containers. Sep 5 23:53:05.160989 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 23:53:05.161020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:53:05.161054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 23:53:05.161097 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 23:53:05.161135 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:53:05.161170 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:53:05.161202 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:53:05.161232 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 23:53:05.161262 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:53:05.161293 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 23:53:05.161324 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 23:53:05.161356 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 23:53:05.161390 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 23:53:05.161422 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 23:53:05.161451 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 23:53:05.161480 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 23:53:05.161511 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 23:53:05.161567 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 23:53:05.163651 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 23:53:05.163713 systemd[1]: verity-setup.service: Deactivated successfully. 
Sep 5 23:53:05.163744 systemd[1]: Stopped verity-setup.service. Sep 5 23:53:05.163781 kernel: loop: module loaded Sep 5 23:53:05.163812 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 23:53:05.163844 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 23:53:05.163876 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 23:53:05.163905 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 23:53:05.163938 kernel: ACPI: bus type drm_connector registered Sep 5 23:53:05.163968 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 23:53:05.164002 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 23:53:05.164032 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 23:53:05.164061 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 23:53:05.164091 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 23:53:05.164120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:53:05.164152 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:53:05.164182 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:53:05.164218 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:53:05.164248 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:53:05.164277 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:53:05.164306 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:53:05.164341 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:53:05.164377 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 23:53:05.164409 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 23:53:05.164441 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 23:53:05.164470 kernel: fuse: init (API version 7.39) Sep 5 23:53:05.164498 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 23:53:05.164579 systemd-journald[1562]: Collecting audit messages is disabled. Sep 5 23:53:05.164657 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 23:53:05.164690 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 23:53:05.164726 systemd-journald[1562]: Journal started Sep 5 23:53:05.164776 systemd-journald[1562]: Runtime Journal (/run/log/journal/ec210d88bc9698a12093c482747d761a) is 8.0M, max 75.3M, 67.3M free. Sep 5 23:53:04.485311 systemd[1]: Queued start job for default target multi-user.target. Sep 5 23:53:04.524170 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 5 23:53:04.525122 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 23:53:05.184633 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 23:53:05.184726 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:53:05.204645 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Sep 5 23:53:05.204753 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:53:05.233730 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 23:53:05.233847 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:53:05.245674 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 23:53:05.263646 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 23:53:05.270639 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 23:53:05.271818 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 23:53:05.273749 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 23:53:05.276870 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 23:53:05.279935 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 23:53:05.283210 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 23:53:05.287250 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 23:53:05.354694 kernel: loop0: detected capacity change from 0 to 52536 Sep 5 23:53:05.354690 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 23:53:05.362258 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 23:53:05.373395 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 23:53:05.377570 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 23:53:05.389866 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 23:53:05.417592 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 23:53:05.432721 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 23:53:05.429019 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 23:53:05.443059 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 23:53:05.447967 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 23:53:05.470654 systemd-tmpfiles[1587]: ACLs are not supported, ignoring. Sep 5 23:53:05.470688 systemd-tmpfiles[1587]: ACLs are not supported, ignoring. Sep 5 23:53:05.498691 kernel: loop1: detected capacity change from 0 to 211168 Sep 5 23:53:05.507530 systemd-journald[1562]: Time spent on flushing to /var/log/journal/ec210d88bc9698a12093c482747d761a is 134.949ms for 916 entries. Sep 5 23:53:05.507530 systemd-journald[1562]: System Journal (/var/log/journal/ec210d88bc9698a12093c482747d761a) is 8.0M, max 195.6M, 187.6M free. Sep 5 23:53:05.676848 systemd-journald[1562]: Received client request to flush runtime journal. Sep 5 23:53:05.509419 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 23:53:05.527135 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 23:53:05.537989 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 23:53:05.547294 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Sep 5 23:53:05.601893 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 23:53:05.616981 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 23:53:05.634120 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 23:53:05.683574 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 23:53:05.692781 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 23:53:05.700658 kernel: loop2: detected capacity change from 0 to 114328 Sep 5 23:53:05.711147 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 23:53:05.723668 udevadm[1631]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 5 23:53:05.748721 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. Sep 5 23:53:05.748763 systemd-tmpfiles[1636]: ACLs are not supported, ignoring. Sep 5 23:53:05.757717 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 23:53:05.844126 kernel: loop3: detected capacity change from 0 to 114432 Sep 5 23:53:05.966676 kernel: loop4: detected capacity change from 0 to 52536 Sep 5 23:53:05.994864 kernel: loop5: detected capacity change from 0 to 211168 Sep 5 23:53:06.032884 kernel: loop6: detected capacity change from 0 to 114328 Sep 5 23:53:06.054085 kernel: loop7: detected capacity change from 0 to 114432 Sep 5 23:53:06.072825 (sd-merge)[1641]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 5 23:53:06.073896 (sd-merge)[1641]: Merged extensions into '/usr'. Sep 5 23:53:06.086886 systemd[1]: Reloading requested from client PID 1586 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 23:53:06.086921 systemd[1]: Reloading... Sep 5 23:53:06.299732 ldconfig[1579]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 23:53:06.324650 zram_generator::config[1668]: No configuration found. Sep 5 23:53:06.609571 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:53:06.734307 systemd[1]: Reloading finished in 646 ms. Sep 5 23:53:06.774577 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 23:53:06.780022 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 23:53:06.783779 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 23:53:06.801999 systemd[1]: Starting ensure-sysext.service... Sep 5 23:53:06.817837 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 23:53:06.825031 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 23:53:06.844829 systemd[1]: Reloading requested from client PID 1720 ('systemctl') (unit ensure-sysext.service)... Sep 5 23:53:06.844857 systemd[1]: Reloading... Sep 5 23:53:06.870004 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 23:53:06.870782 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
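The loop4 through loop7 devices and the (sd-merge) messages above are systemd-sysext at work: extension images are attached as loop devices, merged as an overlay onto /usr, and a reload of systemd follows. A sketch of just the discovery half, assuming the standard search directories; the merge itself (image dissection, verity, overlayfs mounting) is far more involved and omitted:

    # Discovery step behind the "(sd-merge) Using extensions ..." line:
    # list the sysext images systemd-sysext would consider. Standard
    # search paths assumed; the actual /usr overlay merge is omitted.
    from pathlib import Path

    SEARCH = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def find_extensions() -> dict:
        found = {}
        for directory in map(Path, SEARCH):
            if not directory.is_dir():
                continue
            for entry in sorted(directory.iterdir()):
                # Either a .raw image (possibly a symlink, like the
                # kubernetes.raw link written during the Ignition files
                # stage) or a plain directory counts as an extension.
                if entry.suffix == ".raw" or entry.is_dir():
                    found.setdefault(entry.stem, entry.resolve())
        return found

    for name, target in find_extensions().items():
        print(f"{name}: {target}")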
Sep 5 23:53:06.874861 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 23:53:06.875438 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Sep 5 23:53:06.875622 systemd-tmpfiles[1721]: ACLs are not supported, ignoring. Sep 5 23:53:06.889064 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:53:06.889096 systemd-tmpfiles[1721]: Skipping /boot Sep 5 23:53:06.938333 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 23:53:06.938371 systemd-tmpfiles[1721]: Skipping /boot Sep 5 23:53:06.956544 systemd-udevd[1722]: Using default interface naming scheme 'v255'. Sep 5 23:53:07.098631 zram_generator::config[1759]: No configuration found. Sep 5 23:53:07.189760 (udev-worker)[1765]: Network interface NamePolicy= disabled on kernel command line. Sep 5 23:53:07.539483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:53:07.559708 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1761) Sep 5 23:53:07.733137 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 5 23:53:07.733665 systemd[1]: Reloading finished in 888 ms. Sep 5 23:53:07.767143 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 23:53:07.775653 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 23:53:07.815450 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 23:53:07.838118 systemd[1]: Finished ensure-sysext.service. Sep 5 23:53:07.871153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 5 23:53:07.881939 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:53:07.904001 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 23:53:07.913087 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 23:53:07.920540 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 23:53:07.926206 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 23:53:07.939011 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 23:53:07.945938 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 23:53:07.957990 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 23:53:07.962045 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 23:53:07.977914 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 23:53:07.993126 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 23:53:08.001986 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 23:53:08.012426 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 23:53:08.016019 systemd[1]: Reached target time-set.target - System Time Set. 
Sep 5 23:53:08.024726 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 23:53:08.032915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 23:53:08.037347 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 23:53:08.039698 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 23:53:08.042876 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 23:53:08.044708 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 23:53:08.048294 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 23:53:08.049375 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 23:53:08.061272 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 23:53:08.076769 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 23:53:08.109264 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 23:53:08.113373 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 23:53:08.115405 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 23:53:08.121308 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 23:53:08.123793 lvm[1926]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:53:08.128720 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 23:53:08.184841 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 23:53:08.209928 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 23:53:08.220416 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 23:53:08.223749 augenrules[1954]: No rules Sep 5 23:53:08.231987 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:53:08.257703 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 23:53:08.259336 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 23:53:08.267941 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 23:53:08.295654 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 23:53:08.298298 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 23:53:08.307670 lvm[1964]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 23:53:08.317832 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 23:53:08.373645 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 23:53:08.400337 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 23:53:08.466443 systemd-networkd[1935]: lo: Link UP Sep 5 23:53:08.467051 systemd-networkd[1935]: lo: Gained carrier Sep 5 23:53:08.468212 systemd-resolved[1937]: Positive Trust Anchors: Sep 5 23:53:08.468258 systemd-resolved[1937]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 23:53:08.468323 systemd-resolved[1937]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 23:53:08.471277 systemd-networkd[1935]: Enumeration completed Sep 5 23:53:08.471496 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 23:53:08.477894 systemd-networkd[1935]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:53:08.477912 systemd-networkd[1935]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 23:53:08.480581 systemd-networkd[1935]: eth0: Link UP Sep 5 23:53:08.481252 systemd-networkd[1935]: eth0: Gained carrier Sep 5 23:53:08.481448 systemd-networkd[1935]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 23:53:08.482963 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 23:53:08.490759 systemd-networkd[1935]: eth0: DHCPv4 address 172.31.23.98/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 5 23:53:08.495982 systemd-resolved[1937]: Defaulting to hostname 'linux'. Sep 5 23:53:08.511504 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 23:53:08.515022 systemd[1]: Reached target network.target - Network. Sep 5 23:53:08.517020 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 23:53:08.522201 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 23:53:08.524979 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 23:53:08.527725 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 23:53:08.531286 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 23:53:08.533947 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 23:53:08.536667 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 23:53:08.539708 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 23:53:08.539758 systemd[1]: Reached target paths.target - Path Units. Sep 5 23:53:08.541702 systemd[1]: Reached target timers.target - Timer Units. Sep 5 23:53:08.544931 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 23:53:08.549977 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 23:53:08.561058 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 23:53:08.564407 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 23:53:08.567054 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 23:53:08.569284 systemd[1]: Reached target basic.target - Basic System. 
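The positive trust anchor systemd-resolved prints above is the root zone's KSK-2017 DS record: key tag 20326, algorithm 8 (RSASHA256), digest type 2 (SHA-256). Given the matching root DNSKEY, obtained out of band (for example with dig . DNSKEY +short), both fields can be recomputed; a sketch of that verification, with the public key passed in rather than hard-coded:

    # Recompute the root DS record that systemd-resolved logs as its
    # positive trust anchor. The DNSKEY public key must be supplied
    # (e.g. the 257 3 8 key from `dig . DNSKEY +short`).
    import base64
    import hashlib
    import struct
    import sys

    def dnskey_rdata(flags: int, protocol: int, algorithm: int, key_b64: str) -> bytes:
        return struct.pack("!HBB", flags, protocol, algorithm) + base64.b64decode(key_b64)

    def key_tag(rdata: bytes) -> int:
        # RFC 4034 appendix B checksum over the DNSKEY RDATA.
        acc = 0
        for i, byte in enumerate(rdata):
            acc += byte << 8 if i % 2 == 0 else byte
        acc += (acc >> 16) & 0xFFFF
        return acc & 0xFFFF

    def ds_digest_sha256(owner_wire: bytes, rdata: bytes) -> str:
        # Digest type 2: SHA-256 over owner name (wire form) || RDATA.
        # For the root zone the owner name wire form is one zero byte.
        return hashlib.sha256(owner_wire + rdata).hexdigest()

    if len(sys.argv) != 2:
        sys.exit("usage: ds_check.py <root-DNSKEY-base64>")

    rdata = dnskey_rdata(257, 3, 8, sys.argv[1])
    print(key_tag(rdata))                    # expect 20326
    print(ds_digest_sha256(b"\x00", rdata))  # expect e06d44b8...c7f8ec8d

Matching both outputs against the logged anchor is exactly what ties a freshly fetched DNSKEY back to this DS record.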
Sep 5 23:53:08.571659 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:53:08.571726 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 5 23:53:08.579933 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 23:53:08.587960 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 5 23:53:08.602667 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 23:53:08.610687 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 23:53:08.626011 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 23:53:08.626182 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 23:53:08.636983 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 23:53:08.645417 systemd[1]: Started ntpd.service - Network Time Service. Sep 5 23:53:08.652054 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 23:53:08.661831 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 5 23:53:08.669199 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 23:53:08.674966 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 23:53:08.688797 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 23:53:08.695152 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 23:53:08.696092 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 23:53:08.698924 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 23:53:08.704885 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 23:53:08.746787 jq[1983]: false Sep 5 23:53:08.753924 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 23:53:08.755710 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 23:53:08.786339 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 23:53:08.785997 dbus-daemon[1982]: [system] SELinux support is enabled Sep 5 23:53:08.795055 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 23:53:08.795118 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 23:53:08.801264 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 23:53:08.801308 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
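prepare-helm.service, queued above and installed during the Ignition files stage, exists to unpack the helm tarball that Ignition wrote to /opt into /opt/bin; the tar output later in this log confirms the archive's linux-arm64/helm layout. The unit's actual ExecStart is not shown, so this is only an approximation of the step it performs:

    # Approximate unpack step for prepare-helm.service: extract the helm
    # binary from the tarball Ignition wrote and install it executable
    # under /opt/bin. Paths and member name match the log; the unit's
    # real command line does not appear in the log.
    import os
    import shutil
    import tarfile

    ARCHIVE = "/opt/helm-v3.17.3-linux-arm64.tar.gz"
    MEMBER = "linux-arm64/helm"
    DEST = "/opt/bin/helm"

    os.makedirs(os.path.dirname(DEST), exist_ok=True)
    with tarfile.open(ARCHIVE, "r:gz") as archive:
        with archive.extractfile(MEMBER) as src, open(DEST, "wb") as dst:
            shutil.copyfileobj(src, dst)
    os.chmod(DEST, 0o755)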
Sep 5 23:53:08.820668 jq[1995]: true Sep 5 23:53:08.819781 dbus-daemon[1982]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1935 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 5 23:53:08.824922 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 23:53:08.825401 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 23:53:08.833321 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 5 23:53:08.855208 (ntainerd)[2003]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 23:53:08.857852 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 5 23:53:08.889848 extend-filesystems[1984]: Found loop4 Sep 5 23:53:08.889848 extend-filesystems[1984]: Found loop5 Sep 5 23:53:08.889848 extend-filesystems[1984]: Found loop6 Sep 5 23:53:08.889848 extend-filesystems[1984]: Found loop7 Sep 5 23:53:08.948957 extend-filesystems[1984]: Found nvme0n1 Sep 5 23:53:08.948957 extend-filesystems[1984]: Found nvme0n1p1 Sep 5 23:53:08.948957 extend-filesystems[1984]: Found nvme0n1p2 Sep 5 23:53:08.948957 extend-filesystems[1984]: Found nvme0n1p3 Sep 5 23:53:08.948957 extend-filesystems[1984]: Found usr Sep 5 23:53:08.948957 extend-filesystems[1984]: Found nvme0n1p4 Sep 5 23:53:08.948957 extend-filesystems[1984]: Found nvme0n1p6 Sep 5 23:53:08.948957 extend-filesystems[1984]: Found nvme0n1p7 Sep 5 23:53:08.948957 extend-filesystems[1984]: Found nvme0n1p9 Sep 5 23:53:08.948957 extend-filesystems[1984]: Checking size of /dev/nvme0n1p9 Sep 5 23:53:08.942398 ntpd[1986]: ntpd 4.2.8p17@1.4004-o Fri Sep 5 21:57:21 UTC 2025 (1): Starting Sep 5 23:53:09.020860 ntpd[1986]: 5 Sep 23:53:08 ntpd[1986]: ntpd 4.2.8p17@1.4004-o Fri Sep 5 21:57:21 UTC 2025 (1): Starting Sep 5 23:53:09.020860 ntpd[1986]: 5 Sep 23:53:08 ntpd[1986]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 5 23:53:09.020860 ntpd[1986]: 5 Sep 23:53:08 ntpd[1986]: ---------------------------------------------------- Sep 5 23:53:09.020860 ntpd[1986]: 5 Sep 23:53:08 ntpd[1986]: ntp-4 is maintained by Network Time Foundation, Sep 5 23:53:09.020860 ntpd[1986]: 5 Sep 23:53:08 ntpd[1986]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 5 23:53:09.020860 ntpd[1986]: 5 Sep 23:53:08 ntpd[1986]: corporation. Support and training for ntp-4 are Sep 5 23:53:09.020860 ntpd[1986]: 5 Sep 23:53:08 ntpd[1986]: available at https://www.nwtime.org/support Sep 5 23:53:09.020860 ntpd[1986]: 5 Sep 23:53:08 ntpd[1986]: ---------------------------------------------------- Sep 5 23:53:09.020860 ntpd[1986]: 5 Sep 23:53:08 ntpd[1986]: proto: precision = 0.108 usec (-23) Sep 5 23:53:09.020860 ntpd[1986]: 5 Sep 23:53:08 ntpd[1986]: basedate set to 2025-08-24 Sep 5 23:53:09.020860 ntpd[1986]: 5 Sep 23:53:08 ntpd[1986]: gps base set to 2025-08-24 (week 2381) Sep 5 23:53:09.020860 ntpd[1986]: 5 Sep 23:53:09 ntpd[1986]: Listen and drop on 0 v6wildcard [::]:123 Sep 5 23:53:09.020860 ntpd[1986]: 5 Sep 23:53:09 ntpd[1986]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 5 23:53:09.006384 systemd[1]: Finished setup-oem.service - Setup OEM. 
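The extend-filesystems scan above walks every loop and nvme block device before checking the size of /dev/nvme0n1p9, the partition holding the root filesystem. The kernel publishes those sizes in 512-byte sectors under sysfs, so the check reduces to a read; a small sketch assuming the standard /sys/class/block layout:

    # Read device sizes the way the "Checking size of /dev/nvme0n1p9"
    # step above could: /sys/class/block/<dev>/size counts 512-byte
    # sectors regardless of the device's logical block size.
    from pathlib import Path

    def size_bytes(dev: str) -> int:
        sectors = int((Path("/sys/class/block") / dev / "size").read_text())
        return sectors * 512

    for dev in ("nvme0n1", "nvme0n1p9"):
        try:
            print(f"{dev}: {size_bytes(dev) / 2**30:.2f} GiB")
        except OSError as err:
            print(f"{dev}: unavailable ({err})")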
Sep 5 23:53:08.942453 ntpd[1986]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 5 23:53:09.041529 tar[2004]: linux-arm64/LICENSE Sep 5 23:53:09.041529 tar[2004]: linux-arm64/helm Sep 5 23:53:09.058150 ntpd[1986]: 5 Sep 23:53:09 ntpd[1986]: Listen normally on 2 lo 127.0.0.1:123 Sep 5 23:53:09.058150 ntpd[1986]: 5 Sep 23:53:09 ntpd[1986]: Listen normally on 3 eth0 172.31.23.98:123 Sep 5 23:53:09.058150 ntpd[1986]: 5 Sep 23:53:09 ntpd[1986]: Listen normally on 4 lo [::1]:123 Sep 5 23:53:09.058150 ntpd[1986]: 5 Sep 23:53:09 ntpd[1986]: bind(21) AF_INET6 fe80::40e:55ff:fef5:8ef3%2#123 flags 0x11 failed: Cannot assign requested address Sep 5 23:53:09.058150 ntpd[1986]: 5 Sep 23:53:09 ntpd[1986]: unable to create socket on eth0 (5) for fe80::40e:55ff:fef5:8ef3%2#123 Sep 5 23:53:09.058150 ntpd[1986]: 5 Sep 23:53:09 ntpd[1986]: failed to init interface for address fe80::40e:55ff:fef5:8ef3%2 Sep 5 23:53:09.058150 ntpd[1986]: 5 Sep 23:53:09 ntpd[1986]: Listening on routing socket on fd #21 for interface updates Sep 5 23:53:09.030748 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 23:53:08.942474 ntpd[1986]: ---------------------------------------------------- Sep 5 23:53:09.062079 jq[2010]: true Sep 5 23:53:09.033732 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 5 23:53:08.942494 ntpd[1986]: ntp-4 is maintained by Network Time Foundation, Sep 5 23:53:08.942513 ntpd[1986]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 5 23:53:08.942532 ntpd[1986]: corporation. Support and training for ntp-4 are Sep 5 23:53:09.073587 update_engine[1994]: I20250905 23:53:09.073017 1994 main.cc:92] Flatcar Update Engine starting Sep 5 23:53:08.942550 ntpd[1986]: available at https://www.nwtime.org/support Sep 5 23:53:09.099807 ntpd[1986]: 5 Sep 23:53:09 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 5 23:53:09.099807 ntpd[1986]: 5 Sep 23:53:09 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 5 23:53:09.099938 update_engine[1994]: I20250905 23:53:09.079459 1994 update_check_scheduler.cc:74] Next update check in 6m6s Sep 5 23:53:09.086481 systemd[1]: Started update-engine.service - Update Engine. Sep 5 23:53:09.100133 extend-filesystems[1984]: Resized partition /dev/nvme0n1p9 Sep 5 23:53:08.942574 ntpd[1986]: ---------------------------------------------------- Sep 5 23:53:09.107270 extend-filesystems[2034]: resize2fs 1.47.1 (20-May-2024) Sep 5 23:53:08.980331 ntpd[1986]: proto: precision = 0.108 usec (-23) Sep 5 23:53:09.116244 coreos-metadata[1981]: Sep 05 23:53:09.113 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 5 23:53:08.980759 ntpd[1986]: basedate set to 2025-08-24 Sep 5 23:53:09.120270 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 5 23:53:08.980786 ntpd[1986]: gps base set to 2025-08-24 (week 2381) Sep 5 23:53:09.010371 ntpd[1986]: Listen and drop on 0 v6wildcard [::]:123 Sep 5 23:53:09.010461 ntpd[1986]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 5 23:53:09.031742 ntpd[1986]: Listen normally on 2 lo 127.0.0.1:123 Sep 5 23:53:09.031814 ntpd[1986]: Listen normally on 3 eth0 172.31.23.98:123 Sep 5 23:53:09.031880 ntpd[1986]: Listen normally on 4 lo [::1]:123 Sep 5 23:53:09.031958 ntpd[1986]: bind(21) AF_INET6 fe80::40e:55ff:fef5:8ef3%2#123 flags 0x11 failed: Cannot assign requested address Sep 5 23:53:09.031997 ntpd[1986]: unable to create socket on eth0 (5) for fe80::40e:55ff:fef5:8ef3%2#123 Sep 5 23:53:09.032026 ntpd[1986]: failed to init interface for address fe80::40e:55ff:fef5:8ef3%2 Sep 5 23:53:09.032087 ntpd[1986]: Listening on routing socket on fd #21 for interface updates Sep 5 23:53:09.094701 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 5 23:53:09.094762 ntpd[1986]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 5 23:53:09.131654 coreos-metadata[1981]: Sep 05 23:53:09.130 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 5 23:53:09.137671 coreos-metadata[1981]: Sep 05 23:53:09.136 INFO Fetch successful Sep 5 23:53:09.137671 coreos-metadata[1981]: Sep 05 23:53:09.136 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 5 23:53:09.144686 coreos-metadata[1981]: Sep 05 23:53:09.144 INFO Fetch successful Sep 5 23:53:09.144686 coreos-metadata[1981]: Sep 05 23:53:09.144 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 5 23:53:09.146633 coreos-metadata[1981]: Sep 05 23:53:09.146 INFO Fetch successful Sep 5 23:53:09.146633 coreos-metadata[1981]: Sep 05 23:53:09.146 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 5 23:53:09.152653 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 5 23:53:09.152737 coreos-metadata[1981]: Sep 05 23:53:09.150 INFO Fetch successful Sep 5 23:53:09.152737 coreos-metadata[1981]: Sep 05 23:53:09.150 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 5 23:53:09.152737 coreos-metadata[1981]: Sep 05 23:53:09.151 INFO Fetch failed with 404: resource not found Sep 5 23:53:09.152737 coreos-metadata[1981]: Sep 05 23:53:09.151 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 5 23:53:09.157794 coreos-metadata[1981]: Sep 05 23:53:09.157 INFO Fetch successful Sep 5 23:53:09.157794 coreos-metadata[1981]: Sep 05 23:53:09.157 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 5 23:53:09.162708 coreos-metadata[1981]: Sep 05 23:53:09.162 INFO Fetch successful Sep 5 23:53:09.162708 coreos-metadata[1981]: Sep 05 23:53:09.162 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 5 23:53:09.164225 coreos-metadata[1981]: Sep 05 23:53:09.164 INFO Fetch successful Sep 5 23:53:09.164371 coreos-metadata[1981]: Sep 05 23:53:09.164 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 5 23:53:09.169781 coreos-metadata[1981]: Sep 05 23:53:09.169 INFO Fetch successful Sep 5 23:53:09.173094 coreos-metadata[1981]: Sep 05 23:53:09.169 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 5 23:53:09.175633 coreos-metadata[1981]: Sep 05 23:53:09.173 INFO Fetch 
successful Sep 5 23:53:09.276747 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1763) Sep 5 23:53:09.276833 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 5 23:53:09.283236 bash[2060]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:53:09.285807 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 23:53:09.305418 extend-filesystems[2034]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 5 23:53:09.305418 extend-filesystems[2034]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 23:53:09.305418 extend-filesystems[2034]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 5 23:53:09.334816 extend-filesystems[1984]: Resized filesystem in /dev/nvme0n1p9 Sep 5 23:53:09.339864 systemd[1]: Starting sshkeys.service... Sep 5 23:53:09.344113 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 23:53:09.345735 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 23:53:09.359374 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 5 23:53:09.454495 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 23:53:09.476761 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 5 23:53:09.491526 systemd-logind[1993]: Watching system buttons on /dev/input/event0 (Power Button) Sep 5 23:53:09.507145 systemd-logind[1993]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 5 23:53:09.529929 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 5 23:53:09.507525 systemd-logind[1993]: New seat seat0. Sep 5 23:53:09.530703 dbus-daemon[1982]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.4' (uid=0 pid=2012 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 5 23:53:09.541595 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 5 23:53:09.550122 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 23:53:09.559805 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 5 23:53:09.575264 systemd[1]: Starting polkit.service - Authorization Manager... Sep 5 23:53:09.607319 polkitd[2122]: Started polkitd version 121 Sep 5 23:53:09.658398 locksmithd[2040]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 23:53:09.664791 polkitd[2122]: Loading rules from directory /etc/polkit-1/rules.d Sep 5 23:53:09.664907 polkitd[2122]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 5 23:53:09.678962 polkitd[2122]: Finished loading, compiling and executing 2 rules Sep 5 23:53:09.694630 dbus-daemon[1982]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 5 23:53:09.694928 systemd[1]: Started polkit.service - Authorization Manager. Sep 5 23:53:09.695570 polkitd[2122]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 5 23:53:09.757970 systemd-hostnamed[2012]: Hostname set to (transient) Sep 5 23:53:09.758159 systemd-resolved[1937]: System hostname changed to 'ip-172-31-23-98'. 
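The resize2fs exchange above grows the root ext4 filesystem online from 553472 to 1489915 blocks at the 4 KiB block size the kernel reports, consuming the extra space the nvme0n1p9 check found. The arithmetic:

    # Arithmetic behind the resize2fs lines above: ext4 block count
    # times the 4 KiB block size, before and after the online grow.
    BLOCK = 4096
    for label, blocks in (("old", 553472), ("new", 1489915)):
        print(f"{label}: {blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
    # old: 553472 blocks = 2.11 GiB
    # new: 1489915 blocks = 5.68 GiB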
Sep 5 23:53:09.917028 containerd[2003]: time="2025-09-05T23:53:09.915451320Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 23:53:09.943540 ntpd[1986]: bind(24) AF_INET6 fe80::40e:55ff:fef5:8ef3%2#123 flags 0x11 failed: Cannot assign requested address Sep 5 23:53:09.944336 ntpd[1986]: 5 Sep 23:53:09 ntpd[1986]: bind(24) AF_INET6 fe80::40e:55ff:fef5:8ef3%2#123 flags 0x11 failed: Cannot assign requested address Sep 5 23:53:09.944336 ntpd[1986]: 5 Sep 23:53:09 ntpd[1986]: unable to create socket on eth0 (6) for fe80::40e:55ff:fef5:8ef3%2#123 Sep 5 23:53:09.944336 ntpd[1986]: 5 Sep 23:53:09 ntpd[1986]: failed to init interface for address fe80::40e:55ff:fef5:8ef3%2 Sep 5 23:53:09.943655 ntpd[1986]: unable to create socket on eth0 (6) for fe80::40e:55ff:fef5:8ef3%2#123 Sep 5 23:53:09.943688 ntpd[1986]: failed to init interface for address fe80::40e:55ff:fef5:8ef3%2 Sep 5 23:53:09.982893 coreos-metadata[2102]: Sep 05 23:53:09.980 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 5 23:53:09.984023 coreos-metadata[2102]: Sep 05 23:53:09.983 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 5 23:53:09.987648 coreos-metadata[2102]: Sep 05 23:53:09.985 INFO Fetch successful Sep 5 23:53:09.987648 coreos-metadata[2102]: Sep 05 23:53:09.985 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 5 23:53:09.990377 coreos-metadata[2102]: Sep 05 23:53:09.989 INFO Fetch successful Sep 5 23:53:09.993099 unknown[2102]: wrote ssh authorized keys file for user: core Sep 5 23:53:10.045085 update-ssh-keys[2182]: Updated "/home/core/.ssh/authorized_keys" Sep 5 23:53:10.049333 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 5 23:53:10.062644 systemd[1]: Finished sshkeys.service. Sep 5 23:53:10.087812 containerd[2003]: time="2025-09-05T23:53:10.087724329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:53:10.091091 containerd[2003]: time="2025-09-05T23:53:10.090896157Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:53:10.091429 containerd[2003]: time="2025-09-05T23:53:10.091265829Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 23:53:10.091916 containerd[2003]: time="2025-09-05T23:53:10.091314981Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 23:53:10.092185 containerd[2003]: time="2025-09-05T23:53:10.092140797Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 23:53:10.092443 containerd[2003]: time="2025-09-05T23:53:10.092410173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 23:53:10.092855 containerd[2003]: time="2025-09-05T23:53:10.092681637Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:53:10.092855 containerd[2003]: time="2025-09-05T23:53:10.092720709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:53:10.093268 containerd[2003]: time="2025-09-05T23:53:10.093223473Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:53:10.093704 containerd[2003]: time="2025-09-05T23:53:10.093360729Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 23:53:10.093704 containerd[2003]: time="2025-09-05T23:53:10.093400377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:53:10.093704 containerd[2003]: time="2025-09-05T23:53:10.093430545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 23:53:10.094025 containerd[2003]: time="2025-09-05T23:53:10.093989061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:53:10.094554 containerd[2003]: time="2025-09-05T23:53:10.094513797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 23:53:10.094967 containerd[2003]: time="2025-09-05T23:53:10.094928097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 23:53:10.095527 containerd[2003]: time="2025-09-05T23:53:10.095095341Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 23:53:10.095527 containerd[2003]: time="2025-09-05T23:53:10.095305089Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 5 23:53:10.095527 containerd[2003]: time="2025-09-05T23:53:10.095402925Z" level=info msg="metadata content store policy set" policy=shared Sep 5 23:53:10.101565 containerd[2003]: time="2025-09-05T23:53:10.101341941Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 23:53:10.101565 containerd[2003]: time="2025-09-05T23:53:10.101450169Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 23:53:10.101565 containerd[2003]: time="2025-09-05T23:53:10.101508861Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 23:53:10.102067 containerd[2003]: time="2025-09-05T23:53:10.102032613Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 23:53:10.103461 containerd[2003]: time="2025-09-05T23:53:10.102205377Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 23:53:10.103461 containerd[2003]: time="2025-09-05T23:53:10.102485025Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Sep 5 23:53:10.104381 containerd[2003]: time="2025-09-05T23:53:10.104329005Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 23:53:10.105924 containerd[2003]: time="2025-09-05T23:53:10.105869481Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 23:53:10.108433 containerd[2003]: time="2025-09-05T23:53:10.106064409Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 23:53:10.108433 containerd[2003]: time="2025-09-05T23:53:10.106104693Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 23:53:10.108433 containerd[2003]: time="2025-09-05T23:53:10.106137189Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 23:53:10.108433 containerd[2003]: time="2025-09-05T23:53:10.106167813Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 23:53:10.108433 containerd[2003]: time="2025-09-05T23:53:10.106197921Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 23:53:10.108433 containerd[2003]: time="2025-09-05T23:53:10.106232073Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 23:53:10.108433 containerd[2003]: time="2025-09-05T23:53:10.106264413Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 23:53:10.108433 containerd[2003]: time="2025-09-05T23:53:10.106293753Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 23:53:10.108433 containerd[2003]: time="2025-09-05T23:53:10.106322121Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 23:53:10.108433 containerd[2003]: time="2025-09-05T23:53:10.106354785Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 23:53:10.108433 containerd[2003]: time="2025-09-05T23:53:10.106395105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 23:53:10.108433 containerd[2003]: time="2025-09-05T23:53:10.106437225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 23:53:10.108433 containerd[2003]: time="2025-09-05T23:53:10.106467453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 23:53:10.108433 containerd[2003]: time="2025-09-05T23:53:10.106497729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 23:53:10.109163 containerd[2003]: time="2025-09-05T23:53:10.106528329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 23:53:10.109163 containerd[2003]: time="2025-09-05T23:53:10.106563981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 23:53:10.109163 containerd[2003]: time="2025-09-05T23:53:10.106592277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Sep 5 23:53:10.109163 containerd[2003]: time="2025-09-05T23:53:10.106661637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 23:53:10.109163 containerd[2003]: time="2025-09-05T23:53:10.106698477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 23:53:10.109163 containerd[2003]: time="2025-09-05T23:53:10.106734165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 23:53:10.109163 containerd[2003]: time="2025-09-05T23:53:10.106764741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 23:53:10.109163 containerd[2003]: time="2025-09-05T23:53:10.106793817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 23:53:10.109163 containerd[2003]: time="2025-09-05T23:53:10.106821753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 23:53:10.109163 containerd[2003]: time="2025-09-05T23:53:10.106854873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 23:53:10.109163 containerd[2003]: time="2025-09-05T23:53:10.106899633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 23:53:10.109163 containerd[2003]: time="2025-09-05T23:53:10.106929621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 5 23:53:10.109163 containerd[2003]: time="2025-09-05T23:53:10.106960509Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 23:53:10.109163 containerd[2003]: time="2025-09-05T23:53:10.107200905Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 23:53:10.109779 containerd[2003]: time="2025-09-05T23:53:10.107241093Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 23:53:10.109779 containerd[2003]: time="2025-09-05T23:53:10.107268681Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 23:53:10.109779 containerd[2003]: time="2025-09-05T23:53:10.107296917Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 23:53:10.109779 containerd[2003]: time="2025-09-05T23:53:10.107327601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 23:53:10.109779 containerd[2003]: time="2025-09-05T23:53:10.107362413Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 23:53:10.109779 containerd[2003]: time="2025-09-05T23:53:10.107397801Z" level=info msg="NRI interface is disabled by configuration." Sep 5 23:53:10.109779 containerd[2003]: time="2025-09-05T23:53:10.107438685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 5 23:53:10.115191 containerd[2003]: time="2025-09-05T23:53:10.112117029Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 23:53:10.115191 containerd[2003]: time="2025-09-05T23:53:10.112261437Z" level=info msg="Connect containerd service" Sep 5 23:53:10.115191 containerd[2003]: time="2025-09-05T23:53:10.112333221Z" level=info msg="using legacy CRI server" Sep 5 23:53:10.115191 containerd[2003]: time="2025-09-05T23:53:10.112352013Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 23:53:10.115191 containerd[2003]: time="2025-09-05T23:53:10.112518885Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 23:53:10.117550 containerd[2003]: time="2025-09-05T23:53:10.117466533Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:53:10.118159 
containerd[2003]: time="2025-09-05T23:53:10.118032729Z" level=info msg="Start subscribing containerd event" Sep 5 23:53:10.118159 containerd[2003]: time="2025-09-05T23:53:10.118131813Z" level=info msg="Start recovering state" Sep 5 23:53:10.118299 containerd[2003]: time="2025-09-05T23:53:10.118267341Z" level=info msg="Start event monitor" Sep 5 23:53:10.118348 containerd[2003]: time="2025-09-05T23:53:10.118294737Z" level=info msg="Start snapshots syncer" Sep 5 23:53:10.118348 containerd[2003]: time="2025-09-05T23:53:10.118317417Z" level=info msg="Start cni network conf syncer for default" Sep 5 23:53:10.118348 containerd[2003]: time="2025-09-05T23:53:10.118336221Z" level=info msg="Start streaming server" Sep 5 23:53:10.119476 containerd[2003]: time="2025-09-05T23:53:10.119426037Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 23:53:10.119742 containerd[2003]: time="2025-09-05T23:53:10.119714913Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 23:53:10.122635 containerd[2003]: time="2025-09-05T23:53:10.120484149Z" level=info msg="containerd successfully booted in 0.211790s" Sep 5 23:53:10.120646 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 23:53:10.238811 systemd-networkd[1935]: eth0: Gained IPv6LL Sep 5 23:53:10.247347 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 23:53:10.271158 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 5 23:53:10.276310 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 23:53:10.289095 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 5 23:53:10.306076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:53:10.317162 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 23:53:10.438930 amazon-ssm-agent[2188]: Initializing new seelog logger Sep 5 23:53:10.438930 amazon-ssm-agent[2188]: New Seelog Logger Creation Complete Sep 5 23:53:10.438930 amazon-ssm-agent[2188]: 2025/09/05 23:53:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:53:10.438930 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:53:10.439663 amazon-ssm-agent[2188]: 2025/09/05 23:53:10 processing appconfig overrides Sep 5 23:53:10.444471 amazon-ssm-agent[2188]: 2025/09/05 23:53:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:53:10.444471 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:53:10.444471 amazon-ssm-agent[2188]: 2025/09/05 23:53:10 processing appconfig overrides Sep 5 23:53:10.444722 amazon-ssm-agent[2188]: 2025/09/05 23:53:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:53:10.444722 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:53:10.444722 amazon-ssm-agent[2188]: 2025/09/05 23:53:10 processing appconfig overrides Sep 5 23:53:10.452482 amazon-ssm-agent[2188]: 2025-09-05 23:53:10 INFO Proxy environment variables: Sep 5 23:53:10.453454 amazon-ssm-agent[2188]: 2025/09/05 23:53:10 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 5 23:53:10.453454 amazon-ssm-agent[2188]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 5 23:53:10.453723 amazon-ssm-agent[2188]: 2025/09/05 23:53:10 processing appconfig overrides Sep 5 23:53:10.466823 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 23:53:10.548412 amazon-ssm-agent[2188]: 2025-09-05 23:53:10 INFO https_proxy: Sep 5 23:53:10.648189 amazon-ssm-agent[2188]: 2025-09-05 23:53:10 INFO http_proxy: Sep 5 23:53:10.747290 amazon-ssm-agent[2188]: 2025-09-05 23:53:10 INFO no_proxy: Sep 5 23:53:10.845781 amazon-ssm-agent[2188]: 2025-09-05 23:53:10 INFO Checking if agent identity type OnPrem can be assumed Sep 5 23:53:10.945568 amazon-ssm-agent[2188]: 2025-09-05 23:53:10 INFO Checking if agent identity type EC2 can be assumed Sep 5 23:53:11.045577 amazon-ssm-agent[2188]: 2025-09-05 23:53:10 INFO Agent will take identity from EC2 Sep 5 23:53:11.082908 tar[2004]: linux-arm64/README.md Sep 5 23:53:11.133765 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 23:53:11.145644 amazon-ssm-agent[2188]: 2025-09-05 23:53:10 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 5 23:53:11.243322 amazon-ssm-agent[2188]: 2025-09-05 23:53:10 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 5 23:53:11.343715 amazon-ssm-agent[2188]: 2025-09-05 23:53:10 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 5 23:53:11.444723 amazon-ssm-agent[2188]: 2025-09-05 23:53:10 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 5 23:53:11.546688 amazon-ssm-agent[2188]: 2025-09-05 23:53:10 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 5 23:53:11.645303 amazon-ssm-agent[2188]: 2025-09-05 23:53:10 INFO [amazon-ssm-agent] Starting Core Agent Sep 5 23:53:11.745091 amazon-ssm-agent[2188]: 2025-09-05 23:53:10 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 5 23:53:11.752289 amazon-ssm-agent[2188]: 2025-09-05 23:53:10 INFO [Registrar] Starting registrar module Sep 5 23:53:11.752425 amazon-ssm-agent[2188]: 2025-09-05 23:53:10 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 5 23:53:11.752425 amazon-ssm-agent[2188]: 2025-09-05 23:53:11 INFO [EC2Identity] EC2 registration was successful. Sep 5 23:53:11.752425 amazon-ssm-agent[2188]: 2025-09-05 23:53:11 INFO [CredentialRefresher] credentialRefresher has started Sep 5 23:53:11.752580 amazon-ssm-agent[2188]: 2025-09-05 23:53:11 INFO [CredentialRefresher] Starting credentials refresher loop Sep 5 23:53:11.752580 amazon-ssm-agent[2188]: 2025-09-05 23:53:11 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 5 23:53:11.844631 amazon-ssm-agent[2188]: 2025-09-05 23:53:11 INFO [CredentialRefresher] Next credential rotation will be in 32.06662824556667 minutes Sep 5 23:53:11.911280 sshd_keygen[2019]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 23:53:11.959737 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 23:53:11.976073 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 23:53:11.984781 systemd[1]: Started sshd@0-172.31.23.98:22-139.178.68.195:46038.service - OpenSSH per-connection server daemon (139.178.68.195:46038). Sep 5 23:53:11.998226 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 23:53:11.998645 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 23:53:12.025282 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 23:53:12.067419 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Sep 5 23:53:12.078256 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 23:53:12.095340 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 5 23:53:12.098286 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 23:53:12.216374 sshd[2219]: Accepted publickey for core from 139.178.68.195 port 46038 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:53:12.219826 sshd[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:12.239205 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 23:53:12.248168 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 23:53:12.258374 systemd-logind[1993]: New session 1 of user core. Sep 5 23:53:12.282546 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 23:53:12.298255 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 23:53:12.324412 (systemd)[2230]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 23:53:12.553838 systemd[2230]: Queued start job for default target default.target. Sep 5 23:53:12.563380 systemd[2230]: Created slice app.slice - User Application Slice. Sep 5 23:53:12.563453 systemd[2230]: Reached target paths.target - Paths. Sep 5 23:53:12.563487 systemd[2230]: Reached target timers.target - Timers. Sep 5 23:53:12.567883 systemd[2230]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 23:53:12.592061 systemd[2230]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 23:53:12.592308 systemd[2230]: Reached target sockets.target - Sockets. Sep 5 23:53:12.592343 systemd[2230]: Reached target basic.target - Basic System. Sep 5 23:53:12.592423 systemd[2230]: Reached target default.target - Main User Target. Sep 5 23:53:12.592484 systemd[2230]: Startup finished in 255ms. Sep 5 23:53:12.592802 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 23:53:12.608917 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 23:53:12.771130 systemd[1]: Started sshd@1-172.31.23.98:22-139.178.68.195:46050.service - OpenSSH per-connection server daemon (139.178.68.195:46050). Sep 5 23:53:12.809391 amazon-ssm-agent[2188]: 2025-09-05 23:53:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 5 23:53:12.909345 amazon-ssm-agent[2188]: 2025-09-05 23:53:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2243) started Sep 5 23:53:12.943545 ntpd[1986]: Listen normally on 7 eth0 [fe80::40e:55ff:fef5:8ef3%2]:123 Sep 5 23:53:12.945074 ntpd[1986]: 5 Sep 23:53:12 ntpd[1986]: Listen normally on 7 eth0 [fe80::40e:55ff:fef5:8ef3%2]:123 Sep 5 23:53:12.972728 sshd[2242]: Accepted publickey for core from 139.178.68.195 port 46050 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:53:12.973957 sshd[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:12.987316 systemd-logind[1993]: New session 2 of user core. Sep 5 23:53:12.994944 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 23:53:13.011127 amazon-ssm-agent[2188]: 2025-09-05 23:53:12 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 5 23:53:13.114117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
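The earlier ntpd bind failures ("Cannot assign requested address" for fe80::40e:55ff:fef5:8ef3%2) are resolved in this stretch: once eth0 gained its IPv6 link-local address and duplicate address detection completed, ntpd could open the socket ("Listen normally on 7 eth0"). A link-local bind always needs a scope zone, which is what the %2 suffix (interface index 2) encodes. A small Go sketch of the same bind, with the address taken from the log; note port 123 needs CAP_NET_BIND_SERVICE, so substitute a high port to try it unprivileged:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Binding a link-local address needs a zone (the interface).
        // Before DAD completes, this fails with EADDRNOTAVAIL
        // ("cannot assign requested address"), exactly as ntpd saw.
        addr := &net.UDPAddr{
            IP:   net.ParseIP("fe80::40e:55ff:fef5:8ef3"),
            Port: 123,
            Zone: "eth0",
        }
        conn, err := net.ListenUDP("udp6", addr)
        if err != nil {
            fmt.Println("bind failed:", err)
            return
        }
        defer conn.Close()
        fmt.Println("listening on", conn.LocalAddr())
    }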
Sep 5 23:53:13.119299 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 23:53:13.126069 systemd[1]: Startup finished in 1.197s (kernel) + 8.542s (initrd) + 9.818s (userspace) = 19.558s. Sep 5 23:53:13.126461 (kubelet)[2261]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:53:13.142715 sshd[2242]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:13.151437 systemd[1]: sshd@1-172.31.23.98:22-139.178.68.195:46050.service: Deactivated successfully. Sep 5 23:53:13.157195 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 23:53:13.160981 systemd-logind[1993]: Session 2 logged out. Waiting for processes to exit. Sep 5 23:53:13.187283 systemd-logind[1993]: Removed session 2. Sep 5 23:53:13.193287 systemd[1]: Started sshd@2-172.31.23.98:22-139.178.68.195:46054.service - OpenSSH per-connection server daemon (139.178.68.195:46054). Sep 5 23:53:13.378756 sshd[2269]: Accepted publickey for core from 139.178.68.195 port 46054 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:53:13.381099 sshd[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:13.390231 systemd-logind[1993]: New session 3 of user core. Sep 5 23:53:13.397921 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 5 23:53:13.532941 sshd[2269]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:13.539710 systemd[1]: sshd@2-172.31.23.98:22-139.178.68.195:46054.service: Deactivated successfully. Sep 5 23:53:13.544260 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 23:53:13.546688 systemd-logind[1993]: Session 3 logged out. Waiting for processes to exit. Sep 5 23:53:13.548811 systemd-logind[1993]: Removed session 3. Sep 5 23:53:14.455024 kubelet[2261]: E0905 23:53:14.454892 2261 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:53:14.460341 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:53:14.460790 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:53:14.461331 systemd[1]: kubelet.service: Consumed 1.448s CPU time. Sep 5 23:53:16.206555 systemd-resolved[1937]: Clock change detected. Flushing caches. Sep 5 23:53:23.846813 systemd[1]: Started sshd@3-172.31.23.98:22-139.178.68.195:57170.service - OpenSSH per-connection server daemon (139.178.68.195:57170). Sep 5 23:53:24.013583 sshd[2282]: Accepted publickey for core from 139.178.68.195 port 57170 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:53:24.016794 sshd[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:24.025667 systemd-logind[1993]: New session 4 of user core. Sep 5 23:53:24.032643 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 23:53:24.159616 sshd[2282]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:24.166628 systemd-logind[1993]: Session 4 logged out. Waiting for processes to exit. Sep 5 23:53:24.168048 systemd[1]: sshd@3-172.31.23.98:22-139.178.68.195:57170.service: Deactivated successfully. Sep 5 23:53:24.173592 systemd[1]: session-4.scope: Deactivated successfully. 
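The kubelet crash-loop starting here is nothing more than a missing file: /var/lib/kubelet/config.yaml is only written later (typically by kubeadm during init/join), so each start fails at config load and systemd schedules the next restart. A sketch reproducing just that failing load, for illustration:

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        // The same failure mode kubelet reports: the config file is
        // absent until something (typically kubeadm) writes it.
        const path = "/var/lib/kubelet/config.yaml"
        data, err := os.ReadFile(path)
        if errors.Is(err, fs.ErrNotExist) {
            fmt.Printf("%s: no such file or directory (kubelet exits; systemd restarts it)\n", path)
            return
        }
        if err != nil {
            panic(err)
        }
        fmt.Printf("loaded %d bytes of kubelet config\n", len(data))
    }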
Sep 5 23:53:24.175893 systemd-logind[1993]: Removed session 4. Sep 5 23:53:24.196861 systemd[1]: Started sshd@4-172.31.23.98:22-139.178.68.195:57186.service - OpenSSH per-connection server daemon (139.178.68.195:57186). Sep 5 23:53:24.379225 sshd[2289]: Accepted publickey for core from 139.178.68.195 port 57186 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:53:24.382161 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:24.390564 systemd-logind[1993]: New session 5 of user core. Sep 5 23:53:24.403655 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 23:53:24.525158 sshd[2289]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:24.532159 systemd-logind[1993]: Session 5 logged out. Waiting for processes to exit. Sep 5 23:53:24.534149 systemd[1]: sshd@4-172.31.23.98:22-139.178.68.195:57186.service: Deactivated successfully. Sep 5 23:53:24.539163 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 23:53:24.542137 systemd-logind[1993]: Removed session 5. Sep 5 23:53:24.567920 systemd[1]: Started sshd@5-172.31.23.98:22-139.178.68.195:57198.service - OpenSSH per-connection server daemon (139.178.68.195:57198). Sep 5 23:53:24.748288 sshd[2296]: Accepted publickey for core from 139.178.68.195 port 57198 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:53:24.751164 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:24.753104 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 23:53:24.761492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:53:24.768630 systemd-logind[1993]: New session 6 of user core. Sep 5 23:53:24.774603 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 5 23:53:24.911685 sshd[2296]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:24.918670 systemd-logind[1993]: Session 6 logged out. Waiting for processes to exit. Sep 5 23:53:24.919677 systemd[1]: sshd@5-172.31.23.98:22-139.178.68.195:57198.service: Deactivated successfully. Sep 5 23:53:24.924707 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 23:53:24.929720 systemd-logind[1993]: Removed session 6. Sep 5 23:53:24.952135 systemd[1]: Started sshd@6-172.31.23.98:22-139.178.68.195:57200.service - OpenSSH per-connection server daemon (139.178.68.195:57200). Sep 5 23:53:25.125132 sshd[2306]: Accepted publickey for core from 139.178.68.195 port 57200 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:53:25.128694 sshd[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:25.148704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:53:25.155535 systemd-logind[1993]: New session 7 of user core. Sep 5 23:53:25.157724 (kubelet)[2313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:53:25.160229 systemd[1]: Started session-7.scope - Session 7 of User core. 
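The SHA256:vADW... string in each "Accepted publickey" line is the key fingerprint: the unpadded base64 of a SHA-256 digest over the key's SSH wire encoding. A self-contained Go sketch computing one; it generates a throwaway Ed25519 key and uses golang.org/x/crypto/ssh, so it demonstrates the format rather than reproducing the exact key above:

    package main

    import (
        "crypto/ed25519"
        "crypto/rand"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Generate a throwaway key so the example is self-contained.
        pub, _, err := ed25519.GenerateKey(rand.Reader)
        if err != nil {
            panic(err)
        }
        sshPub, err := ssh.NewPublicKey(pub)
        if err != nil {
            panic(err)
        }
        // Same format sshd logs: "SHA256:" + unpadded base64 of the
        // SHA-256 over the key's SSH wire encoding.
        fmt.Println(ssh.FingerprintSHA256(sshPub))
    }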
Sep 5 23:53:25.238568 kubelet[2313]: E0905 23:53:25.238505 2313 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:53:25.246677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:53:25.247066 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:53:25.284107 sudo[2321]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 23:53:25.284884 sudo[2321]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:53:25.304310 sudo[2321]: pam_unix(sudo:session): session closed for user root Sep 5 23:53:25.327909 sshd[2306]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:25.335073 systemd[1]: sshd@6-172.31.23.98:22-139.178.68.195:57200.service: Deactivated successfully. Sep 5 23:53:25.338545 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 23:53:25.339831 systemd-logind[1993]: Session 7 logged out. Waiting for processes to exit. Sep 5 23:53:25.342696 systemd-logind[1993]: Removed session 7. Sep 5 23:53:25.372850 systemd[1]: Started sshd@7-172.31.23.98:22-139.178.68.195:57208.service - OpenSSH per-connection server daemon (139.178.68.195:57208). Sep 5 23:53:25.543441 sshd[2326]: Accepted publickey for core from 139.178.68.195 port 57208 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:53:25.546406 sshd[2326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:25.554792 systemd-logind[1993]: New session 8 of user core. Sep 5 23:53:25.563686 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 23:53:25.671453 sudo[2330]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 23:53:25.672842 sudo[2330]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:53:25.680152 sudo[2330]: pam_unix(sudo:session): session closed for user root Sep 5 23:53:25.691324 sudo[2329]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 23:53:25.692042 sudo[2329]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:53:25.718210 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 23:53:25.722160 auditctl[2333]: No rules Sep 5 23:53:25.724236 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 23:53:25.726428 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 23:53:25.732972 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 23:53:25.782616 augenrules[2351]: No rules Sep 5 23:53:25.786101 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 23:53:25.788623 sudo[2329]: pam_unix(sudo:session): session closed for user root Sep 5 23:53:25.812037 sshd[2326]: pam_unix(sshd:session): session closed for user core Sep 5 23:53:25.818249 systemd[1]: sshd@7-172.31.23.98:22-139.178.68.195:57208.service: Deactivated successfully. Sep 5 23:53:25.821265 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 23:53:25.826417 systemd-logind[1993]: Session 8 logged out. Waiting for processes to exit. 
Sep 5 23:53:25.828673 systemd-logind[1993]: Removed session 8. Sep 5 23:53:25.845778 systemd[1]: Started sshd@8-172.31.23.98:22-139.178.68.195:57216.service - OpenSSH per-connection server daemon (139.178.68.195:57216). Sep 5 23:53:26.029854 sshd[2359]: Accepted publickey for core from 139.178.68.195 port 57216 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:53:26.032863 sshd[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:53:26.040033 systemd-logind[1993]: New session 9 of user core. Sep 5 23:53:26.049578 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 23:53:26.153174 sudo[2362]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 23:53:26.154534 sudo[2362]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 23:53:26.658797 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 23:53:26.659574 (dockerd)[2378]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 23:53:27.057403 dockerd[2378]: time="2025-09-05T23:53:27.056479497Z" level=info msg="Starting up" Sep 5 23:53:27.217088 dockerd[2378]: time="2025-09-05T23:53:27.217000990Z" level=info msg="Loading containers: start." Sep 5 23:53:27.386396 kernel: Initializing XFRM netlink socket Sep 5 23:53:27.420960 (udev-worker)[2401]: Network interface NamePolicy= disabled on kernel command line. Sep 5 23:53:27.516455 systemd-networkd[1935]: docker0: Link UP Sep 5 23:53:27.543956 dockerd[2378]: time="2025-09-05T23:53:27.543807972Z" level=info msg="Loading containers: done." Sep 5 23:53:27.568075 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3632030838-merged.mount: Deactivated successfully. Sep 5 23:53:27.579780 dockerd[2378]: time="2025-09-05T23:53:27.579704976Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 23:53:27.580120 dockerd[2378]: time="2025-09-05T23:53:27.579870216Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 23:53:27.580120 dockerd[2378]: time="2025-09-05T23:53:27.580076076Z" level=info msg="Daemon has completed initialization" Sep 5 23:53:27.654791 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 23:53:27.656256 dockerd[2378]: time="2025-09-05T23:53:27.654846528Z" level=info msg="API listen on /run/docker.sock" Sep 5 23:53:29.118871 containerd[2003]: time="2025-09-05T23:53:29.118399968Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 5 23:53:29.946072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1661094967.mount: Deactivated successfully. 
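The PullImage lines here and below are containerd's CRI plugin fetching the Kubernetes control-plane images; the requests arrive over the CRI socket (plausibly from the install script via kubeadm, though the log does not say). Roughly the same pull through containerd's Go client, as a sketch assuming the default socket path and the k8s.io namespace the CRI plugin uses (client API as of the containerd 1.7 line seen above):

    package main

    import (
        "context"
        "fmt"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Socket path from the CRI config dump earlier in the log.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // The CRI plugin stores images in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.33.4",
            containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }
        fmt.Println("pulled", img.Name())
    }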
Sep 5 23:53:31.362403 containerd[2003]: time="2025-09-05T23:53:31.362279703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:31.364539 containerd[2003]: time="2025-09-05T23:53:31.364461243Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352613" Sep 5 23:53:31.366894 containerd[2003]: time="2025-09-05T23:53:31.366815235Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:31.372921 containerd[2003]: time="2025-09-05T23:53:31.372838683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:31.375914 containerd[2003]: time="2025-09-05T23:53:31.375109575Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 2.256631883s" Sep 5 23:53:31.375914 containerd[2003]: time="2025-09-05T23:53:31.375176259Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Sep 5 23:53:31.377956 containerd[2003]: time="2025-09-05T23:53:31.377915067Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 5 23:53:32.807376 containerd[2003]: time="2025-09-05T23:53:32.805614210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:32.808433 containerd[2003]: time="2025-09-05T23:53:32.808389510Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536977" Sep 5 23:53:32.809500 containerd[2003]: time="2025-09-05T23:53:32.809461950Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:32.815214 containerd[2003]: time="2025-09-05T23:53:32.815155950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:32.817595 containerd[2003]: time="2025-09-05T23:53:32.817526682Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.439242519s" Sep 5 23:53:32.817717 containerd[2003]: time="2025-09-05T23:53:32.817591038Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 5 23:53:32.818231 containerd[2003]: 
time="2025-09-05T23:53:32.818160846Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 5 23:53:34.024392 containerd[2003]: time="2025-09-05T23:53:34.023902420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:34.026063 containerd[2003]: time="2025-09-05T23:53:34.025992616Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292014" Sep 5 23:53:34.028205 containerd[2003]: time="2025-09-05T23:53:34.027373876Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:34.033022 containerd[2003]: time="2025-09-05T23:53:34.032948956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:34.035820 containerd[2003]: time="2025-09-05T23:53:34.035730076Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.21750737s" Sep 5 23:53:34.035820 containerd[2003]: time="2025-09-05T23:53:34.035804320Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 5 23:53:34.036609 containerd[2003]: time="2025-09-05T23:53:34.036544276Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 5 23:53:35.413445 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 23:53:35.422765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:53:35.469462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2203889860.mount: Deactivated successfully. Sep 5 23:53:35.835270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:53:35.849229 (kubelet)[2597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:53:35.940181 kubelet[2597]: E0905 23:53:35.939728 2597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:53:35.945577 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:53:35.945891 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 5 23:53:36.356195 containerd[2003]: time="2025-09-05T23:53:36.356120000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:36.361391 containerd[2003]: time="2025-09-05T23:53:36.361309484Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199959" Sep 5 23:53:36.366136 containerd[2003]: time="2025-09-05T23:53:36.366073088Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:36.374674 containerd[2003]: time="2025-09-05T23:53:36.374565188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:36.376609 containerd[2003]: time="2025-09-05T23:53:36.376404752Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 2.33979408s" Sep 5 23:53:36.376609 containerd[2003]: time="2025-09-05T23:53:36.376462676Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 5 23:53:36.378314 containerd[2003]: time="2025-09-05T23:53:36.378249872Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 5 23:53:36.960763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3907579524.mount: Deactivated successfully. 
Sep 5 23:53:38.305878 containerd[2003]: time="2025-09-05T23:53:38.305815749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:38.309024 containerd[2003]: time="2025-09-05T23:53:38.308815917Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Sep 5 23:53:38.312282 containerd[2003]: time="2025-09-05T23:53:38.311210949Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:38.318224 containerd[2003]: time="2025-09-05T23:53:38.318163821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:38.320891 containerd[2003]: time="2025-09-05T23:53:38.320819445Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.942504173s" Sep 5 23:53:38.321140 containerd[2003]: time="2025-09-05T23:53:38.321095445Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 5 23:53:38.322237 containerd[2003]: time="2025-09-05T23:53:38.322110861Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 23:53:38.819770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232643097.mount: Deactivated successfully. 
Sep 5 23:53:38.832699 containerd[2003]: time="2025-09-05T23:53:38.832616100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:38.835876 containerd[2003]: time="2025-09-05T23:53:38.835524360Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Sep 5 23:53:38.838127 containerd[2003]: time="2025-09-05T23:53:38.838030500Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:38.844651 containerd[2003]: time="2025-09-05T23:53:38.844523820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:38.846673 containerd[2003]: time="2025-09-05T23:53:38.846156912Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 523.680855ms" Sep 5 23:53:38.846673 containerd[2003]: time="2025-09-05T23:53:38.846214932Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 5 23:53:38.847516 containerd[2003]: time="2025-09-05T23:53:38.847228272Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 5 23:53:39.380949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732959857.mount: Deactivated successfully. Sep 5 23:53:40.050849 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Sep 5 23:53:41.522585 containerd[2003]: time="2025-09-05T23:53:41.521811481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:41.524724 containerd[2003]: time="2025-09-05T23:53:41.524634601Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465295" Sep 5 23:53:41.527400 containerd[2003]: time="2025-09-05T23:53:41.527312197Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:41.535414 containerd[2003]: time="2025-09-05T23:53:41.534602401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:53:41.537759 containerd[2003]: time="2025-09-05T23:53:41.537533701Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.690252389s" Sep 5 23:53:41.537759 containerd[2003]: time="2025-09-05T23:53:41.537607429Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 5 23:53:46.163547 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 5 23:53:46.173848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:53:46.573834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:53:46.597101 (kubelet)[2748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 23:53:46.692164 kubelet[2748]: E0905 23:53:46.692090 2748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 23:53:46.700324 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 23:53:46.701937 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 23:53:47.262238 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:53:47.274910 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:53:47.340975 systemd[1]: Reloading requested from client PID 2762 ('systemctl') (unit session-9.scope)... Sep 5 23:53:47.341272 systemd[1]: Reloading... Sep 5 23:53:47.616503 zram_generator::config[2805]: No configuration found. Sep 5 23:53:47.864615 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:53:48.046231 systemd[1]: Reloading finished in 704 ms. Sep 5 23:53:48.141521 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 23:53:48.141721 systemd[1]: kubelet.service: Failed with result 'signal'. 
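From here the failure mode changes: the daemon-reload picked up new kubelet unit configuration (presumably written by the install script), so the kubelet that starts below loads a real config and runs, but the control plane it tries to register with is not up yet, and every request to https://172.31.23.98:6443 fails with connection refused. A quick reachability probe under that assumption:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Endpoint from the kubelet errors below; expect "connection
        // refused" until kube-apiserver is actually listening.
        conn, err := net.DialTimeout("tcp", "172.31.23.98:6443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is open")
    }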
Sep 5 23:53:48.143454 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:53:48.153072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:53:48.491593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:53:48.508213 (kubelet)[2864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:53:48.587385 kubelet[2864]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:53:48.588867 kubelet[2864]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 23:53:48.588867 kubelet[2864]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:53:48.588867 kubelet[2864]: I0905 23:53:48.587974 2864 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:53:51.415791 kubelet[2864]: I0905 23:53:51.415723 2864 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 23:53:51.415791 kubelet[2864]: I0905 23:53:51.415777 2864 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:53:51.416538 kubelet[2864]: I0905 23:53:51.416171 2864 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 23:53:51.468647 kubelet[2864]: E0905 23:53:51.468554 2864 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.23.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.98:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 5 23:53:51.469148 kubelet[2864]: I0905 23:53:51.468965 2864 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:53:51.490405 kubelet[2864]: E0905 23:53:51.489937 2864 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:53:51.490405 kubelet[2864]: I0905 23:53:51.490027 2864 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:53:51.495770 kubelet[2864]: I0905 23:53:51.495732 2864 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:53:51.496636 kubelet[2864]: I0905 23:53:51.496595 2864 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:53:51.497001 kubelet[2864]: I0905 23:53:51.496744 2864 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-98","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 23:53:51.498024 kubelet[2864]: I0905 23:53:51.497319 2864 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:53:51.498024 kubelet[2864]: I0905 23:53:51.497399 2864 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 23:53:51.498024 kubelet[2864]: I0905 23:53:51.497729 2864 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:53:51.504291 kubelet[2864]: I0905 23:53:51.504250 2864 kubelet.go:480] "Attempting to sync node with API server" Sep 5 23:53:51.504523 kubelet[2864]: I0905 23:53:51.504500 2864 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:53:51.504662 kubelet[2864]: I0905 23:53:51.504644 2864 kubelet.go:386] "Adding apiserver pod source" Sep 5 23:53:51.507147 kubelet[2864]: I0905 23:53:51.507117 2864 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:53:51.512067 kubelet[2864]: E0905 23:53:51.511995 2864 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.23.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-98&limit=500&resourceVersion=0\": dial tcp 172.31.23.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 23:53:51.513089 kubelet[2864]: E0905 23:53:51.512906 2864 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.23.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 
23:53:51.513248 kubelet[2864]: I0905 23:53:51.513212 2864 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:53:51.514494 kubelet[2864]: I0905 23:53:51.514446 2864 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 23:53:51.514721 kubelet[2864]: W0905 23:53:51.514688 2864 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 23:53:51.521045 kubelet[2864]: I0905 23:53:51.520998 2864 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 23:53:51.521200 kubelet[2864]: I0905 23:53:51.521071 2864 server.go:1289] "Started kubelet" Sep 5 23:53:51.525297 kubelet[2864]: I0905 23:53:51.524560 2864 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:53:51.527921 kubelet[2864]: I0905 23:53:51.527101 2864 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:53:51.527921 kubelet[2864]: I0905 23:53:51.527553 2864 server.go:317] "Adding debug handlers to kubelet server" Sep 5 23:53:51.527921 kubelet[2864]: I0905 23:53:51.527687 2864 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:53:51.531682 kubelet[2864]: I0905 23:53:51.531627 2864 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:53:51.536406 kubelet[2864]: E0905 23:53:51.533329 2864 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.98:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.98:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-98.18628815b9d01d57 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-98,UID:ip-172-31-23-98,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-98,},FirstTimestamp:2025-09-05 23:53:51.521029463 +0000 UTC m=+3.004902580,LastTimestamp:2025-09-05 23:53:51.521029463 +0000 UTC m=+3.004902580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-98,}" Sep 5 23:53:51.538095 kubelet[2864]: I0905 23:53:51.538042 2864 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:53:51.541310 kubelet[2864]: I0905 23:53:51.541255 2864 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 23:53:51.542356 kubelet[2864]: E0905 23:53:51.541861 2864 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-98\" not found" Sep 5 23:53:51.543151 kubelet[2864]: I0905 23:53:51.543108 2864 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 23:53:51.543468 kubelet[2864]: I0905 23:53:51.543425 2864 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:53:51.545920 kubelet[2864]: I0905 23:53:51.545871 2864 factory.go:223] Registration of the systemd container factory successfully Sep 5 23:53:51.546071 kubelet[2864]: I0905 23:53:51.546018 2864 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:53:51.549434 kubelet[2864]: E0905 23:53:51.549136 2864 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.23.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 23:53:51.551614 kubelet[2864]: E0905 23:53:51.550333 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-98?timeout=10s\": dial tcp 172.31.23.98:6443: connect: connection refused" interval="200ms" Sep 5 23:53:51.551614 kubelet[2864]: E0905 23:53:51.550632 2864 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:53:51.551614 kubelet[2864]: I0905 23:53:51.550998 2864 factory.go:223] Registration of the containerd container factory successfully Sep 5 23:53:51.577934 kubelet[2864]: I0905 23:53:51.577537 2864 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 23:53:51.577934 kubelet[2864]: I0905 23:53:51.577576 2864 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 23:53:51.577934 kubelet[2864]: I0905 23:53:51.577617 2864 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:53:51.582243 kubelet[2864]: I0905 23:53:51.582192 2864 policy_none.go:49] "None policy: Start" Sep 5 23:53:51.582243 kubelet[2864]: I0905 23:53:51.582247 2864 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 23:53:51.582479 kubelet[2864]: I0905 23:53:51.582272 2864 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:53:51.594962 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 23:53:51.607678 kubelet[2864]: I0905 23:53:51.607325 2864 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 23:53:51.609737 kubelet[2864]: I0905 23:53:51.609684 2864 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 23:53:51.609989 kubelet[2864]: I0905 23:53:51.609917 2864 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 23:53:51.609989 kubelet[2864]: I0905 23:53:51.609956 2864 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 5 23:53:51.610318 kubelet[2864]: I0905 23:53:51.610138 2864 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 23:53:51.610318 kubelet[2864]: E0905 23:53:51.610265 2864 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:53:51.620753 kubelet[2864]: E0905 23:53:51.619972 2864 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.23.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 23:53:51.626651 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 23:53:51.635701 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
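[Editor's note] Every "connection refused" against https://172.31.23.98:6443 in this phase is the expected chicken-and-egg of a self-hosted control plane: this node's kubelet must first start kube-apiserver from /etc/kubernetes/manifests before its own API calls (CSRs, leases, informers, events) can succeed. A small polling sketch under stated assumptions (self-signed bootstrap certificate, /healthz reachable without credentials, which kubeadm-style clusters normally allow):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The bootstrap apiserver serves a not-yet-trusted certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://172.31.23.98:6443/healthz")
		if err != nil {
			// Matches the log: "dial tcp 172.31.23.98:6443: connect: connection refused"
			fmt.Println("apiserver not up yet:", err)
			time.Sleep(time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("apiserver answered:", resp.Status)
		return
	}
}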
Sep 5 23:53:51.642589 kubelet[2864]: E0905 23:53:51.642521 2864 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-98\" not found" Sep 5 23:53:51.646085 kubelet[2864]: E0905 23:53:51.646037 2864 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 23:53:51.647007 kubelet[2864]: I0905 23:53:51.646330 2864 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:53:51.647007 kubelet[2864]: I0905 23:53:51.646400 2864 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:53:51.647007 kubelet[2864]: I0905 23:53:51.646883 2864 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:53:51.649982 kubelet[2864]: E0905 23:53:51.649923 2864 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 23:53:51.650123 kubelet[2864]: E0905 23:53:51.650059 2864 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-98\" not found" Sep 5 23:53:51.740734 systemd[1]: Created slice kubepods-burstable-podb078b462aaaabcb07fa7f4b84e79481d.slice - libcontainer container kubepods-burstable-podb078b462aaaabcb07fa7f4b84e79481d.slice. Sep 5 23:53:51.750920 kubelet[2864]: I0905 23:53:51.750880 2864 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-98" Sep 5 23:53:51.752709 kubelet[2864]: E0905 23:53:51.752514 2864 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.98:6443/api/v1/nodes\": dial tcp 172.31.23.98:6443: connect: connection refused" node="ip-172-31-23-98" Sep 5 23:53:51.754124 kubelet[2864]: E0905 23:53:51.753460 2864 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-98\" not found" node="ip-172-31-23-98" Sep 5 23:53:51.756186 kubelet[2864]: E0905 23:53:51.754582 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-98?timeout=10s\": dial tcp 172.31.23.98:6443: connect: connection refused" interval="400ms" Sep 5 23:53:51.764222 systemd[1]: Created slice kubepods-burstable-pod691d8c5ae5cbec4c22ec3f489d0e48f3.slice - libcontainer container kubepods-burstable-pod691d8c5ae5cbec4c22ec3f489d0e48f3.slice. Sep 5 23:53:51.787502 kubelet[2864]: E0905 23:53:51.787386 2864 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-98\" not found" node="ip-172-31-23-98" Sep 5 23:53:51.792038 systemd[1]: Created slice kubepods-burstable-podf5defd5dcfbb8cd0efae6916b33ebbe9.slice - libcontainer container kubepods-burstable-podf5defd5dcfbb8cd0efae6916b33ebbe9.slice. 
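[Editor's note] The slices systemd just created encode the kubelet's QoS cgroup layout: kubepods.slice at the top, kubepods-burstable.slice and kubepods-besteffort.slice per class, and one kubepods-burstable-pod<uid>.slice per static pod (the three uids here match the control-plane pod UIDs in the volume entries below). A simplified sketch of the QoS classification that routes a pod into those slices; the real rules additionally require both cpu and memory to be covered for Guaranteed:

package main

import "fmt"

// resources is a pared-down stand-in for a container's resource stanza.
type resources struct {
	requests, limits map[string]string
}

// qosClass applies the Kubernetes QoS rules in simplified form:
// BestEffort when nothing is set, Guaranteed when requests and limits
// are present and equal, Burstable otherwise.
func qosClass(containers []resources) string {
	hasAny, allEqual := false, true
	for _, c := range containers {
		if len(c.requests)+len(c.limits) > 0 {
			hasAny = true
		}
		if len(c.requests) == 0 || len(c.limits) != len(c.requests) {
			allEqual = false
		}
		for k, req := range c.requests {
			if c.limits[k] != req {
				allEqual = false
			}
		}
	}
	switch {
	case !hasAny:
		return "BestEffort"
	case allEqual:
		return "Guaranteed"
	default:
		return "Burstable"
	}
}

func main() {
	// kubeadm-style control-plane pods set cpu requests and no limits,
	// which is why all three land in kubepods-burstable-pod<uid>.slice.
	apiserver := []resources{{requests: map[string]string{"cpu": "250m"}}}
	fmt.Println(qosClass(apiserver)) // Burstable
}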
Sep 5 23:53:51.796372 kubelet[2864]: E0905 23:53:51.795970 2864 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-98\" not found" node="ip-172-31-23-98" Sep 5 23:53:51.843950 kubelet[2864]: I0905 23:53:51.843904 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b078b462aaaabcb07fa7f4b84e79481d-ca-certs\") pod \"kube-apiserver-ip-172-31-23-98\" (UID: \"b078b462aaaabcb07fa7f4b84e79481d\") " pod="kube-system/kube-apiserver-ip-172-31-23-98" Sep 5 23:53:51.844297 kubelet[2864]: I0905 23:53:51.844243 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b078b462aaaabcb07fa7f4b84e79481d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-98\" (UID: \"b078b462aaaabcb07fa7f4b84e79481d\") " pod="kube-system/kube-apiserver-ip-172-31-23-98" Sep 5 23:53:51.844501 kubelet[2864]: I0905 23:53:51.844445 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/691d8c5ae5cbec4c22ec3f489d0e48f3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-98\" (UID: \"691d8c5ae5cbec4c22ec3f489d0e48f3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-98" Sep 5 23:53:51.844501 kubelet[2864]: I0905 23:53:51.844492 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/691d8c5ae5cbec4c22ec3f489d0e48f3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-98\" (UID: \"691d8c5ae5cbec4c22ec3f489d0e48f3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-98" Sep 5 23:53:51.844683 kubelet[2864]: I0905 23:53:51.844535 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5defd5dcfbb8cd0efae6916b33ebbe9-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-98\" (UID: \"f5defd5dcfbb8cd0efae6916b33ebbe9\") " pod="kube-system/kube-scheduler-ip-172-31-23-98" Sep 5 23:53:51.844683 kubelet[2864]: I0905 23:53:51.844570 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b078b462aaaabcb07fa7f4b84e79481d-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-98\" (UID: \"b078b462aaaabcb07fa7f4b84e79481d\") " pod="kube-system/kube-apiserver-ip-172-31-23-98" Sep 5 23:53:51.844683 kubelet[2864]: I0905 23:53:51.844606 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/691d8c5ae5cbec4c22ec3f489d0e48f3-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-98\" (UID: \"691d8c5ae5cbec4c22ec3f489d0e48f3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-98" Sep 5 23:53:51.844683 kubelet[2864]: I0905 23:53:51.844640 2864 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/691d8c5ae5cbec4c22ec3f489d0e48f3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-98\" (UID: \"691d8c5ae5cbec4c22ec3f489d0e48f3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-98" Sep 5 23:53:51.844979 kubelet[2864]: I0905 23:53:51.844692 2864 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/691d8c5ae5cbec4c22ec3f489d0e48f3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-98\" (UID: \"691d8c5ae5cbec4c22ec3f489d0e48f3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-98" Sep 5 23:53:51.956438 kubelet[2864]: I0905 23:53:51.955723 2864 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-98" Sep 5 23:53:51.956438 kubelet[2864]: E0905 23:53:51.956327 2864 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.98:6443/api/v1/nodes\": dial tcp 172.31.23.98:6443: connect: connection refused" node="ip-172-31-23-98" Sep 5 23:53:52.056822 containerd[2003]: time="2025-09-05T23:53:52.056635870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-98,Uid:b078b462aaaabcb07fa7f4b84e79481d,Namespace:kube-system,Attempt:0,}" Sep 5 23:53:52.089018 containerd[2003]: time="2025-09-05T23:53:52.088840846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-98,Uid:691d8c5ae5cbec4c22ec3f489d0e48f3,Namespace:kube-system,Attempt:0,}" Sep 5 23:53:52.098581 containerd[2003]: time="2025-09-05T23:53:52.097968178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-98,Uid:f5defd5dcfbb8cd0efae6916b33ebbe9,Namespace:kube-system,Attempt:0,}" Sep 5 23:53:52.156059 kubelet[2864]: E0905 23:53:52.155991 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-98?timeout=10s\": dial tcp 172.31.23.98:6443: connect: connection refused" interval="800ms" Sep 5 23:53:52.359267 kubelet[2864]: I0905 23:53:52.359069 2864 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-98" Sep 5 23:53:52.360546 kubelet[2864]: E0905 23:53:52.360479 2864 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.98:6443/api/v1/nodes\": dial tcp 172.31.23.98:6443: connect: connection refused" node="ip-172-31-23-98" Sep 5 23:53:52.571152 kubelet[2864]: E0905 23:53:52.571055 2864 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.23.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-98&limit=500&resourceVersion=0\": dial tcp 172.31.23.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 23:53:52.584223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1474607627.mount: Deactivated successfully. 
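[Editor's note] The VerifyControllerAttachedVolume entries above are all hostPath volumes declared by the static pod manifests. The mapping sketched below follows the conventional kubeadm layout: the flexvolume path is the one probe.go reported earlier in this log, while the pki and kubeconfig locations are conventional assumptions, not values parsed from this log:

package main

import "fmt"

func main() {
	// Volume name (from the log) -> typical host location.
	hostPaths := map[string]string{
		"ca-certs":                  "/etc/ssl/certs",
		"k8s-certs":                 "/etc/kubernetes/pki",
		"kubeconfig":                "/etc/kubernetes/{controller-manager,scheduler}.conf",
		"flexvolume-dir":            "/opt/libexec/kubernetes/kubelet-plugins/volume/exec",
		"usr-share-ca-certificates": "/usr/share/ca-certificates",
	}
	for name, path := range hostPaths {
		fmt.Printf("%-26s -> %s\n", name, path)
	}
}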
Sep 5 23:53:52.600128 containerd[2003]: time="2025-09-05T23:53:52.598369044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:53:52.606206 containerd[2003]: time="2025-09-05T23:53:52.606115896Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 5 23:53:52.609197 containerd[2003]: time="2025-09-05T23:53:52.608114436Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:53:52.610145 kubelet[2864]: E0905 23:53:52.609978 2864 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.23.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 23:53:52.611030 containerd[2003]: time="2025-09-05T23:53:52.610915968Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:53:52.615160 containerd[2003]: time="2025-09-05T23:53:52.614967744Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:53:52.617975 containerd[2003]: time="2025-09-05T23:53:52.617568144Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 23:53:52.617975 containerd[2003]: time="2025-09-05T23:53:52.617735040Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:53:52.622477 containerd[2003]: time="2025-09-05T23:53:52.622311732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 23:53:52.624805 containerd[2003]: time="2025-09-05T23:53:52.624446040Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 567.643934ms" Sep 5 23:53:52.643762 containerd[2003]: time="2025-09-05T23:53:52.643591164Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.475878ms" Sep 5 23:53:52.650416 containerd[2003]: time="2025-09-05T23:53:52.649982845Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.903403ms" Sep 5 23:53:52.875706 kubelet[2864]: E0905 23:53:52.875504 2864 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.23.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 23:53:52.891462 containerd[2003]: time="2025-09-05T23:53:52.890322866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:52.891462 containerd[2003]: time="2025-09-05T23:53:52.890474006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:52.891462 containerd[2003]: time="2025-09-05T23:53:52.890523626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:52.891462 containerd[2003]: time="2025-09-05T23:53:52.890730734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:52.895215 containerd[2003]: time="2025-09-05T23:53:52.894745502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:52.895652 containerd[2003]: time="2025-09-05T23:53:52.895191158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:52.895652 containerd[2003]: time="2025-09-05T23:53:52.895535018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:52.896238 containerd[2003]: time="2025-09-05T23:53:52.895985798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:52.898915 containerd[2003]: time="2025-09-05T23:53:52.898745906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:53:52.899382 containerd[2003]: time="2025-09-05T23:53:52.898946210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:53:52.899382 containerd[2003]: time="2025-09-05T23:53:52.899021186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:52.899382 containerd[2003]: time="2025-09-05T23:53:52.899290538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:53:52.952725 systemd[1]: Started cri-containerd-18a3c05440c0b11816f7164f0afa21879319e20da9290264fe5b159a487bbf80.scope - libcontainer container 18a3c05440c0b11816f7164f0afa21879319e20da9290264fe5b159a487bbf80. 
Sep 5 23:53:52.959661 kubelet[2864]: E0905 23:53:52.958084 2864 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.23.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 23:53:52.959661 kubelet[2864]: E0905 23:53:52.958281 2864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-98?timeout=10s\": dial tcp 172.31.23.98:6443: connect: connection refused" interval="1.6s" Sep 5 23:53:52.979269 systemd[1]: Started cri-containerd-d9800aae53d7b0e5f3f14dfff2bfd0c6a588e6979179b584effa70bffa63142d.scope - libcontainer container d9800aae53d7b0e5f3f14dfff2bfd0c6a588e6979179b584effa70bffa63142d. Sep 5 23:53:52.996709 systemd[1]: Started cri-containerd-9e55655471a95584d347cadf16a9d22f47e4008164637cfbec38f2160ce9fb5e.scope - libcontainer container 9e55655471a95584d347cadf16a9d22f47e4008164637cfbec38f2160ce9fb5e. Sep 5 23:53:53.106518 containerd[2003]: time="2025-09-05T23:53:53.106443275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-98,Uid:691d8c5ae5cbec4c22ec3f489d0e48f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"18a3c05440c0b11816f7164f0afa21879319e20da9290264fe5b159a487bbf80\"" Sep 5 23:53:53.124371 containerd[2003]: time="2025-09-05T23:53:53.124280687Z" level=info msg="CreateContainer within sandbox \"18a3c05440c0b11816f7164f0afa21879319e20da9290264fe5b159a487bbf80\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 23:53:53.136918 containerd[2003]: time="2025-09-05T23:53:53.135501359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-98,Uid:b078b462aaaabcb07fa7f4b84e79481d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9800aae53d7b0e5f3f14dfff2bfd0c6a588e6979179b584effa70bffa63142d\"" Sep 5 23:53:53.145305 containerd[2003]: time="2025-09-05T23:53:53.145078619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-98,Uid:f5defd5dcfbb8cd0efae6916b33ebbe9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e55655471a95584d347cadf16a9d22f47e4008164637cfbec38f2160ce9fb5e\"" Sep 5 23:53:53.149098 containerd[2003]: time="2025-09-05T23:53:53.149032211Z" level=info msg="CreateContainer within sandbox \"d9800aae53d7b0e5f3f14dfff2bfd0c6a588e6979179b584effa70bffa63142d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 23:53:53.159092 containerd[2003]: time="2025-09-05T23:53:53.158859995Z" level=info msg="CreateContainer within sandbox \"9e55655471a95584d347cadf16a9d22f47e4008164637cfbec38f2160ce9fb5e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 23:53:53.165179 kubelet[2864]: I0905 23:53:53.165055 2864 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-98" Sep 5 23:53:53.165951 kubelet[2864]: E0905 23:53:53.165880 2864 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.98:6443/api/v1/nodes\": dial tcp 172.31.23.98:6443: connect: connection refused" node="ip-172-31-23-98" Sep 5 23:53:53.181784 containerd[2003]: time="2025-09-05T23:53:53.181687739Z" level=info msg="CreateContainer within sandbox 
\"18a3c05440c0b11816f7164f0afa21879319e20da9290264fe5b159a487bbf80\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"136ccf76cf4b721baf4b8429925cc6d21e694515e3dcd5ba65b3ac490984d74b\"" Sep 5 23:53:53.182903 containerd[2003]: time="2025-09-05T23:53:53.182821835Z" level=info msg="StartContainer for \"136ccf76cf4b721baf4b8429925cc6d21e694515e3dcd5ba65b3ac490984d74b\"" Sep 5 23:53:53.209190 containerd[2003]: time="2025-09-05T23:53:53.209127935Z" level=info msg="CreateContainer within sandbox \"d9800aae53d7b0e5f3f14dfff2bfd0c6a588e6979179b584effa70bffa63142d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e5ceb2478554c9d8e5ff40d387ed9d1e1b4f67c7c08408e8803ddfaf413533d6\"" Sep 5 23:53:53.210779 containerd[2003]: time="2025-09-05T23:53:53.210544079Z" level=info msg="StartContainer for \"e5ceb2478554c9d8e5ff40d387ed9d1e1b4f67c7c08408e8803ddfaf413533d6\"" Sep 5 23:53:53.213811 containerd[2003]: time="2025-09-05T23:53:53.213631055Z" level=info msg="CreateContainer within sandbox \"9e55655471a95584d347cadf16a9d22f47e4008164637cfbec38f2160ce9fb5e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1acacc9217baff95e7c2818ae8a205f5d13dc159c9e3ecf3272afacb0269ce1f\"" Sep 5 23:53:53.217410 containerd[2003]: time="2025-09-05T23:53:53.215552555Z" level=info msg="StartContainer for \"1acacc9217baff95e7c2818ae8a205f5d13dc159c9e3ecf3272afacb0269ce1f\"" Sep 5 23:53:53.254316 systemd[1]: Started cri-containerd-136ccf76cf4b721baf4b8429925cc6d21e694515e3dcd5ba65b3ac490984d74b.scope - libcontainer container 136ccf76cf4b721baf4b8429925cc6d21e694515e3dcd5ba65b3ac490984d74b. Sep 5 23:53:53.296630 systemd[1]: Started cri-containerd-e5ceb2478554c9d8e5ff40d387ed9d1e1b4f67c7c08408e8803ddfaf413533d6.scope - libcontainer container e5ceb2478554c9d8e5ff40d387ed9d1e1b4f67c7c08408e8803ddfaf413533d6. Sep 5 23:53:53.334722 systemd[1]: Started cri-containerd-1acacc9217baff95e7c2818ae8a205f5d13dc159c9e3ecf3272afacb0269ce1f.scope - libcontainer container 1acacc9217baff95e7c2818ae8a205f5d13dc159c9e3ecf3272afacb0269ce1f. 
Sep 5 23:53:53.412055 containerd[2003]: time="2025-09-05T23:53:53.410888412Z" level=info msg="StartContainer for \"136ccf76cf4b721baf4b8429925cc6d21e694515e3dcd5ba65b3ac490984d74b\" returns successfully" Sep 5 23:53:53.456549 containerd[2003]: time="2025-09-05T23:53:53.456257281Z" level=info msg="StartContainer for \"e5ceb2478554c9d8e5ff40d387ed9d1e1b4f67c7c08408e8803ddfaf413533d6\" returns successfully" Sep 5 23:53:53.500716 containerd[2003]: time="2025-09-05T23:53:53.499969909Z" level=info msg="StartContainer for \"1acacc9217baff95e7c2818ae8a205f5d13dc159c9e3ecf3272afacb0269ce1f\" returns successfully" Sep 5 23:53:53.529090 kubelet[2864]: E0905 23:53:53.529009 2864 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.23.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.98:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 5 23:53:53.633997 kubelet[2864]: E0905 23:53:53.633181 2864 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-98\" not found" node="ip-172-31-23-98" Sep 5 23:53:53.644832 kubelet[2864]: E0905 23:53:53.644251 2864 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-98\" not found" node="ip-172-31-23-98" Sep 5 23:53:53.651048 kubelet[2864]: E0905 23:53:53.650995 2864 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-98\" not found" node="ip-172-31-23-98" Sep 5 23:53:54.655957 kubelet[2864]: E0905 23:53:54.655909 2864 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-98\" not found" node="ip-172-31-23-98" Sep 5 23:53:54.661136 kubelet[2864]: E0905 23:53:54.656519 2864 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-98\" not found" node="ip-172-31-23-98" Sep 5 23:53:54.661677 kubelet[2864]: E0905 23:53:54.657092 2864 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-98\" not found" node="ip-172-31-23-98" Sep 5 23:53:54.770222 kubelet[2864]: I0905 23:53:54.770106 2864 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-98" Sep 5 23:53:54.831495 update_engine[1994]: I20250905 23:53:54.831388 1994 update_attempter.cc:509] Updating boot flags... 
Sep 5 23:53:55.009416 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3159) Sep 5 23:53:55.459381 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3158) Sep 5 23:53:55.902375 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3158) Sep 5 23:53:57.099313 kubelet[2864]: E0905 23:53:57.099039 2864 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-98\" not found" node="ip-172-31-23-98" Sep 5 23:53:57.936425 kubelet[2864]: E0905 23:53:57.936359 2864 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-98\" not found" node="ip-172-31-23-98" Sep 5 23:53:57.939879 kubelet[2864]: E0905 23:53:57.939417 2864 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-98.18628815b9d01d57 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-98,UID:ip-172-31-23-98,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-98,},FirstTimestamp:2025-09-05 23:53:51.521029463 +0000 UTC m=+3.004902580,LastTimestamp:2025-09-05 23:53:51.521029463 +0000 UTC m=+3.004902580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-98,}" Sep 5 23:53:57.977395 kubelet[2864]: I0905 23:53:57.976002 2864 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-23-98" Sep 5 23:53:58.042810 kubelet[2864]: I0905 23:53:58.042740 2864 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-98" Sep 5 23:53:58.082299 kubelet[2864]: E0905 23:53:58.082224 2864 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-98\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-23-98" Sep 5 23:53:58.082299 kubelet[2864]: I0905 23:53:58.082287 2864 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-98" Sep 5 23:53:58.088303 kubelet[2864]: E0905 23:53:58.087941 2864 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-23-98\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-23-98" Sep 5 23:53:58.088303 kubelet[2864]: I0905 23:53:58.087991 2864 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-98" Sep 5 23:53:58.091767 kubelet[2864]: E0905 23:53:58.091704 2864 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-98\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-23-98" Sep 5 23:53:58.517216 kubelet[2864]: I0905 23:53:58.517092 2864 apiserver.go:52] "Watching apiserver" Sep 5 23:53:58.543821 kubelet[2864]: I0905 23:53:58.543748 2864 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 23:54:00.507281 systemd[1]: Reloading requested from client PID 3418 ('systemctl') (unit session-9.scope)... 
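[Editor's note] The "no PriorityClass with name system-node-critical" rejections above resolve on their own: the apiserver's bootstrap controllers create the built-in priority classes shortly after it comes up, after which the kubelet's mirror-pod retries succeed (the mirror pods appear at 23:54:02 below). A hedged probe for that object; the URL path is the real scheduling.k8s.io/v1 endpoint, while skipping TLS verification and sending no credentials are deliberate simplifications:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	url := "https://172.31.23.98:6443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical"
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(url) // a locked-down cluster may answer 401/403 here
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // 404 until the bootstrap controller creates it
}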
Sep 5 23:54:00.507316 systemd[1]: Reloading... Sep 5 23:54:00.744397 zram_generator::config[3464]: No configuration found. Sep 5 23:54:00.981453 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 23:54:01.222391 systemd[1]: Reloading finished in 714 ms. Sep 5 23:54:01.324648 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:54:01.338490 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 23:54:01.339173 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:54:01.339459 systemd[1]: kubelet.service: Consumed 3.860s CPU time, 131.6M memory peak, 0B memory swap peak. Sep 5 23:54:01.356562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 23:54:01.759954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 23:54:01.773158 (kubelet)[3518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 23:54:01.881917 kubelet[3518]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:54:01.881917 kubelet[3518]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 23:54:01.881917 kubelet[3518]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 23:54:01.882763 kubelet[3518]: I0905 23:54:01.882391 3518 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 23:54:01.902704 kubelet[3518]: I0905 23:54:01.901804 3518 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 23:54:01.902704 kubelet[3518]: I0905 23:54:01.901895 3518 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 23:54:01.902956 kubelet[3518]: I0905 23:54:01.902928 3518 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 23:54:01.909043 kubelet[3518]: I0905 23:54:01.907527 3518 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 5 23:54:01.918985 kubelet[3518]: I0905 23:54:01.918900 3518 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 23:54:01.932824 kubelet[3518]: E0905 23:54:01.932761 3518 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 23:54:01.933225 kubelet[3518]: I0905 23:54:01.933145 3518 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 23:54:01.947403 kubelet[3518]: I0905 23:54:01.945605 3518 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 23:54:01.947403 kubelet[3518]: I0905 23:54:01.946101 3518 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 23:54:01.947403 kubelet[3518]: I0905 23:54:01.946160 3518 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-98","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 23:54:01.947403 kubelet[3518]: I0905 23:54:01.946725 3518 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 23:54:01.947846 kubelet[3518]: I0905 23:54:01.946751 3518 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 23:54:01.947846 kubelet[3518]: I0905 23:54:01.946883 3518 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:54:01.947846 kubelet[3518]: I0905 23:54:01.947167 3518 kubelet.go:480] "Attempting to sync node with API server" Sep 5 23:54:01.947846 kubelet[3518]: I0905 23:54:01.947191 3518 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 23:54:01.947846 kubelet[3518]: I0905 23:54:01.947238 3518 kubelet.go:386] "Adding apiserver pod source" Sep 5 23:54:01.947846 kubelet[3518]: I0905 23:54:01.947271 3518 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 23:54:01.956701 kubelet[3518]: I0905 23:54:01.956638 3518 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 23:54:01.957808 kubelet[3518]: I0905 23:54:01.957744 3518 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 23:54:01.968175 kubelet[3518]: I0905 23:54:01.968102 3518 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 23:54:01.968512 kubelet[3518]: I0905 23:54:01.968186 3518 server.go:1289] "Started kubelet" Sep 5 23:54:01.981242 kubelet[3518]: I0905 23:54:01.981180 3518 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 23:54:01.992364 kubelet[3518]: I0905 23:54:01.992285 3518 
server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 23:54:02.000311 kubelet[3518]: I0905 23:54:02.000267 3518 server.go:317] "Adding debug handlers to kubelet server" Sep 5 23:54:02.019456 kubelet[3518]: I0905 23:54:02.019119 3518 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 23:54:02.020927 kubelet[3518]: I0905 23:54:02.020874 3518 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 23:54:02.036676 kubelet[3518]: I0905 23:54:02.024764 3518 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 23:54:02.036999 kubelet[3518]: I0905 23:54:02.031270 3518 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 23:54:02.038184 kubelet[3518]: I0905 23:54:02.031291 3518 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 23:54:02.038184 kubelet[3518]: E0905 23:54:02.032883 3518 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-98\" not found" Sep 5 23:54:02.038958 kubelet[3518]: I0905 23:54:02.038737 3518 reconciler.go:26] "Reconciler: start to sync state" Sep 5 23:54:02.076183 kubelet[3518]: E0905 23:54:02.075822 3518 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 23:54:02.080781 kubelet[3518]: I0905 23:54:02.080325 3518 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 23:54:02.097010 kubelet[3518]: I0905 23:54:02.096941 3518 factory.go:223] Registration of the containerd container factory successfully Sep 5 23:54:02.097506 kubelet[3518]: I0905 23:54:02.097299 3518 factory.go:223] Registration of the systemd container factory successfully Sep 5 23:54:02.129414 kubelet[3518]: I0905 23:54:02.128694 3518 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 23:54:02.157748 kubelet[3518]: I0905 23:54:02.157539 3518 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 23:54:02.157748 kubelet[3518]: I0905 23:54:02.157626 3518 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 23:54:02.157748 kubelet[3518]: I0905 23:54:02.157699 3518 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
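[Editor's note] watchdog_linux.go reports no systemd watchdog because kubelet.service does not set WatchdogSec=. For reference, a minimal sketch of the sd_notify keep-alive a unit sends when the watchdog is enabled (protocol per sd_notify(3); this is not something the kubelet above is doing):

package main

import (
	"net"
	"os"
)

// notifyWatchdog sends the sd_notify keep-alive datagram. systemd only
// sets NOTIFY_SOCKET (and expects these pings) when the unit declares
// WatchdogSec=.
func notifyWatchdog() error {
	sock := os.Getenv("NOTIFY_SOCKET")
	if sock == "" {
		return nil // watchdog not enabled, matching the log above
	}
	conn, err := net.DialUnix("unixgram", nil, &net.UnixAddr{Name: sock, Net: "unixgram"})
	if err != nil {
		return err
	}
	defer conn.Close()
	_, err = conn.Write([]byte("WATCHDOG=1"))
	return err
}

func main() { _ = notifyWatchdog() }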
Sep 5 23:54:02.157748 kubelet[3518]: I0905 23:54:02.157717 3518 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 23:54:02.158058 kubelet[3518]: E0905 23:54:02.157867 3518 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 23:54:02.256228 kubelet[3518]: I0905 23:54:02.256166 3518 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 23:54:02.256228 kubelet[3518]: I0905 23:54:02.256227 3518 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 23:54:02.256573 kubelet[3518]: I0905 23:54:02.256266 3518 state_mem.go:36] "Initialized new in-memory state store" Sep 5 23:54:02.256816 kubelet[3518]: I0905 23:54:02.256759 3518 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 23:54:02.256929 kubelet[3518]: I0905 23:54:02.256806 3518 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 23:54:02.256929 kubelet[3518]: I0905 23:54:02.256877 3518 policy_none.go:49] "None policy: Start" Sep 5 23:54:02.256929 kubelet[3518]: I0905 23:54:02.256902 3518 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 23:54:02.257121 kubelet[3518]: I0905 23:54:02.256957 3518 state_mem.go:35] "Initializing new in-memory state store" Sep 5 23:54:02.257507 kubelet[3518]: I0905 23:54:02.257431 3518 state_mem.go:75] "Updated machine memory state" Sep 5 23:54:02.259897 kubelet[3518]: E0905 23:54:02.259836 3518 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 23:54:02.272453 kubelet[3518]: E0905 23:54:02.272271 3518 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 23:54:02.272727 kubelet[3518]: I0905 23:54:02.272675 3518 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 23:54:02.272839 kubelet[3518]: I0905 23:54:02.272724 3518 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 23:54:02.284610 kubelet[3518]: I0905 23:54:02.273938 3518 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 23:54:02.286417 kubelet[3518]: E0905 23:54:02.286129 3518 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 23:54:02.402290 kubelet[3518]: I0905 23:54:02.402230 3518 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-98" Sep 5 23:54:02.423868 kubelet[3518]: I0905 23:54:02.423798 3518 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-23-98" Sep 5 23:54:02.424449 kubelet[3518]: I0905 23:54:02.424032 3518 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-23-98" Sep 5 23:54:02.461802 kubelet[3518]: I0905 23:54:02.461751 3518 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-98" Sep 5 23:54:02.463744 kubelet[3518]: I0905 23:54:02.463669 3518 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-98" Sep 5 23:54:02.468124 kubelet[3518]: I0905 23:54:02.466912 3518 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-98" Sep 5 23:54:02.542254 kubelet[3518]: I0905 23:54:02.542089 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/691d8c5ae5cbec4c22ec3f489d0e48f3-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-98\" (UID: \"691d8c5ae5cbec4c22ec3f489d0e48f3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-98" Sep 5 23:54:02.542254 kubelet[3518]: I0905 23:54:02.542163 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5defd5dcfbb8cd0efae6916b33ebbe9-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-98\" (UID: \"f5defd5dcfbb8cd0efae6916b33ebbe9\") " pod="kube-system/kube-scheduler-ip-172-31-23-98" Sep 5 23:54:02.542254 kubelet[3518]: I0905 23:54:02.542211 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b078b462aaaabcb07fa7f4b84e79481d-ca-certs\") pod \"kube-apiserver-ip-172-31-23-98\" (UID: \"b078b462aaaabcb07fa7f4b84e79481d\") " pod="kube-system/kube-apiserver-ip-172-31-23-98" Sep 5 23:54:02.542523 kubelet[3518]: I0905 23:54:02.542284 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b078b462aaaabcb07fa7f4b84e79481d-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-98\" (UID: \"b078b462aaaabcb07fa7f4b84e79481d\") " pod="kube-system/kube-apiserver-ip-172-31-23-98" Sep 5 23:54:02.542523 kubelet[3518]: I0905 23:54:02.542372 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b078b462aaaabcb07fa7f4b84e79481d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-98\" (UID: \"b078b462aaaabcb07fa7f4b84e79481d\") " pod="kube-system/kube-apiserver-ip-172-31-23-98" Sep 5 23:54:02.542523 kubelet[3518]: I0905 23:54:02.542426 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/691d8c5ae5cbec4c22ec3f489d0e48f3-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-98\" (UID: \"691d8c5ae5cbec4c22ec3f489d0e48f3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-98" Sep 5 23:54:02.542523 kubelet[3518]: I0905 23:54:02.542465 3518 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/691d8c5ae5cbec4c22ec3f489d0e48f3-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-98\" (UID: \"691d8c5ae5cbec4c22ec3f489d0e48f3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-98" Sep 5 23:54:02.542523 kubelet[3518]: I0905 23:54:02.542507 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/691d8c5ae5cbec4c22ec3f489d0e48f3-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-98\" (UID: \"691d8c5ae5cbec4c22ec3f489d0e48f3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-98" Sep 5 23:54:02.542845 kubelet[3518]: I0905 23:54:02.542568 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/691d8c5ae5cbec4c22ec3f489d0e48f3-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-98\" (UID: \"691d8c5ae5cbec4c22ec3f489d0e48f3\") " pod="kube-system/kube-controller-manager-ip-172-31-23-98" Sep 5 23:54:02.952081 kubelet[3518]: I0905 23:54:02.952007 3518 apiserver.go:52] "Watching apiserver" Sep 5 23:54:03.039283 kubelet[3518]: I0905 23:54:03.039202 3518 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 23:54:03.279456 kubelet[3518]: I0905 23:54:03.277609 3518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-98" podStartSLOduration=1.277582857 podStartE2EDuration="1.277582857s" podCreationTimestamp="2025-09-05 23:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:54:03.255330009 +0000 UTC m=+1.467622724" watchObservedRunningTime="2025-09-05 23:54:03.277582857 +0000 UTC m=+1.489875416" Sep 5 23:54:03.322031 kubelet[3518]: I0905 23:54:03.321542 3518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-98" podStartSLOduration=1.32152045 podStartE2EDuration="1.32152045s" podCreationTimestamp="2025-09-05 23:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:54:03.282042429 +0000 UTC m=+1.494334988" watchObservedRunningTime="2025-09-05 23:54:03.32152045 +0000 UTC m=+1.533812997" Sep 5 23:54:03.324287 kubelet[3518]: I0905 23:54:03.324211 3518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-98" podStartSLOduration=1.324190954 podStartE2EDuration="1.324190954s" podCreationTimestamp="2025-09-05 23:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:54:03.324135454 +0000 UTC m=+1.536427989" watchObservedRunningTime="2025-09-05 23:54:03.324190954 +0000 UTC m=+1.536483489" Sep 5 23:54:06.019281 kubelet[3518]: I0905 23:54:06.019243 3518 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 23:54:06.022601 containerd[2003]: time="2025-09-05T23:54:06.022525811Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 5 23:54:06.023197 kubelet[3518]: I0905 23:54:06.022991 3518 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 23:54:06.869772 systemd[1]: Created slice kubepods-besteffort-pod11410dcf_06f7_49c1_9c5d_2a995ea6416a.slice - libcontainer container kubepods-besteffort-pod11410dcf_06f7_49c1_9c5d_2a995ea6416a.slice. Sep 5 23:54:06.874578 kubelet[3518]: I0905 23:54:06.874482 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/11410dcf-06f7-49c1-9c5d-2a995ea6416a-kube-proxy\") pod \"kube-proxy-q6fkz\" (UID: \"11410dcf-06f7-49c1-9c5d-2a995ea6416a\") " pod="kube-system/kube-proxy-q6fkz" Sep 5 23:54:06.874578 kubelet[3518]: I0905 23:54:06.874579 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11410dcf-06f7-49c1-9c5d-2a995ea6416a-xtables-lock\") pod \"kube-proxy-q6fkz\" (UID: \"11410dcf-06f7-49c1-9c5d-2a995ea6416a\") " pod="kube-system/kube-proxy-q6fkz" Sep 5 23:54:06.875958 kubelet[3518]: I0905 23:54:06.874649 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11410dcf-06f7-49c1-9c5d-2a995ea6416a-lib-modules\") pod \"kube-proxy-q6fkz\" (UID: \"11410dcf-06f7-49c1-9c5d-2a995ea6416a\") " pod="kube-system/kube-proxy-q6fkz" Sep 5 23:54:06.875958 kubelet[3518]: I0905 23:54:06.874693 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xmkz\" (UniqueName: \"kubernetes.io/projected/11410dcf-06f7-49c1-9c5d-2a995ea6416a-kube-api-access-2xmkz\") pod \"kube-proxy-q6fkz\" (UID: \"11410dcf-06f7-49c1-9c5d-2a995ea6416a\") " pod="kube-system/kube-proxy-q6fkz" Sep 5 23:54:07.169186 systemd[1]: Created slice kubepods-besteffort-pod1006d99f_beb6_4635_9c2b_26c746882cfd.slice - libcontainer container kubepods-besteffort-pod1006d99f_beb6_4635_9c2b_26c746882cfd.slice. Sep 5 23:54:07.176907 kubelet[3518]: I0905 23:54:07.176825 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42lxz\" (UniqueName: \"kubernetes.io/projected/1006d99f-beb6-4635-9c2b-26c746882cfd-kube-api-access-42lxz\") pod \"tigera-operator-755d956888-xkskx\" (UID: \"1006d99f-beb6-4635-9c2b-26c746882cfd\") " pod="tigera-operator/tigera-operator-755d956888-xkskx" Sep 5 23:54:07.177594 kubelet[3518]: I0905 23:54:07.177071 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1006d99f-beb6-4635-9c2b-26c746882cfd-var-lib-calico\") pod \"tigera-operator-755d956888-xkskx\" (UID: \"1006d99f-beb6-4635-9c2b-26c746882cfd\") " pod="tigera-operator/tigera-operator-755d956888-xkskx" Sep 5 23:54:07.187651 containerd[2003]: time="2025-09-05T23:54:07.187056181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q6fkz,Uid:11410dcf-06f7-49c1-9c5d-2a995ea6416a,Namespace:kube-system,Attempt:0,}" Sep 5 23:54:07.242066 containerd[2003]: time="2025-09-05T23:54:07.241451413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:54:07.242066 containerd[2003]: time="2025-09-05T23:54:07.241568929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:54:07.242066 containerd[2003]: time="2025-09-05T23:54:07.241657501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:07.242066 containerd[2003]: time="2025-09-05T23:54:07.241906441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:07.289937 systemd[1]: Started cri-containerd-bd20a902e32995169b114dfec58bb55c62012bb124e6f46846b6c08934e86dd4.scope - libcontainer container bd20a902e32995169b114dfec58bb55c62012bb124e6f46846b6c08934e86dd4. Sep 5 23:54:07.352015 containerd[2003]: time="2025-09-05T23:54:07.351930950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q6fkz,Uid:11410dcf-06f7-49c1-9c5d-2a995ea6416a,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd20a902e32995169b114dfec58bb55c62012bb124e6f46846b6c08934e86dd4\"" Sep 5 23:54:07.363931 containerd[2003]: time="2025-09-05T23:54:07.363813014Z" level=info msg="CreateContainer within sandbox \"bd20a902e32995169b114dfec58bb55c62012bb124e6f46846b6c08934e86dd4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 23:54:07.397110 containerd[2003]: time="2025-09-05T23:54:07.397026266Z" level=info msg="CreateContainer within sandbox \"bd20a902e32995169b114dfec58bb55c62012bb124e6f46846b6c08934e86dd4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7ccef0f1ef4c326db70a98935fb54493ab5a07957b4d6808978b8fe5241a06e6\"" Sep 5 23:54:07.398091 containerd[2003]: time="2025-09-05T23:54:07.398019722Z" level=info msg="StartContainer for \"7ccef0f1ef4c326db70a98935fb54493ab5a07957b4d6808978b8fe5241a06e6\"" Sep 5 23:54:07.453704 systemd[1]: Started cri-containerd-7ccef0f1ef4c326db70a98935fb54493ab5a07957b4d6808978b8fe5241a06e6.scope - libcontainer container 7ccef0f1ef4c326db70a98935fb54493ab5a07957b4d6808978b8fe5241a06e6. Sep 5 23:54:07.479295 containerd[2003]: time="2025-09-05T23:54:07.479206742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-xkskx,Uid:1006d99f-beb6-4635-9c2b-26c746882cfd,Namespace:tigera-operator,Attempt:0,}" Sep 5 23:54:07.523959 containerd[2003]: time="2025-09-05T23:54:07.523798106Z" level=info msg="StartContainer for \"7ccef0f1ef4c326db70a98935fb54493ab5a07957b4d6808978b8fe5241a06e6\" returns successfully" Sep 5 23:54:07.560134 containerd[2003]: time="2025-09-05T23:54:07.559882587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:54:07.560134 containerd[2003]: time="2025-09-05T23:54:07.560029695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:54:07.560134 containerd[2003]: time="2025-09-05T23:54:07.560071671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:07.560845 containerd[2003]: time="2025-09-05T23:54:07.560232615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:07.603227 systemd[1]: Started cri-containerd-566f588f5f0954bc4c7d193b345684491a4b5b926ce40fd2f7b9220f7693fa31.scope - libcontainer container 566f588f5f0954bc4c7d193b345684491a4b5b926ce40fd2f7b9220f7693fa31. 
Sep 5 23:54:07.692389 containerd[2003]: time="2025-09-05T23:54:07.692295831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-xkskx,Uid:1006d99f-beb6-4635-9c2b-26c746882cfd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"566f588f5f0954bc4c7d193b345684491a4b5b926ce40fd2f7b9220f7693fa31\"" Sep 5 23:54:07.706257 containerd[2003]: time="2025-09-05T23:54:07.702864279Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 23:54:08.251188 kubelet[3518]: I0905 23:54:08.250791 3518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q6fkz" podStartSLOduration=2.2505776539999998 podStartE2EDuration="2.250577654s" podCreationTimestamp="2025-09-05 23:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:54:08.248333894 +0000 UTC m=+6.460626501" watchObservedRunningTime="2025-09-05 23:54:08.250577654 +0000 UTC m=+6.462870309" Sep 5 23:54:09.122594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1543899577.mount: Deactivated successfully. Sep 5 23:54:09.974320 containerd[2003]: time="2025-09-05T23:54:09.974263963Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:09.977212 containerd[2003]: time="2025-09-05T23:54:09.977096059Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 5 23:54:09.977682 containerd[2003]: time="2025-09-05T23:54:09.977637511Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:09.982941 containerd[2003]: time="2025-09-05T23:54:09.982270435Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:09.985552 containerd[2003]: time="2025-09-05T23:54:09.985488559Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.27992968s" Sep 5 23:54:09.985686 containerd[2003]: time="2025-09-05T23:54:09.985552435Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 5 23:54:09.998010 containerd[2003]: time="2025-09-05T23:54:09.997910119Z" level=info msg="CreateContainer within sandbox \"566f588f5f0954bc4c7d193b345684491a4b5b926ce40fd2f7b9220f7693fa31\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 23:54:10.024566 containerd[2003]: time="2025-09-05T23:54:10.024490839Z" level=info msg="CreateContainer within sandbox \"566f588f5f0954bc4c7d193b345684491a4b5b926ce40fd2f7b9220f7693fa31\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5e095b30f2fd85cd377cf5f415593514aeb3937ce5b70b4f216383a686857de3\"" Sep 5 23:54:10.026384 containerd[2003]: time="2025-09-05T23:54:10.026241075Z" level=info msg="StartContainer for \"5e095b30f2fd85cd377cf5f415593514aeb3937ce5b70b4f216383a686857de3\"" Sep 5 
23:54:10.094674 systemd[1]: Started cri-containerd-5e095b30f2fd85cd377cf5f415593514aeb3937ce5b70b4f216383a686857de3.scope - libcontainer container 5e095b30f2fd85cd377cf5f415593514aeb3937ce5b70b4f216383a686857de3. Sep 5 23:54:10.146724 containerd[2003]: time="2025-09-05T23:54:10.146635887Z" level=info msg="StartContainer for \"5e095b30f2fd85cd377cf5f415593514aeb3937ce5b70b4f216383a686857de3\" returns successfully" Sep 5 23:54:10.273471 kubelet[3518]: I0905 23:54:10.271961 3518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-xkskx" podStartSLOduration=0.983309884 podStartE2EDuration="3.2719396s" podCreationTimestamp="2025-09-05 23:54:07 +0000 UTC" firstStartedPulling="2025-09-05 23:54:07.698876499 +0000 UTC m=+5.911169034" lastFinishedPulling="2025-09-05 23:54:09.987506227 +0000 UTC m=+8.199798750" observedRunningTime="2025-09-05 23:54:10.270880984 +0000 UTC m=+8.483173519" watchObservedRunningTime="2025-09-05 23:54:10.2719396 +0000 UTC m=+8.484232135" Sep 5 23:54:19.169806 sudo[2362]: pam_unix(sudo:session): session closed for user root Sep 5 23:54:19.196687 sshd[2359]: pam_unix(sshd:session): session closed for user core Sep 5 23:54:19.205384 systemd[1]: sshd@8-172.31.23.98:22-139.178.68.195:57216.service: Deactivated successfully. Sep 5 23:54:19.212796 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 23:54:19.213814 systemd[1]: session-9.scope: Consumed 9.420s CPU time, 155.4M memory peak, 0B memory swap peak. Sep 5 23:54:19.215815 systemd-logind[1993]: Session 9 logged out. Waiting for processes to exit. Sep 5 23:54:19.218513 systemd-logind[1993]: Removed session 9. Sep 5 23:54:33.485254 systemd[1]: Created slice kubepods-besteffort-pod666bbbf2_91a1_42ab_a7a8_9f941854aa1e.slice - libcontainer container kubepods-besteffort-pod666bbbf2_91a1_42ab_a7a8_9f941854aa1e.slice. 
Sep 5 23:54:33.555878 kubelet[3518]: I0905 23:54:33.555610 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/666bbbf2-91a1-42ab-a7a8-9f941854aa1e-tigera-ca-bundle\") pod \"calico-typha-55c688fc4b-2drnc\" (UID: \"666bbbf2-91a1-42ab-a7a8-9f941854aa1e\") " pod="calico-system/calico-typha-55c688fc4b-2drnc" Sep 5 23:54:33.555878 kubelet[3518]: I0905 23:54:33.555716 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfdf6\" (UniqueName: \"kubernetes.io/projected/666bbbf2-91a1-42ab-a7a8-9f941854aa1e-kube-api-access-nfdf6\") pod \"calico-typha-55c688fc4b-2drnc\" (UID: \"666bbbf2-91a1-42ab-a7a8-9f941854aa1e\") " pod="calico-system/calico-typha-55c688fc4b-2drnc" Sep 5 23:54:33.555878 kubelet[3518]: I0905 23:54:33.555789 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/666bbbf2-91a1-42ab-a7a8-9f941854aa1e-typha-certs\") pod \"calico-typha-55c688fc4b-2drnc\" (UID: \"666bbbf2-91a1-42ab-a7a8-9f941854aa1e\") " pod="calico-system/calico-typha-55c688fc4b-2drnc" Sep 5 23:54:33.803246 containerd[2003]: time="2025-09-05T23:54:33.802532081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55c688fc4b-2drnc,Uid:666bbbf2-91a1-42ab-a7a8-9f941854aa1e,Namespace:calico-system,Attempt:0,}" Sep 5 23:54:33.864383 kubelet[3518]: I0905 23:54:33.861872 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7f51bd08-2517-4f30-8fc0-858b388ecc1f-cni-bin-dir\") pod \"calico-node-6x7ds\" (UID: \"7f51bd08-2517-4f30-8fc0-858b388ecc1f\") " pod="calico-system/calico-node-6x7ds" Sep 5 23:54:33.869468 kubelet[3518]: I0905 23:54:33.864708 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7f51bd08-2517-4f30-8fc0-858b388ecc1f-cni-net-dir\") pod \"calico-node-6x7ds\" (UID: \"7f51bd08-2517-4f30-8fc0-858b388ecc1f\") " pod="calico-system/calico-node-6x7ds" Sep 5 23:54:33.869468 kubelet[3518]: I0905 23:54:33.864785 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7f51bd08-2517-4f30-8fc0-858b388ecc1f-var-lib-calico\") pod \"calico-node-6x7ds\" (UID: \"7f51bd08-2517-4f30-8fc0-858b388ecc1f\") " pod="calico-system/calico-node-6x7ds" Sep 5 23:54:33.869468 kubelet[3518]: I0905 23:54:33.864857 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7f51bd08-2517-4f30-8fc0-858b388ecc1f-flexvol-driver-host\") pod \"calico-node-6x7ds\" (UID: \"7f51bd08-2517-4f30-8fc0-858b388ecc1f\") " pod="calico-system/calico-node-6x7ds" Sep 5 23:54:33.869468 kubelet[3518]: I0905 23:54:33.864912 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7f51bd08-2517-4f30-8fc0-858b388ecc1f-var-run-calico\") pod \"calico-node-6x7ds\" (UID: \"7f51bd08-2517-4f30-8fc0-858b388ecc1f\") " pod="calico-system/calico-node-6x7ds" Sep 5 23:54:33.869468 kubelet[3518]: I0905 23:54:33.864963 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f51bd08-2517-4f30-8fc0-858b388ecc1f-xtables-lock\") pod \"calico-node-6x7ds\" (UID: \"7f51bd08-2517-4f30-8fc0-858b388ecc1f\") " pod="calico-system/calico-node-6x7ds" Sep 5 23:54:33.869854 kubelet[3518]: I0905 23:54:33.865019 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7f51bd08-2517-4f30-8fc0-858b388ecc1f-node-certs\") pod \"calico-node-6x7ds\" (UID: \"7f51bd08-2517-4f30-8fc0-858b388ecc1f\") " pod="calico-system/calico-node-6x7ds" Sep 5 23:54:33.869854 kubelet[3518]: I0905 23:54:33.865080 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7f51bd08-2517-4f30-8fc0-858b388ecc1f-cni-log-dir\") pod \"calico-node-6x7ds\" (UID: \"7f51bd08-2517-4f30-8fc0-858b388ecc1f\") " pod="calico-system/calico-node-6x7ds" Sep 5 23:54:33.869854 kubelet[3518]: I0905 23:54:33.865132 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f51bd08-2517-4f30-8fc0-858b388ecc1f-lib-modules\") pod \"calico-node-6x7ds\" (UID: \"7f51bd08-2517-4f30-8fc0-858b388ecc1f\") " pod="calico-system/calico-node-6x7ds" Sep 5 23:54:33.869854 kubelet[3518]: I0905 23:54:33.865175 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7f51bd08-2517-4f30-8fc0-858b388ecc1f-policysync\") pod \"calico-node-6x7ds\" (UID: \"7f51bd08-2517-4f30-8fc0-858b388ecc1f\") " pod="calico-system/calico-node-6x7ds" Sep 5 23:54:33.869854 kubelet[3518]: I0905 23:54:33.865222 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f51bd08-2517-4f30-8fc0-858b388ecc1f-tigera-ca-bundle\") pod \"calico-node-6x7ds\" (UID: \"7f51bd08-2517-4f30-8fc0-858b388ecc1f\") " pod="calico-system/calico-node-6x7ds" Sep 5 23:54:33.870143 kubelet[3518]: I0905 23:54:33.865271 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gngs2\" (UniqueName: \"kubernetes.io/projected/7f51bd08-2517-4f30-8fc0-858b388ecc1f-kube-api-access-gngs2\") pod \"calico-node-6x7ds\" (UID: \"7f51bd08-2517-4f30-8fc0-858b388ecc1f\") " pod="calico-system/calico-node-6x7ds" Sep 5 23:54:33.898766 systemd[1]: Created slice kubepods-besteffort-pod7f51bd08_2517_4f30_8fc0_858b388ecc1f.slice - libcontainer container kubepods-besteffort-pod7f51bd08_2517_4f30_8fc0_858b388ecc1f.slice. Sep 5 23:54:33.925974 containerd[2003]: time="2025-09-05T23:54:33.925777734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:54:33.925974 containerd[2003]: time="2025-09-05T23:54:33.925900326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:54:33.926509 containerd[2003]: time="2025-09-05T23:54:33.926332554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:33.928056 containerd[2003]: time="2025-09-05T23:54:33.927522102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:33.978583 kubelet[3518]: E0905 23:54:33.978238 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:33.978583 kubelet[3518]: W0905 23:54:33.978401 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:33.978583 kubelet[3518]: E0905 23:54:33.978443 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:34.025714 systemd[1]: Started cri-containerd-783b5396eba08930f9828a327d0bf605183500be75cb7eb1674512da3a9f3125.scope - libcontainer container 783b5396eba08930f9828a327d0bf605183500be75cb7eb1674512da3a9f3125.
Sep 5 23:54:34.185712 kubelet[3518]: E0905 23:54:34.185508 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5d9jn" podUID="a29a0520-465f-4a15-9908-cc439e2ca7ce" Sep 5 23:54:34.216401 containerd[2003]: time="2025-09-05T23:54:34.216294267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6x7ds,Uid:7f51bd08-2517-4f30-8fc0-858b388ecc1f,Namespace:calico-system,Attempt:0,}"
Sep 5 23:54:34.297125 kubelet[3518]: I0905 23:54:34.296804 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a29a0520-465f-4a15-9908-cc439e2ca7ce-registration-dir\") pod \"csi-node-driver-5d9jn\" (UID: \"a29a0520-465f-4a15-9908-cc439e2ca7ce\") " pod="calico-system/csi-node-driver-5d9jn" Sep 5 23:54:34.305813 kubelet[3518]: I0905 23:54:34.304904 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a29a0520-465f-4a15-9908-cc439e2ca7ce-varrun\") pod \"csi-node-driver-5d9jn\" (UID: \"a29a0520-465f-4a15-9908-cc439e2ca7ce\") " pod="calico-system/csi-node-driver-5d9jn" Sep 5 23:54:34.318934 kubelet[3518]: I0905 23:54:34.318715 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a29a0520-465f-4a15-9908-cc439e2ca7ce-kubelet-dir\") pod \"csi-node-driver-5d9jn\" (UID: \"a29a0520-465f-4a15-9908-cc439e2ca7ce\") " pod="calico-system/csi-node-driver-5d9jn"
Sep 5 23:54:34.320111 containerd[2003]: time="2025-09-05T23:54:34.313557039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:54:34.320111 containerd[2003]: time="2025-09-05T23:54:34.313675383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:54:34.320111 containerd[2003]: time="2025-09-05T23:54:34.313718631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:54:34.320111 containerd[2003]: time="2025-09-05T23:54:34.314260107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:54:34.325229 kubelet[3518]: I0905 23:54:34.325168 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a29a0520-465f-4a15-9908-cc439e2ca7ce-socket-dir\") pod \"csi-node-driver-5d9jn\" (UID: \"a29a0520-465f-4a15-9908-cc439e2ca7ce\") " pod="calico-system/csi-node-driver-5d9jn" Sep 5 23:54:34.331625 kubelet[3518]: I0905 23:54:34.331363 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffl29\" (UniqueName: \"kubernetes.io/projected/a29a0520-465f-4a15-9908-cc439e2ca7ce-kube-api-access-ffl29\") pod \"csi-node-driver-5d9jn\" (UID: \"a29a0520-465f-4a15-9908-cc439e2ca7ce\") " pod="calico-system/csi-node-driver-5d9jn"
Sep 5 23:54:34.414196 systemd[1]: Started cri-containerd-cf7ae290dc03aa4766a709ec625d5b1fe46018da0f6ff4ce21b71b5de48ee1a3.scope - libcontainer container cf7ae290dc03aa4766a709ec625d5b1fe46018da0f6ff4ce21b71b5de48ee1a3. Sep 5 23:54:34.475378 containerd[2003]: time="2025-09-05T23:54:34.475278784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55c688fc4b-2drnc,Uid:666bbbf2-91a1-42ab-a7a8-9f941854aa1e,Namespace:calico-system,Attempt:0,} returns sandbox id \"783b5396eba08930f9828a327d0bf605183500be75cb7eb1674512da3a9f3125\""
Sep 5 23:54:34.475749 kubelet[3518]: E0905 23:54:34.475475 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:34.475749 kubelet[3518]: W0905 23:54:34.475537 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:34.475749 kubelet[3518]: E0905 23:54:34.475586 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Sep 5 23:54:34.483090 kubelet[3518]: E0905 23:54:34.482846 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:34.483090 kubelet[3518]: W0905 23:54:34.482878 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:34.483090 kubelet[3518]: E0905 23:54:34.482909 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:34.484619 containerd[2003]: time="2025-09-05T23:54:34.483318700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 23:54:34.485168 kubelet[3518]: E0905 23:54:34.484939 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:34.486169 kubelet[3518]: W0905 23:54:34.485685 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:34.486169 kubelet[3518]: E0905 23:54:34.485785 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:34.494893 kubelet[3518]: E0905 23:54:34.493515 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:34.494893 kubelet[3518]: W0905 23:54:34.493558 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:34.494893 kubelet[3518]: E0905 23:54:34.493594 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:34.495869 kubelet[3518]: E0905 23:54:34.495826 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:34.496419 kubelet[3518]: W0905 23:54:34.496322 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:34.496958 kubelet[3518]: E0905 23:54:34.496916 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:34.543988 kubelet[3518]: E0905 23:54:34.543931 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:34.544184 kubelet[3518]: W0905 23:54:34.544098 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:34.544184 kubelet[3518]: E0905 23:54:34.544143 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:54:34.605125 containerd[2003]: time="2025-09-05T23:54:34.604944149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6x7ds,Uid:7f51bd08-2517-4f30-8fc0-858b388ecc1f,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf7ae290dc03aa4766a709ec625d5b1fe46018da0f6ff4ce21b71b5de48ee1a3\"" Sep 5 23:54:35.800253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2164131697.mount: Deactivated successfully. Sep 5 23:54:36.161691 kubelet[3518]: E0905 23:54:36.159669 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5d9jn" podUID="a29a0520-465f-4a15-9908-cc439e2ca7ce" Sep 5 23:54:37.223687 containerd[2003]: time="2025-09-05T23:54:37.222406950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:37.224273 containerd[2003]: time="2025-09-05T23:54:37.224031510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 5 23:54:37.224912 containerd[2003]: time="2025-09-05T23:54:37.224843490Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:37.230446 containerd[2003]: time="2025-09-05T23:54:37.230312994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:37.233324 containerd[2003]: time="2025-09-05T23:54:37.232118346Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.748718814s" Sep 5 23:54:37.233324 containerd[2003]: time="2025-09-05T23:54:37.232191630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 5 23:54:37.236691 containerd[2003]: time="2025-09-05T23:54:37.236222166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 23:54:37.289838 containerd[2003]: time="2025-09-05T23:54:37.289512330Z" level=info msg="CreateContainer within sandbox \"783b5396eba08930f9828a327d0bf605183500be75cb7eb1674512da3a9f3125\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 23:54:37.318993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2037510326.mount: Deactivated successfully. 
Sep 5 23:54:37.324412 containerd[2003]: time="2025-09-05T23:54:37.322659702Z" level=info msg="CreateContainer within sandbox \"783b5396eba08930f9828a327d0bf605183500be75cb7eb1674512da3a9f3125\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5ac5fff899445a57296d7bc540e8acf1d85b6c4fa20ecad4bb1135f005fa0c50\"" Sep 5 23:54:37.324996 containerd[2003]: time="2025-09-05T23:54:37.324924126Z" level=info msg="StartContainer for \"5ac5fff899445a57296d7bc540e8acf1d85b6c4fa20ecad4bb1135f005fa0c50\"" Sep 5 23:54:37.395726 systemd[1]: Started cri-containerd-5ac5fff899445a57296d7bc540e8acf1d85b6c4fa20ecad4bb1135f005fa0c50.scope - libcontainer container 5ac5fff899445a57296d7bc540e8acf1d85b6c4fa20ecad4bb1135f005fa0c50. Sep 5 23:54:37.475297 containerd[2003]: time="2025-09-05T23:54:37.474888067Z" level=info msg="StartContainer for \"5ac5fff899445a57296d7bc540e8acf1d85b6c4fa20ecad4bb1135f005fa0c50\" returns successfully" Sep 5 23:54:38.160027 kubelet[3518]: E0905 23:54:38.158751 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5d9jn" podUID="a29a0520-465f-4a15-9908-cc439e2ca7ce" Sep 5 23:54:38.426945 kubelet[3518]: E0905 23:54:38.426743 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.426945 kubelet[3518]: W0905 23:54:38.426789 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.426945 kubelet[3518]: E0905 23:54:38.426827 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[… the same E/W/E FlexVolume "init" triplet resumes here and repeats roughly thirty more times through 23:54:38.551, identical except for timestamps; the one unrelated kubelet entry inside that burst is kept below …]
Sep 5 23:54:38.479812 kubelet[3518]: I0905 23:54:38.479724 3518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55c688fc4b-2drnc" podStartSLOduration=2.725102118 podStartE2EDuration="5.47747744s" podCreationTimestamp="2025-09-05 23:54:33 +0000 UTC" firstStartedPulling="2025-09-05 23:54:34.482131624 +0000 UTC m=+32.694424159" lastFinishedPulling="2025-09-05 23:54:37.234506946 +0000 UTC m=+35.446799481" observedRunningTime="2025-09-05 23:54:38.439743368 +0000 UTC m=+36.652035891" watchObservedRunningTime="2025-09-05 23:54:38.47747744 +0000 UTC m=+36.689769975"
Error: unexpected end of JSON input" Sep 5 23:54:38.505110 kubelet[3518]: E0905 23:54:38.505050 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.505110 kubelet[3518]: W0905 23:54:38.505098 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.505431 kubelet[3518]: E0905 23:54:38.505136 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:38.507264 kubelet[3518]: E0905 23:54:38.506742 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.507264 kubelet[3518]: W0905 23:54:38.506788 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.508384 kubelet[3518]: E0905 23:54:38.508161 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:38.508890 kubelet[3518]: E0905 23:54:38.508787 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.508890 kubelet[3518]: W0905 23:54:38.508831 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.508890 kubelet[3518]: E0905 23:54:38.508866 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:38.511215 kubelet[3518]: E0905 23:54:38.511147 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.511215 kubelet[3518]: W0905 23:54:38.511195 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.511529 kubelet[3518]: E0905 23:54:38.511237 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:38.514834 kubelet[3518]: E0905 23:54:38.514133 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.514834 kubelet[3518]: W0905 23:54:38.514791 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.514834 kubelet[3518]: E0905 23:54:38.514834 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:54:38.517983 kubelet[3518]: E0905 23:54:38.517885 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.517983 kubelet[3518]: W0905 23:54:38.517931 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.517983 kubelet[3518]: E0905 23:54:38.517968 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:38.521738 kubelet[3518]: E0905 23:54:38.521644 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.521738 kubelet[3518]: W0905 23:54:38.521690 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.521738 kubelet[3518]: E0905 23:54:38.521729 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:38.525162 kubelet[3518]: E0905 23:54:38.524993 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.525162 kubelet[3518]: W0905 23:54:38.525050 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.525162 kubelet[3518]: E0905 23:54:38.525087 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:38.528817 kubelet[3518]: E0905 23:54:38.528743 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.528817 kubelet[3518]: W0905 23:54:38.528795 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.529410 kubelet[3518]: E0905 23:54:38.528831 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:38.531533 kubelet[3518]: E0905 23:54:38.531476 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.531533 kubelet[3518]: W0905 23:54:38.531519 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.532676 kubelet[3518]: E0905 23:54:38.531555 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:54:38.533698 kubelet[3518]: E0905 23:54:38.533636 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.533698 kubelet[3518]: W0905 23:54:38.533680 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.534083 kubelet[3518]: E0905 23:54:38.533716 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:38.537740 kubelet[3518]: E0905 23:54:38.537675 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.537740 kubelet[3518]: W0905 23:54:38.537721 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.538693 kubelet[3518]: E0905 23:54:38.537758 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:38.541016 kubelet[3518]: E0905 23:54:38.540957 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.541016 kubelet[3518]: W0905 23:54:38.541001 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.541391 kubelet[3518]: E0905 23:54:38.541038 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:38.544705 kubelet[3518]: E0905 23:54:38.544633 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.544705 kubelet[3518]: W0905 23:54:38.544679 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.544705 kubelet[3518]: E0905 23:54:38.544715 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:38.547786 kubelet[3518]: E0905 23:54:38.547734 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.548401 kubelet[3518]: W0905 23:54:38.548260 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.548401 kubelet[3518]: E0905 23:54:38.548355 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 23:54:38.551484 kubelet[3518]: E0905 23:54:38.550061 3518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 23:54:38.551484 kubelet[3518]: W0905 23:54:38.550107 3518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 23:54:38.551484 kubelet[3518]: E0905 23:54:38.550144 3518 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 23:54:38.621159 containerd[2003]: time="2025-09-05T23:54:38.621091173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:38.625096 containerd[2003]: time="2025-09-05T23:54:38.625025625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 5 23:54:38.628369 containerd[2003]: time="2025-09-05T23:54:38.627737733Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:38.634198 containerd[2003]: time="2025-09-05T23:54:38.634122477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:38.636278 containerd[2003]: time="2025-09-05T23:54:38.636204729Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.399907191s" Sep 5 23:54:38.636534 containerd[2003]: time="2025-09-05T23:54:38.636488325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 5 23:54:38.648880 containerd[2003]: time="2025-09-05T23:54:38.648813129Z" level=info msg="CreateContainer within sandbox \"cf7ae290dc03aa4766a709ec625d5b1fe46018da0f6ff4ce21b71b5de48ee1a3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 23:54:38.682377 containerd[2003]: time="2025-09-05T23:54:38.682169193Z" level=info msg="CreateContainer within sandbox \"cf7ae290dc03aa4766a709ec625d5b1fe46018da0f6ff4ce21b71b5de48ee1a3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1a3aa969868a46ec97229eb271da87927cd87feaf65d4ee25e850b75b090d90a\"" Sep 5 23:54:38.685772 containerd[2003]: time="2025-09-05T23:54:38.685678185Z" level=info msg="StartContainer for \"1a3aa969868a46ec97229eb271da87927cd87feaf65d4ee25e850b75b090d90a\"" Sep 5 23:54:38.762772 systemd[1]: Started cri-containerd-1a3aa969868a46ec97229eb271da87927cd87feaf65d4ee25e850b75b090d90a.scope - libcontainer container 1a3aa969868a46ec97229eb271da87927cd87feaf65d4ee25e850b75b090d90a. 
Sep 5 23:54:38.828829 containerd[2003]: time="2025-09-05T23:54:38.828735130Z" level=info msg="StartContainer for \"1a3aa969868a46ec97229eb271da87927cd87feaf65d4ee25e850b75b090d90a\" returns successfully" Sep 5 23:54:38.872850 systemd[1]: cri-containerd-1a3aa969868a46ec97229eb271da87927cd87feaf65d4ee25e850b75b090d90a.scope: Deactivated successfully. Sep 5 23:54:38.928417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a3aa969868a46ec97229eb271da87927cd87feaf65d4ee25e850b75b090d90a-rootfs.mount: Deactivated successfully. Sep 5 23:54:39.275935 containerd[2003]: time="2025-09-05T23:54:39.275821460Z" level=info msg="shim disconnected" id=1a3aa969868a46ec97229eb271da87927cd87feaf65d4ee25e850b75b090d90a namespace=k8s.io Sep 5 23:54:39.275935 containerd[2003]: time="2025-09-05T23:54:39.275910812Z" level=warning msg="cleaning up after shim disconnected" id=1a3aa969868a46ec97229eb271da87927cd87feaf65d4ee25e850b75b090d90a namespace=k8s.io Sep 5 23:54:39.275935 containerd[2003]: time="2025-09-05T23:54:39.275937320Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:54:39.396796 containerd[2003]: time="2025-09-05T23:54:39.396693453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 23:54:40.160072 kubelet[3518]: E0905 23:54:40.159240 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5d9jn" podUID="a29a0520-465f-4a15-9908-cc439e2ca7ce" Sep 5 23:54:42.161003 kubelet[3518]: E0905 23:54:42.160232 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5d9jn" podUID="a29a0520-465f-4a15-9908-cc439e2ca7ce" Sep 5 23:54:42.370162 containerd[2003]: time="2025-09-05T23:54:42.370078223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:42.372095 containerd[2003]: time="2025-09-05T23:54:42.371973779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 5 23:54:42.373656 containerd[2003]: time="2025-09-05T23:54:42.372872099Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:42.377836 containerd[2003]: time="2025-09-05T23:54:42.377753064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:54:42.380252 containerd[2003]: time="2025-09-05T23:54:42.380151900Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.983353315s" Sep 5 23:54:42.380252 containerd[2003]: time="2025-09-05T23:54:42.380237460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference 
\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 5 23:54:42.389830 containerd[2003]: time="2025-09-05T23:54:42.389761044Z" level=info msg="CreateContainer within sandbox \"cf7ae290dc03aa4766a709ec625d5b1fe46018da0f6ff4ce21b71b5de48ee1a3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 23:54:42.436445 containerd[2003]: time="2025-09-05T23:54:42.436331832Z" level=info msg="CreateContainer within sandbox \"cf7ae290dc03aa4766a709ec625d5b1fe46018da0f6ff4ce21b71b5de48ee1a3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a3918c9877b7b7ec90c31462f9d892d1538b41779e6179592c54c580319dcdc6\"" Sep 5 23:54:42.439452 containerd[2003]: time="2025-09-05T23:54:42.437830596Z" level=info msg="StartContainer for \"a3918c9877b7b7ec90c31462f9d892d1538b41779e6179592c54c580319dcdc6\"" Sep 5 23:54:42.504677 systemd[1]: run-containerd-runc-k8s.io-a3918c9877b7b7ec90c31462f9d892d1538b41779e6179592c54c580319dcdc6-runc.ExV4xZ.mount: Deactivated successfully. Sep 5 23:54:42.520487 systemd[1]: Started cri-containerd-a3918c9877b7b7ec90c31462f9d892d1538b41779e6179592c54c580319dcdc6.scope - libcontainer container a3918c9877b7b7ec90c31462f9d892d1538b41779e6179592c54c580319dcdc6. Sep 5 23:54:42.602632 containerd[2003]: time="2025-09-05T23:54:42.602563165Z" level=info msg="StartContainer for \"a3918c9877b7b7ec90c31462f9d892d1538b41779e6179592c54c580319dcdc6\" returns successfully" Sep 5 23:54:43.707070 containerd[2003]: time="2025-09-05T23:54:43.706987706Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 23:54:43.712746 systemd[1]: cri-containerd-a3918c9877b7b7ec90c31462f9d892d1538b41779e6179592c54c580319dcdc6.scope: Deactivated successfully. Sep 5 23:54:43.714715 systemd[1]: cri-containerd-a3918c9877b7b7ec90c31462f9d892d1538b41779e6179592c54c580319dcdc6.scope: Consumed 1.070s CPU time. Sep 5 23:54:43.756692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3918c9877b7b7ec90c31462f9d892d1538b41779e6179592c54c580319dcdc6-rootfs.mount: Deactivated successfully. Sep 5 23:54:43.786414 kubelet[3518]: I0905 23:54:43.786023 3518 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 23:54:43.906275 systemd[1]: Created slice kubepods-burstable-pod1ce18320_0b1e_4d63_958f_2d9f5f435dca.slice - libcontainer container kubepods-burstable-pod1ce18320_0b1e_4d63_958f_2d9f5f435dca.slice. Sep 5 23:54:43.940506 systemd[1]: Created slice kubepods-besteffort-pod4f3e7d36_4ee1_4ff9_b3d4_3dc6513e06bf.slice - libcontainer container kubepods-besteffort-pod4f3e7d36_4ee1_4ff9_b3d4_3dc6513e06bf.slice. 
Sep 5 23:54:43.954777 kubelet[3518]: I0905 23:54:43.954715 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af3665b2-d693-4ba1-907b-899c82f2055d-tigera-ca-bundle\") pod \"calico-kube-controllers-65bc579fb7-j8488\" (UID: \"af3665b2-d693-4ba1-907b-899c82f2055d\") " pod="calico-system/calico-kube-controllers-65bc579fb7-j8488" Sep 5 23:54:43.974849 kubelet[3518]: I0905 23:54:43.954827 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ce18320-0b1e-4d63-958f-2d9f5f435dca-config-volume\") pod \"coredns-674b8bbfcf-tpnrx\" (UID: \"1ce18320-0b1e-4d63-958f-2d9f5f435dca\") " pod="kube-system/coredns-674b8bbfcf-tpnrx" Sep 5 23:54:43.974849 kubelet[3518]: I0905 23:54:43.954917 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c6cs\" (UniqueName: \"kubernetes.io/projected/1ce18320-0b1e-4d63-958f-2d9f5f435dca-kube-api-access-7c6cs\") pod \"coredns-674b8bbfcf-tpnrx\" (UID: \"1ce18320-0b1e-4d63-958f-2d9f5f435dca\") " pod="kube-system/coredns-674b8bbfcf-tpnrx" Sep 5 23:54:43.974849 kubelet[3518]: I0905 23:54:43.954995 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngx69\" (UniqueName: \"kubernetes.io/projected/af3665b2-d693-4ba1-907b-899c82f2055d-kube-api-access-ngx69\") pod \"calico-kube-controllers-65bc579fb7-j8488\" (UID: \"af3665b2-d693-4ba1-907b-899c82f2055d\") " pod="calico-system/calico-kube-controllers-65bc579fb7-j8488" Sep 5 23:54:44.025245 systemd[1]: Created slice kubepods-besteffort-podaf3665b2_d693_4ba1_907b_899c82f2055d.slice - libcontainer container kubepods-besteffort-podaf3665b2_d693_4ba1_907b_899c82f2055d.slice. Sep 5 23:54:44.051516 systemd[1]: Created slice kubepods-burstable-poda0f339f8_d06b_4fc1_92df_a5d9b2d87813.slice - libcontainer container kubepods-burstable-poda0f339f8_d06b_4fc1_92df_a5d9b2d87813.slice. Sep 5 23:54:44.061587 kubelet[3518]: I0905 23:54:44.055455 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf-whisker-ca-bundle\") pod \"whisker-6d5f8bf8bf-czhd4\" (UID: \"4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf\") " pod="calico-system/whisker-6d5f8bf8bf-czhd4" Sep 5 23:54:44.061587 kubelet[3518]: I0905 23:54:44.057427 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf-whisker-backend-key-pair\") pod \"whisker-6d5f8bf8bf-czhd4\" (UID: \"4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf\") " pod="calico-system/whisker-6d5f8bf8bf-czhd4" Sep 5 23:54:44.061587 kubelet[3518]: I0905 23:54:44.057667 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjt7z\" (UniqueName: \"kubernetes.io/projected/4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf-kube-api-access-mjt7z\") pod \"whisker-6d5f8bf8bf-czhd4\" (UID: \"4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf\") " pod="calico-system/whisker-6d5f8bf8bf-czhd4" Sep 5 23:54:44.129499 systemd[1]: Created slice kubepods-besteffort-podb45a6056_0154_4b46_9f54_64314ddc0dd5.slice - libcontainer container kubepods-besteffort-podb45a6056_0154_4b46_9f54_64314ddc0dd5.slice. 
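
A reading aid for the reconciler entries here and below: the UniqueName prefix names the volume plugin — kubernetes.io/configmap for ConfigMap volumes (tigera-ca-bundle, config-volume), kubernetes.io/secret for Secret volumes (whisker-backend-key-pair, calico-apiserver-certs, goldmane-key-pair), and kubernetes.io/projected for the kube-api-access-* volumes, the projected service-account token that every pod receives.
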
Sep 5 23:54:44.159082 kubelet[3518]: I0905 23:54:44.158145 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a0f339f8-d06b-4fc1-92df-a5d9b2d87813-config-volume\") pod \"coredns-674b8bbfcf-g4vsz\" (UID: \"a0f339f8-d06b-4fc1-92df-a5d9b2d87813\") " pod="kube-system/coredns-674b8bbfcf-g4vsz" Sep 5 23:54:44.159082 kubelet[3518]: I0905 23:54:44.158230 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mcj8\" (UniqueName: \"kubernetes.io/projected/a0f339f8-d06b-4fc1-92df-a5d9b2d87813-kube-api-access-2mcj8\") pod \"coredns-674b8bbfcf-g4vsz\" (UID: \"a0f339f8-d06b-4fc1-92df-a5d9b2d87813\") " pod="kube-system/coredns-674b8bbfcf-g4vsz" Sep 5 23:54:44.159082 kubelet[3518]: I0905 23:54:44.158318 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdmq7\" (UniqueName: \"kubernetes.io/projected/b45a6056-0154-4b46-9f54-64314ddc0dd5-kube-api-access-fdmq7\") pod \"calico-apiserver-85cb674cb8-xmj4t\" (UID: \"b45a6056-0154-4b46-9f54-64314ddc0dd5\") " pod="calico-apiserver/calico-apiserver-85cb674cb8-xmj4t" Sep 5 23:54:44.159082 kubelet[3518]: I0905 23:54:44.158486 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b45a6056-0154-4b46-9f54-64314ddc0dd5-calico-apiserver-certs\") pod \"calico-apiserver-85cb674cb8-xmj4t\" (UID: \"b45a6056-0154-4b46-9f54-64314ddc0dd5\") " pod="calico-apiserver/calico-apiserver-85cb674cb8-xmj4t" Sep 5 23:54:44.217049 containerd[2003]: time="2025-09-05T23:54:44.216951805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tpnrx,Uid:1ce18320-0b1e-4d63-958f-2d9f5f435dca,Namespace:kube-system,Attempt:0,}" Sep 5 23:54:44.262586 kubelet[3518]: I0905 23:54:44.262262 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbzxx\" (UniqueName: \"kubernetes.io/projected/f77d155b-2827-48f6-a494-70cb819e25d7-kube-api-access-hbzxx\") pod \"goldmane-54d579b49d-jlk4g\" (UID: \"f77d155b-2827-48f6-a494-70cb819e25d7\") " pod="calico-system/goldmane-54d579b49d-jlk4g" Sep 5 23:54:44.262586 kubelet[3518]: I0905 23:54:44.262421 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f77d155b-2827-48f6-a494-70cb819e25d7-config\") pod \"goldmane-54d579b49d-jlk4g\" (UID: \"f77d155b-2827-48f6-a494-70cb819e25d7\") " pod="calico-system/goldmane-54d579b49d-jlk4g" Sep 5 23:54:44.262586 kubelet[3518]: I0905 23:54:44.262474 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f77d155b-2827-48f6-a494-70cb819e25d7-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-jlk4g\" (UID: \"f77d155b-2827-48f6-a494-70cb819e25d7\") " pod="calico-system/goldmane-54d579b49d-jlk4g" Sep 5 23:54:44.262586 kubelet[3518]: I0905 23:54:44.262529 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f77d155b-2827-48f6-a494-70cb819e25d7-goldmane-key-pair\") pod \"goldmane-54d579b49d-jlk4g\" (UID: \"f77d155b-2827-48f6-a494-70cb819e25d7\") " pod="calico-system/goldmane-54d579b49d-jlk4g" Sep 5 23:54:44.266047 
containerd[2003]: time="2025-09-05T23:54:44.265567465Z" level=info msg="shim disconnected" id=a3918c9877b7b7ec90c31462f9d892d1538b41779e6179592c54c580319dcdc6 namespace=k8s.io Sep 5 23:54:44.267318 containerd[2003]: time="2025-09-05T23:54:44.266570845Z" level=warning msg="cleaning up after shim disconnected" id=a3918c9877b7b7ec90c31462f9d892d1538b41779e6179592c54c580319dcdc6 namespace=k8s.io Sep 5 23:54:44.267318 containerd[2003]: time="2025-09-05T23:54:44.267226513Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:54:44.290621 systemd[1]: Created slice kubepods-besteffort-podf77d155b_2827_48f6_a494_70cb819e25d7.slice - libcontainer container kubepods-besteffort-podf77d155b_2827_48f6_a494_70cb819e25d7.slice. Sep 5 23:54:44.293759 containerd[2003]: time="2025-09-05T23:54:44.292541893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d5f8bf8bf-czhd4,Uid:4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf,Namespace:calico-system,Attempt:0,}" Sep 5 23:54:44.347308 systemd[1]: Created slice kubepods-besteffort-pod69af1b0c_a845_4919_910b_83540ca47865.slice - libcontainer container kubepods-besteffort-pod69af1b0c_a845_4919_910b_83540ca47865.slice. Sep 5 23:54:44.363547 kubelet[3518]: I0905 23:54:44.363487 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/69af1b0c-a845-4919-910b-83540ca47865-calico-apiserver-certs\") pod \"calico-apiserver-85cb674cb8-skhtj\" (UID: \"69af1b0c-a845-4919-910b-83540ca47865\") " pod="calico-apiserver/calico-apiserver-85cb674cb8-skhtj" Sep 5 23:54:44.363991 kubelet[3518]: I0905 23:54:44.363945 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v78xn\" (UniqueName: \"kubernetes.io/projected/69af1b0c-a845-4919-910b-83540ca47865-kube-api-access-v78xn\") pod \"calico-apiserver-85cb674cb8-skhtj\" (UID: \"69af1b0c-a845-4919-910b-83540ca47865\") " pod="calico-apiserver/calico-apiserver-85cb674cb8-skhtj" Sep 5 23:54:44.365850 containerd[2003]: time="2025-09-05T23:54:44.365670925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65bc579fb7-j8488,Uid:af3665b2-d693-4ba1-907b-899c82f2055d,Namespace:calico-system,Attempt:0,}" Sep 5 23:54:44.412418 systemd[1]: Created slice kubepods-besteffort-poda29a0520_465f_4a15_9908_cc439e2ca7ce.slice - libcontainer container kubepods-besteffort-poda29a0520_465f_4a15_9908_cc439e2ca7ce.slice. 
Sep 5 23:54:44.444928 containerd[2003]: time="2025-09-05T23:54:44.444859490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5d9jn,Uid:a29a0520-465f-4a15-9908-cc439e2ca7ce,Namespace:calico-system,Attempt:0,}" Sep 5 23:54:44.449706 containerd[2003]: time="2025-09-05T23:54:44.449631806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85cb674cb8-xmj4t,Uid:b45a6056-0154-4b46-9f54-64314ddc0dd5,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:54:44.636795 containerd[2003]: time="2025-09-05T23:54:44.635835639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jlk4g,Uid:f77d155b-2827-48f6-a494-70cb819e25d7,Namespace:calico-system,Attempt:0,}" Sep 5 23:54:44.666824 containerd[2003]: time="2025-09-05T23:54:44.666757131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g4vsz,Uid:a0f339f8-d06b-4fc1-92df-a5d9b2d87813,Namespace:kube-system,Attempt:0,}" Sep 5 23:54:44.704258 containerd[2003]: time="2025-09-05T23:54:44.704067375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85cb674cb8-skhtj,Uid:69af1b0c-a845-4919-910b-83540ca47865,Namespace:calico-apiserver,Attempt:0,}" Sep 5 23:54:44.969400 containerd[2003]: time="2025-09-05T23:54:44.965963692Z" level=error msg="Failed to destroy network for sandbox \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:44.972263 containerd[2003]: time="2025-09-05T23:54:44.970645768Z" level=error msg="Failed to destroy network for sandbox \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 23:54:44.979036 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668-shm.mount: Deactivated successfully. 
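
The two "Failed to destroy network" errors above, and the sandbox failures that follow, share the root cause spelled out in the message itself: Calico's CNI plugin needs /var/lib/calico/nodename, a file the calico/node container writes once it is running, and these sandbox operations simply raced ahead of it. A hedged sketch of the precondition being enforced — the path comes from the log; the helper is illustrative, not Calico source:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // calicoNodename reproduces the check implied by the error text: without
    // /var/lib/calico/nodename there is no node identity to attach endpoints to.
    func calicoNodename() (string, error) {
        b, err := os.ReadFile("/var/lib/calico/nodename")
        if err != nil {
            return "", fmt.Errorf("stat /var/lib/calico/nodename: %w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        name, err := calicoNodename()
        fmt.Println(name, err)
    }
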
Sep 5 23:54:44.981908 containerd[2003]: time="2025-09-05T23:54:44.980240092Z" level=error msg="encountered an error cleaning up failed sandbox \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:44.981908 containerd[2003]: time="2025-09-05T23:54:44.980397964Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65bc579fb7-j8488,Uid:af3665b2-d693-4ba1-907b-899c82f2055d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:44.989111 kubelet[3518]: E0905 23:54:44.982526 3518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:44.989111 kubelet[3518]: E0905 23:54:44.982643 3518 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65bc579fb7-j8488"
Sep 5 23:54:44.989111 kubelet[3518]: E0905 23:54:44.982680 3518 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65bc579fb7-j8488"
Sep 5 23:54:44.992007 kubelet[3518]: E0905 23:54:44.982764 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65bc579fb7-j8488_calico-system(af3665b2-d693-4ba1-907b-899c82f2055d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65bc579fb7-j8488_calico-system(af3665b2-d693-4ba1-907b-899c82f2055d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65bc579fb7-j8488" podUID="af3665b2-d693-4ba1-907b-899c82f2055d"
Sep 5 23:54:44.998935 containerd[2003]: time="2025-09-05T23:54:44.996447437Z" level=error msg="encountered an error cleaning up failed sandbox \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:44.996595 containerd[2003]: time="2025-09-05T23:54:44.996595421Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d5f8bf8bf-czhd4,Uid:4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:44.998140 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7-shm.mount: Deactivated successfully.
Sep 5 23:54:45.001311 kubelet[3518]: E0905 23:54:44.996966 3518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.001311 kubelet[3518]: E0905 23:54:44.997058 3518 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d5f8bf8bf-czhd4"
Sep 5 23:54:45.001311 kubelet[3518]: E0905 23:54:44.997093 3518 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d5f8bf8bf-czhd4"
Sep 5 23:54:45.002565 kubelet[3518]: E0905 23:54:44.997172 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d5f8bf8bf-czhd4_calico-system(4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d5f8bf8bf-czhd4_calico-system(4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d5f8bf8bf-czhd4" podUID="4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf"
Sep 5 23:54:45.002807 containerd[2003]: time="2025-09-05T23:54:45.001723405Z" level=error msg="Failed to destroy network for sandbox \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.008410 containerd[2003]: time="2025-09-05T23:54:45.007727821Z" level=error msg="encountered an error cleaning up failed sandbox \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.008410 containerd[2003]: time="2025-09-05T23:54:45.007843981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tpnrx,Uid:1ce18320-0b1e-4d63-958f-2d9f5f435dca,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.013439 kubelet[3518]: E0905 23:54:45.009720 3518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.013439 kubelet[3518]: E0905 23:54:45.009816 3518 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tpnrx"
Sep 5 23:54:45.013439 kubelet[3518]: E0905 23:54:45.009854 3518 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tpnrx"
Sep 5 23:54:45.013927 kubelet[3518]: E0905 23:54:45.009944 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tpnrx_kube-system(1ce18320-0b1e-4d63-958f-2d9f5f435dca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tpnrx_kube-system(1ce18320-0b1e-4d63-958f-2d9f5f435dca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tpnrx" podUID="1ce18320-0b1e-4d63-958f-2d9f5f435dca"
Sep 5 23:54:45.019931 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb-shm.mount: Deactivated successfully.
Sep 5 23:54:45.073067 containerd[2003]: time="2025-09-05T23:54:45.072994813Z" level=error msg="Failed to destroy network for sandbox \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.075120 containerd[2003]: time="2025-09-05T23:54:45.075040333Z" level=error msg="encountered an error cleaning up failed sandbox \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.075847 containerd[2003]: time="2025-09-05T23:54:45.075502213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85cb674cb8-xmj4t,Uid:b45a6056-0154-4b46-9f54-64314ddc0dd5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.076123 kubelet[3518]: E0905 23:54:45.075857 3518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.076123 kubelet[3518]: E0905 23:54:45.075944 3518 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85cb674cb8-xmj4t"
Sep 5 23:54:45.076123 kubelet[3518]: E0905 23:54:45.075981 3518 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85cb674cb8-xmj4t"
Sep 5 23:54:45.077391 kubelet[3518]: E0905 23:54:45.076090 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85cb674cb8-xmj4t_calico-apiserver(b45a6056-0154-4b46-9f54-64314ddc0dd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85cb674cb8-xmj4t_calico-apiserver(b45a6056-0154-4b46-9f54-64314ddc0dd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85cb674cb8-xmj4t" podUID="b45a6056-0154-4b46-9f54-64314ddc0dd5"
Sep 5 23:54:45.137673 containerd[2003]: time="2025-09-05T23:54:45.137561053Z" level=error msg="Failed to destroy network for sandbox \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.140248 containerd[2003]: time="2025-09-05T23:54:45.139885645Z" level=error msg="encountered an error cleaning up failed sandbox \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.142520 containerd[2003]: time="2025-09-05T23:54:45.141677257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85cb674cb8-skhtj,Uid:69af1b0c-a845-4919-910b-83540ca47865,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.143068 kubelet[3518]: E0905 23:54:45.143006 3518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.144081 kubelet[3518]: E0905 23:54:45.143388 3518 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85cb674cb8-skhtj"
Sep 5 23:54:45.144081 kubelet[3518]: E0905 23:54:45.143551 3518 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85cb674cb8-skhtj"
Sep 5 23:54:45.144081 kubelet[3518]: E0905 23:54:45.143656 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85cb674cb8-skhtj_calico-apiserver(69af1b0c-a845-4919-910b-83540ca47865)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85cb674cb8-skhtj_calico-apiserver(69af1b0c-a845-4919-910b-83540ca47865)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85cb674cb8-skhtj" podUID="69af1b0c-a845-4919-910b-83540ca47865"
Sep 5 23:54:45.166899 containerd[2003]: time="2025-09-05T23:54:45.166785925Z" level=error msg="Failed to destroy network for sandbox \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.168646 containerd[2003]: time="2025-09-05T23:54:45.168433357Z" level=error msg="encountered an error cleaning up failed sandbox \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.169058 containerd[2003]: time="2025-09-05T23:54:45.168805717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5d9jn,Uid:a29a0520-465f-4a15-9908-cc439e2ca7ce,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.169629 kubelet[3518]: E0905 23:54:45.169571 3518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.169897 kubelet[3518]: E0905 23:54:45.169854 3518 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5d9jn"
Sep 5 23:54:45.170053 kubelet[3518]: E0905 23:54:45.170019 3518 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5d9jn"
Sep 5 23:54:45.170303 kubelet[3518]: E0905 23:54:45.170249 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5d9jn_calico-system(a29a0520-465f-4a15-9908-cc439e2ca7ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5d9jn_calico-system(a29a0520-465f-4a15-9908-cc439e2ca7ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5d9jn" podUID="a29a0520-465f-4a15-9908-cc439e2ca7ce"
Sep 5 23:54:45.191280 containerd[2003]: time="2025-09-05T23:54:45.191067709Z" level=error msg="Failed to destroy network for sandbox \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.193230 containerd[2003]: time="2025-09-05T23:54:45.193116242Z" level=error msg="encountered an error cleaning up failed sandbox \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.193717 containerd[2003]: time="2025-09-05T23:54:45.193506350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g4vsz,Uid:a0f339f8-d06b-4fc1-92df-a5d9b2d87813,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.194313 kubelet[3518]: E0905 23:54:45.194235 3518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.194523 kubelet[3518]: E0905 23:54:45.194330 3518 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-g4vsz"
Sep 5 23:54:45.194523 kubelet[3518]: E0905 23:54:45.194421 3518 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-g4vsz"
Sep 5 23:54:45.194671 kubelet[3518]: E0905 23:54:45.194520 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-g4vsz_kube-system(a0f339f8-d06b-4fc1-92df-a5d9b2d87813)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-g4vsz_kube-system(a0f339f8-d06b-4fc1-92df-a5d9b2d87813)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-g4vsz" podUID="a0f339f8-d06b-4fc1-92df-a5d9b2d87813"
Sep 5 23:54:45.199144 containerd[2003]: time="2025-09-05T23:54:45.198691718Z" level=error msg="Failed to destroy network for sandbox \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.199981 containerd[2003]: time="2025-09-05T23:54:45.199700894Z" level=error msg="encountered an error cleaning up failed sandbox \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.199981 containerd[2003]: time="2025-09-05T23:54:45.199806866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jlk4g,Uid:f77d155b-2827-48f6-a494-70cb819e25d7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.200862 kubelet[3518]: E0905 23:54:45.200546 3518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.200862 kubelet[3518]: E0905 23:54:45.200644 3518 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-jlk4g"
Sep 5 23:54:45.200862 kubelet[3518]: E0905 23:54:45.200682 3518 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-jlk4g"
Sep 5 23:54:45.201141 kubelet[3518]: E0905 23:54:45.200769 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-jlk4g_calico-system(f77d155b-2827-48f6-a494-70cb819e25d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-jlk4g_calico-system(f77d155b-2827-48f6-a494-70cb819e25d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-jlk4g" podUID="f77d155b-2827-48f6-a494-70cb819e25d7"
Sep 5 23:54:45.435212 kubelet[3518]: I0905 23:54:45.435158 3518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303"
Sep 5 23:54:45.438372 containerd[2003]: time="2025-09-05T23:54:45.437572851Z" level=info msg="StopPodSandbox for \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\""
Sep 5 23:54:45.438372 containerd[2003]: time="2025-09-05T23:54:45.437951031Z" level=info msg="Ensure that sandbox 3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303 in task-service has been cleanup successfully"
Sep 5 23:54:45.440918 kubelet[3518]: I0905 23:54:45.439907 3518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668"
Sep 5 23:54:45.443708 containerd[2003]: time="2025-09-05T23:54:45.443643327Z" level=info msg="StopPodSandbox for \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\""
Sep 5 23:54:45.445140 containerd[2003]: time="2025-09-05T23:54:45.445044663Z" level=info msg="Ensure that sandbox 87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668 in task-service has been cleanup successfully"
Sep 5 23:54:45.463395 kubelet[3518]: I0905 23:54:45.463291 3518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840"
Sep 5 23:54:45.464888 containerd[2003]: time="2025-09-05T23:54:45.464502555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 5 23:54:45.470076 containerd[2003]: time="2025-09-05T23:54:45.469983855Z" level=info msg="StopPodSandbox for \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\""
Sep 5 23:54:45.474235 containerd[2003]: time="2025-09-05T23:54:45.473761611Z" level=info msg="Ensure that sandbox 8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840 in task-service has been cleanup successfully"
Sep 5 23:54:45.484389 kubelet[3518]: I0905 23:54:45.483719 3518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c"
Sep 5 23:54:45.491415 containerd[2003]: time="2025-09-05T23:54:45.488185551Z" level=info msg="StopPodSandbox for \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\""
Sep 5 23:54:45.491553 kubelet[3518]: I0905 23:54:45.490946 3518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7"
Sep 5 23:54:45.492507 containerd[2003]: time="2025-09-05T23:54:45.492423855Z" level=info msg="StopPodSandbox for \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\""
Sep 5 23:54:45.492849 containerd[2003]: time="2025-09-05T23:54:45.492767019Z" level=info msg="Ensure that sandbox e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7 in task-service has been cleanup successfully"
Sep 5 23:54:45.494652 containerd[2003]: time="2025-09-05T23:54:45.494585595Z" level=info msg="Ensure that sandbox 9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c in task-service has been cleanup successfully"
Sep 5 23:54:45.518902 kubelet[3518]: I0905 23:54:45.518824 3518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e"
Sep 5 23:54:45.522332 containerd[2003]: time="2025-09-05T23:54:45.522237903Z" level=info msg="StopPodSandbox for \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\""
Sep 5 23:54:45.522823 containerd[2003]: time="2025-09-05T23:54:45.522625011Z" level=info msg="Ensure that sandbox c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e in task-service has been cleanup successfully"
Sep 5 23:54:45.546171 kubelet[3518]: I0905 23:54:45.546067 3518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7"
Sep 5 23:54:45.553386 containerd[2003]: time="2025-09-05T23:54:45.551405583Z" level=info msg="StopPodSandbox for \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\""
Sep 5 23:54:45.555435 containerd[2003]: time="2025-09-05T23:54:45.555316467Z" level=info msg="Ensure that sandbox 79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7 in task-service has been cleanup successfully"
Sep 5 23:54:45.570088 kubelet[3518]: I0905 23:54:45.568707 3518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb"
Sep 5 23:54:45.584111 containerd[2003]: time="2025-09-05T23:54:45.581104671Z" level=info msg="StopPodSandbox for \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\""
Sep 5 23:54:45.584111 containerd[2003]: time="2025-09-05T23:54:45.581543775Z" level=info msg="Ensure that sandbox 15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb in task-service has been cleanup successfully"
Sep 5 23:54:45.764213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303-shm.mount: Deactivated successfully.
Sep 5 23:54:45.764488 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840-shm.mount: Deactivated successfully.
Sep 5 23:54:45.764644 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7-shm.mount: Deactivated successfully.
Sep 5 23:54:45.764798 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c-shm.mount: Deactivated successfully.
Sep 5 23:54:45.764941 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e-shm.mount: Deactivated successfully.
Sep 5 23:54:45.866762 containerd[2003]: time="2025-09-05T23:54:45.865881161Z" level=error msg="StopPodSandbox for \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\" failed" error="failed to destroy network for sandbox \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.867425 kubelet[3518]: E0905 23:54:45.866229 3518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303"
Sep 5 23:54:45.867425 kubelet[3518]: E0905 23:54:45.866321 3518 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303"}
Sep 5 23:54:45.867425 kubelet[3518]: E0905 23:54:45.866446 3518 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69af1b0c-a845-4919-910b-83540ca47865\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 5 23:54:45.867425 kubelet[3518]: E0905 23:54:45.866506 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69af1b0c-a845-4919-910b-83540ca47865\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85cb674cb8-skhtj" podUID="69af1b0c-a845-4919-910b-83540ca47865"
Sep 5 23:54:45.880539 containerd[2003]: time="2025-09-05T23:54:45.880377557Z" level=error msg="StopPodSandbox for \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\" failed" error="failed to destroy network for sandbox \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.881217 kubelet[3518]: E0905 23:54:45.880981 3518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668"
Sep 5 23:54:45.881217 kubelet[3518]: E0905 23:54:45.881062 3518 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668"}
Sep 5 23:54:45.881217 kubelet[3518]: E0905 23:54:45.881119 3518 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"af3665b2-d693-4ba1-907b-899c82f2055d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 5 23:54:45.881217 kubelet[3518]: E0905 23:54:45.881161 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"af3665b2-d693-4ba1-907b-899c82f2055d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65bc579fb7-j8488" podUID="af3665b2-d693-4ba1-907b-899c82f2055d"
Sep 5 23:54:45.945908 containerd[2003]: time="2025-09-05T23:54:45.944397869Z" level=error msg="StopPodSandbox for \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\" failed" error="failed to destroy network for sandbox \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.946119 kubelet[3518]: E0905 23:54:45.945373 3518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c"
Sep 5 23:54:45.946119 kubelet[3518]: E0905 23:54:45.945444 3518 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c"}
Sep 5 23:54:45.946119 kubelet[3518]: E0905 23:54:45.945506 3518 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a29a0520-465f-4a15-9908-cc439e2ca7ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 5 23:54:45.946119 kubelet[3518]: E0905 23:54:45.945549 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a29a0520-465f-4a15-9908-cc439e2ca7ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5d9jn" podUID="a29a0520-465f-4a15-9908-cc439e2ca7ce"
Sep 5 23:54:45.976423 containerd[2003]: time="2025-09-05T23:54:45.972080573Z" level=error msg="StopPodSandbox for \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\" failed" error="failed to destroy network for sandbox \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.977715 kubelet[3518]: E0905 23:54:45.974675 3518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840"
Sep 5 23:54:45.977715 kubelet[3518]: E0905 23:54:45.975022 3518 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840"}
Sep 5 23:54:45.977715 kubelet[3518]: E0905 23:54:45.975086 3518 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a0f339f8-d06b-4fc1-92df-a5d9b2d87813\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 5 23:54:45.977715 kubelet[3518]: E0905 23:54:45.975135 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a0f339f8-d06b-4fc1-92df-a5d9b2d87813\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-g4vsz" podUID="a0f339f8-d06b-4fc1-92df-a5d9b2d87813"
Sep 5 23:54:45.994387 containerd[2003]: time="2025-09-05T23:54:45.993620045Z" level=error msg="StopPodSandbox for \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\" failed" error="failed to destroy network for sandbox \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.994530 kubelet[3518]: E0905 23:54:45.993975 3518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7"
Sep 5 23:54:45.994530 kubelet[3518]: E0905 23:54:45.994049 3518 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7"}
Sep 5 23:54:45.994530 kubelet[3518]: E0905 23:54:45.994104 3518 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f77d155b-2827-48f6-a494-70cb819e25d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 5 23:54:45.994530 kubelet[3518]: E0905 23:54:45.994159 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f77d155b-2827-48f6-a494-70cb819e25d7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-jlk4g" podUID="f77d155b-2827-48f6-a494-70cb819e25d7"
Sep 5 23:54:45.996783 containerd[2003]: time="2025-09-05T23:54:45.995591825Z" level=error msg="StopPodSandbox for \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\" failed" error="failed to destroy network for sandbox \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.997213 kubelet[3518]: E0905 23:54:45.995983 3518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e"
Sep 5 23:54:45.997213 kubelet[3518]: E0905 23:54:45.996056 3518 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e"}
Sep 5 23:54:45.997213 kubelet[3518]: E0905 23:54:45.996110 3518 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b45a6056-0154-4b46-9f54-64314ddc0dd5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 5 23:54:45.997213 kubelet[3518]: E0905 23:54:45.996151 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b45a6056-0154-4b46-9f54-64314ddc0dd5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85cb674cb8-xmj4t" podUID="b45a6056-0154-4b46-9f54-64314ddc0dd5"
Sep 5 23:54:45.998150 containerd[2003]: time="2025-09-05T23:54:45.997960889Z" level=error msg="StopPodSandbox for \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\" failed" error="failed to destroy network for sandbox \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:45.998822 kubelet[3518]: E0905 23:54:45.998562 3518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7"
Sep 5 23:54:45.998822 kubelet[3518]: E0905 23:54:45.998638 3518 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7"}
Sep 5 23:54:45.998822 kubelet[3518]: E0905 23:54:45.998694 3518 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 5 23:54:45.998822 kubelet[3518]: E0905 23:54:45.998747 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d5f8bf8bf-czhd4" podUID="4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf"
Sep 5 23:54:46.005401 containerd[2003]: time="2025-09-05T23:54:46.005102774Z" level=error msg="StopPodSandbox for \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\" failed" error="failed to destroy network for sandbox \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 23:54:46.006013 kubelet[3518]: E0905 23:54:46.005751 3518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb"
Sep 5 23:54:46.006013 kubelet[3518]: E0905 23:54:46.005826 3518 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb"}
Sep 5 23:54:46.006013 kubelet[3518]: E0905 23:54:46.005880 3518 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1ce18320-0b1e-4d63-958f-2d9f5f435dca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 5 23:54:46.006013 kubelet[3518]: E0905 23:54:46.005931 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1ce18320-0b1e-4d63-958f-2d9f5f435dca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tpnrx" podUID="1ce18320-0b1e-4d63-958f-2d9f5f435dca"
Sep 5 23:54:52.529061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount80168659.mount: Deactivated successfully.
Sep 5 23:54:52.593216 containerd[2003]: time="2025-09-05T23:54:52.593152678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:54:52.595424 containerd[2003]: time="2025-09-05T23:54:52.595333894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457"
Sep 5 23:54:52.598165 containerd[2003]: time="2025-09-05T23:54:52.598099642Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:54:52.603320 containerd[2003]: time="2025-09-05T23:54:52.603249070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:54:52.604628 containerd[2003]: time="2025-09-05T23:54:52.604564486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 7.139843651s"
Sep 5 23:54:52.604783 containerd[2003]: time="2025-09-05T23:54:52.604629370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\""
Sep 5 23:54:52.677333 containerd[2003]: time="2025-09-05T23:54:52.677080415Z" level=info msg="CreateContainer within sandbox \"cf7ae290dc03aa4766a709ec625d5b1fe46018da0f6ff4ce21b71b5de48ee1a3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep 5 23:54:52.732960 containerd[2003]: time="2025-09-05T23:54:52.732623951Z" level=info msg="CreateContainer within sandbox \"cf7ae290dc03aa4766a709ec625d5b1fe46018da0f6ff4ce21b71b5de48ee1a3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d80c0113a97a238675038e468fcd099e4787d762608a13ebd013d9f470e25a58\""
Sep 5 23:54:52.734824 containerd[2003]: time="2025-09-05T23:54:52.734715803Z" level=info msg="StartContainer for \"d80c0113a97a238675038e468fcd099e4787d762608a13ebd013d9f470e25a58\""
Sep 5 23:54:52.791446 systemd[1]: Started cri-containerd-d80c0113a97a238675038e468fcd099e4787d762608a13ebd013d9f470e25a58.scope - libcontainer container d80c0113a97a238675038e468fcd099e4787d762608a13ebd013d9f470e25a58.
Sep 5 23:54:52.862850 containerd[2003]: time="2025-09-05T23:54:52.862755516Z" level=info msg="StartContainer for \"d80c0113a97a238675038e468fcd099e4787d762608a13ebd013d9f470e25a58\" returns successfully"
Sep 5 23:54:53.146818 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 5 23:54:53.146980 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Sep 5 23:54:53.398120 containerd[2003]: time="2025-09-05T23:54:53.398038522Z" level=info msg="StopPodSandbox for \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\""
Sep 5 23:54:53.679721 kubelet[3518]: I0905 23:54:53.678668 3518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6x7ds" podStartSLOduration=2.685512331 podStartE2EDuration="20.6786354s" podCreationTimestamp="2025-09-05 23:54:33 +0000 UTC" firstStartedPulling="2025-09-05 23:54:34.614880209 +0000 UTC m=+32.827172732" lastFinishedPulling="2025-09-05 23:54:52.608003278 +0000 UTC m=+50.820295801" observedRunningTime="2025-09-05 23:54:53.675220044 +0000 UTC m=+51.887512603" watchObservedRunningTime="2025-09-05 23:54:53.6786354 +0000 UTC m=+51.890927935"
Sep 5 23:54:53.724291 systemd[1]: run-containerd-runc-k8s.io-d80c0113a97a238675038e468fcd099e4787d762608a13ebd013d9f470e25a58-runc.j0GV68.mount: Deactivated successfully.
Sep 5 23:54:53.843790 containerd[2003]: 2025-09-05 23:54:53.604 [INFO][4739] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7"
Sep 5 23:54:53.843790 containerd[2003]: 2025-09-05 23:54:53.611 [INFO][4739] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" iface="eth0" netns="/var/run/netns/cni-1df9d8cc-5648-7ebc-f643-6bfb683d46dd"
Sep 5 23:54:53.843790 containerd[2003]: 2025-09-05 23:54:53.615 [INFO][4739] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" iface="eth0" netns="/var/run/netns/cni-1df9d8cc-5648-7ebc-f643-6bfb683d46dd"
Sep 5 23:54:53.843790 containerd[2003]: 2025-09-05 23:54:53.622 [INFO][4739] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" iface="eth0" netns="/var/run/netns/cni-1df9d8cc-5648-7ebc-f643-6bfb683d46dd"
Sep 5 23:54:53.843790 containerd[2003]: 2025-09-05 23:54:53.622 [INFO][4739] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7"
Sep 5 23:54:53.843790 containerd[2003]: 2025-09-05 23:54:53.623 [INFO][4739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7"
Sep 5 23:54:53.843790 containerd[2003]: 2025-09-05 23:54:53.801 [INFO][4748] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" HandleID="k8s-pod-network.79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Workload="ip--172--31--23--98-k8s-whisker--6d5f8bf8bf--czhd4-eth0"
Sep 5 23:54:53.843790 containerd[2003]: 2025-09-05 23:54:53.802 [INFO][4748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:54:53.843790 containerd[2003]: 2025-09-05 23:54:53.802 [INFO][4748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:54:53.843790 containerd[2003]: 2025-09-05 23:54:53.830 [WARNING][4748] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" HandleID="k8s-pod-network.79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Workload="ip--172--31--23--98-k8s-whisker--6d5f8bf8bf--czhd4-eth0"
Sep 5 23:54:53.843790 containerd[2003]: 2025-09-05 23:54:53.830 [INFO][4748] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" HandleID="k8s-pod-network.79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Workload="ip--172--31--23--98-k8s-whisker--6d5f8bf8bf--czhd4-eth0"
Sep 5 23:54:53.843790 containerd[2003]: 2025-09-05 23:54:53.835 [INFO][4748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:54:53.843790 containerd[2003]: 2025-09-05 23:54:53.840 [INFO][4739] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7"
Sep 5 23:54:53.849729 containerd[2003]: time="2025-09-05T23:54:53.847061220Z" level=info msg="TearDown network for sandbox \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\" successfully"
Sep 5 23:54:53.849729 containerd[2003]: time="2025-09-05T23:54:53.847127604Z" level=info msg="StopPodSandbox for \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\" returns successfully"
Sep 5 23:54:53.855492 systemd[1]: run-netns-cni\x2d1df9d8cc\x2d5648\x2d7ebc\x2df643\x2d6bfb683d46dd.mount: Deactivated successfully.
Sep 5 23:54:53.964887 kubelet[3518]: I0905 23:54:53.963981 3518 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf-whisker-backend-key-pair\") pod \"4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf\" (UID: \"4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf\") "
Sep 5 23:54:53.964887 kubelet[3518]: I0905 23:54:53.964073 3518 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf-whisker-ca-bundle\") pod \"4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf\" (UID: \"4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf\") "
Sep 5 23:54:53.964887 kubelet[3518]: I0905 23:54:53.964134 3518 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjt7z\" (UniqueName: \"kubernetes.io/projected/4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf-kube-api-access-mjt7z\") pod \"4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf\" (UID: \"4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf\") "
Sep 5 23:54:53.973940 kubelet[3518]: I0905 23:54:53.973863 3518 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf" (UID: "4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 5 23:54:53.979507 kubelet[3518]: I0905 23:54:53.979263 3518 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf-kube-api-access-mjt7z" (OuterVolumeSpecName: "kube-api-access-mjt7z") pod "4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf" (UID: "4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf"). InnerVolumeSpecName "kube-api-access-mjt7z". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 5 23:54:53.983172 systemd[1]: var-lib-kubelet-pods-4f3e7d36\x2d4ee1\x2d4ff9\x2db3d4\x2d3dc6513e06bf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmjt7z.mount: Deactivated successfully.
Sep 5 23:54:53.987561 kubelet[3518]: I0905 23:54:53.987474 3518 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf" (UID: "4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 5 23:54:54.065230 kubelet[3518]: I0905 23:54:54.065103 3518 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf-whisker-ca-bundle\") on node \"ip-172-31-23-98\" DevicePath \"\""
Sep 5 23:54:54.065230 kubelet[3518]: I0905 23:54:54.065161 3518 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjt7z\" (UniqueName: \"kubernetes.io/projected/4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf-kube-api-access-mjt7z\") on node \"ip-172-31-23-98\" DevicePath \"\""
Sep 5 23:54:54.065230 kubelet[3518]: I0905 23:54:54.065185 3518 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf-whisker-backend-key-pair\") on node \"ip-172-31-23-98\" DevicePath \"\""
Sep 5 23:54:54.173459 systemd[1]: Removed slice kubepods-besteffort-pod4f3e7d36_4ee1_4ff9_b3d4_3dc6513e06bf.slice - libcontainer container kubepods-besteffort-pod4f3e7d36_4ee1_4ff9_b3d4_3dc6513e06bf.slice.
Sep 5 23:54:54.530025 systemd[1]: var-lib-kubelet-pods-4f3e7d36\x2d4ee1\x2d4ff9\x2db3d4\x2d3dc6513e06bf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Sep 5 23:54:54.700131 systemd[1]: run-containerd-runc-k8s.io-d80c0113a97a238675038e468fcd099e4787d762608a13ebd013d9f470e25a58-runc.lkcRgf.mount: Deactivated successfully.
Sep 5 23:54:54.801467 systemd[1]: Created slice kubepods-besteffort-pod1323d97e_709a_4cfe_815c_79ddc6dcf721.slice - libcontainer container kubepods-besteffort-pod1323d97e_709a_4cfe_815c_79ddc6dcf721.slice.
Sep 5 23:54:54.871079 kubelet[3518]: I0905 23:54:54.870939 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1323d97e-709a-4cfe-815c-79ddc6dcf721-whisker-backend-key-pair\") pod \"whisker-7ff96fb9d8-qmwmw\" (UID: \"1323d97e-709a-4cfe-815c-79ddc6dcf721\") " pod="calico-system/whisker-7ff96fb9d8-qmwmw"
Sep 5 23:54:54.871079 kubelet[3518]: I0905 23:54:54.871036 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1323d97e-709a-4cfe-815c-79ddc6dcf721-whisker-ca-bundle\") pod \"whisker-7ff96fb9d8-qmwmw\" (UID: \"1323d97e-709a-4cfe-815c-79ddc6dcf721\") " pod="calico-system/whisker-7ff96fb9d8-qmwmw"
Sep 5 23:54:54.871079 kubelet[3518]: I0905 23:54:54.871078 3518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2xfc\" (UniqueName: \"kubernetes.io/projected/1323d97e-709a-4cfe-815c-79ddc6dcf721-kube-api-access-d2xfc\") pod \"whisker-7ff96fb9d8-qmwmw\" (UID: \"1323d97e-709a-4cfe-815c-79ddc6dcf721\") " pod="calico-system/whisker-7ff96fb9d8-qmwmw"
Sep 5 23:54:55.111260 containerd[2003]: time="2025-09-05T23:54:55.110544299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7ff96fb9d8-qmwmw,Uid:1323d97e-709a-4cfe-815c-79ddc6dcf721,Namespace:calico-system,Attempt:0,}"
Sep 5 23:54:55.476950 (udev-worker)[4719]: Network interface NamePolicy= disabled on kernel command line.
Sep 5 23:54:55.480472 systemd-networkd[1935]: cali9f55ae7a108: Link UP
Sep 5 23:54:55.481004 systemd-networkd[1935]: cali9f55ae7a108: Gained carrier
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.220 [INFO][4841] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.273 [INFO][4841] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--98-k8s-whisker--7ff96fb9d8--qmwmw-eth0 whisker-7ff96fb9d8- calico-system 1323d97e-709a-4cfe-815c-79ddc6dcf721 962 0 2025-09-05 23:54:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7ff96fb9d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-23-98 whisker-7ff96fb9d8-qmwmw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9f55ae7a108 [] [] }} ContainerID="22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" Namespace="calico-system" Pod="whisker-7ff96fb9d8-qmwmw" WorkloadEndpoint="ip--172--31--23--98-k8s-whisker--7ff96fb9d8--qmwmw-"
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.273 [INFO][4841] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" Namespace="calico-system" Pod="whisker-7ff96fb9d8-qmwmw" WorkloadEndpoint="ip--172--31--23--98-k8s-whisker--7ff96fb9d8--qmwmw-eth0"
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.363 [INFO][4879] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" HandleID="k8s-pod-network.22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" Workload="ip--172--31--23--98-k8s-whisker--7ff96fb9d8--qmwmw-eth0"
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.363 [INFO][4879] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" HandleID="k8s-pod-network.22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" Workload="ip--172--31--23--98-k8s-whisker--7ff96fb9d8--qmwmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032cdd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-98", "pod":"whisker-7ff96fb9d8-qmwmw", "timestamp":"2025-09-05 23:54:55.363630396 +0000 UTC"}, Hostname:"ip-172-31-23-98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.364 [INFO][4879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.364 [INFO][4879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.364 [INFO][4879] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-98'
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.383 [INFO][4879] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" host="ip-172-31-23-98"
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.396 [INFO][4879] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-98"
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.411 [INFO][4879] ipam/ipam.go 511: Trying affinity for 192.168.30.128/26 host="ip-172-31-23-98"
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.416 [INFO][4879] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.128/26 host="ip-172-31-23-98"
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.421 [INFO][4879] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ip-172-31-23-98"
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.421 [INFO][4879] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" host="ip-172-31-23-98"
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.424 [INFO][4879] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.433 [INFO][4879] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" host="ip-172-31-23-98"
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.452 [INFO][4879] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.30.129/26] block=192.168.30.128/26 handle="k8s-pod-network.22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" host="ip-172-31-23-98"
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.452 [INFO][4879] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.129/26] handle="k8s-pod-network.22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" host="ip-172-31-23-98"
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.452 [INFO][4879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:54:55.541637 containerd[2003]: 2025-09-05 23:54:55.452 [INFO][4879] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.129/26] IPv6=[] ContainerID="22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" HandleID="k8s-pod-network.22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" Workload="ip--172--31--23--98-k8s-whisker--7ff96fb9d8--qmwmw-eth0"
Sep 5 23:54:55.549328 containerd[2003]: 2025-09-05 23:54:55.456 [INFO][4841] cni-plugin/k8s.go 418: Populated endpoint ContainerID="22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" Namespace="calico-system" Pod="whisker-7ff96fb9d8-qmwmw" WorkloadEndpoint="ip--172--31--23--98-k8s-whisker--7ff96fb9d8--qmwmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-whisker--7ff96fb9d8--qmwmw-eth0", GenerateName:"whisker-7ff96fb9d8-", Namespace:"calico-system", SelfLink:"", UID:"1323d97e-709a-4cfe-815c-79ddc6dcf721", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7ff96fb9d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"", Pod:"whisker-7ff96fb9d8-qmwmw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.30.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9f55ae7a108", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:54:55.549328 containerd[2003]: 2025-09-05 23:54:55.457 [INFO][4841] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.129/32] ContainerID="22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" Namespace="calico-system" Pod="whisker-7ff96fb9d8-qmwmw" WorkloadEndpoint="ip--172--31--23--98-k8s-whisker--7ff96fb9d8--qmwmw-eth0"
Sep 5 23:54:55.549328 containerd[2003]: 2025-09-05 23:54:55.458 [INFO][4841] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f55ae7a108 ContainerID="22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" Namespace="calico-system" Pod="whisker-7ff96fb9d8-qmwmw" WorkloadEndpoint="ip--172--31--23--98-k8s-whisker--7ff96fb9d8--qmwmw-eth0"
Sep 5 23:54:55.549328 containerd[2003]: 2025-09-05 23:54:55.484 [INFO][4841] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" Namespace="calico-system" Pod="whisker-7ff96fb9d8-qmwmw" WorkloadEndpoint="ip--172--31--23--98-k8s-whisker--7ff96fb9d8--qmwmw-eth0"
Sep 5 23:54:55.549328 containerd[2003]: 2025-09-05 23:54:55.485 [INFO][4841] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" Namespace="calico-system" Pod="whisker-7ff96fb9d8-qmwmw" WorkloadEndpoint="ip--172--31--23--98-k8s-whisker--7ff96fb9d8--qmwmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-whisker--7ff96fb9d8--qmwmw-eth0", GenerateName:"whisker-7ff96fb9d8-", Namespace:"calico-system", SelfLink:"", UID:"1323d97e-709a-4cfe-815c-79ddc6dcf721", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7ff96fb9d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e", Pod:"whisker-7ff96fb9d8-qmwmw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.30.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9f55ae7a108", MAC:"7e:c2:67:2f:fc:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:54:55.549328 containerd[2003]: 2025-09-05 23:54:55.529 [INFO][4841] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e" Namespace="calico-system" Pod="whisker-7ff96fb9d8-qmwmw" WorkloadEndpoint="ip--172--31--23--98-k8s-whisker--7ff96fb9d8--qmwmw-eth0"
Sep 5 23:54:55.621310 containerd[2003]: time="2025-09-05T23:54:55.619950877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:54:55.621310 containerd[2003]: time="2025-09-05T23:54:55.620058733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:54:55.621310 containerd[2003]: time="2025-09-05T23:54:55.620103229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:54:55.621310 containerd[2003]: time="2025-09-05T23:54:55.620283289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:54:55.686982 systemd[1]: Started cri-containerd-22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e.scope - libcontainer container 22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e.
Sep 5 23:54:55.987701 containerd[2003]: time="2025-09-05T23:54:55.987599547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7ff96fb9d8-qmwmw,Uid:1323d97e-709a-4cfe-815c-79ddc6dcf721,Namespace:calico-system,Attempt:0,} returns sandbox id \"22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e\""
Sep 5 23:54:55.994740 containerd[2003]: time="2025-09-05T23:54:55.994435311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 5 23:54:56.169413 kubelet[3518]: I0905 23:54:56.166150 3518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf" path="/var/lib/kubelet/pods/4f3e7d36-4ee1-4ff9-b3d4-3dc6513e06bf/volumes"
Sep 5 23:54:56.569384 kernel: bpftool[5003]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Sep 5 23:54:56.943886 systemd-networkd[1935]: vxlan.calico: Link UP
Sep 5 23:54:56.943907 systemd-networkd[1935]: vxlan.calico: Gained carrier
Sep 5 23:54:57.011214 (udev-worker)[4718]: Network interface NamePolicy= disabled on kernel command line.
Sep 5 23:54:57.061596 systemd-networkd[1935]: cali9f55ae7a108: Gained IPv6LL
Sep 5 23:54:57.162817 containerd[2003]: time="2025-09-05T23:54:57.162693445Z" level=info msg="StopPodSandbox for \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\""
Sep 5 23:54:57.164757 containerd[2003]: time="2025-09-05T23:54:57.162964633Z" level=info msg="StopPodSandbox for \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\""
Sep 5 23:54:57.513911 containerd[2003]: 2025-09-05 23:54:57.344 [INFO][5054] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7"
Sep 5 23:54:57.513911 containerd[2003]: 2025-09-05 23:54:57.344 [INFO][5054] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" iface="eth0" netns="/var/run/netns/cni-cfd78619-fd41-2bf5-3179-b418b17ee905"
Sep 5 23:54:57.513911 containerd[2003]: 2025-09-05 23:54:57.345 [INFO][5054] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" iface="eth0" netns="/var/run/netns/cni-cfd78619-fd41-2bf5-3179-b418b17ee905"
Sep 5 23:54:57.513911 containerd[2003]: 2025-09-05 23:54:57.354 [INFO][5054] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" iface="eth0" netns="/var/run/netns/cni-cfd78619-fd41-2bf5-3179-b418b17ee905"
Sep 5 23:54:57.513911 containerd[2003]: 2025-09-05 23:54:57.355 [INFO][5054] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7"
Sep 5 23:54:57.513911 containerd[2003]: 2025-09-05 23:54:57.355 [INFO][5054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7"
Sep 5 23:54:57.513911 containerd[2003]: 2025-09-05 23:54:57.447 [INFO][5075] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" HandleID="k8s-pod-network.e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Workload="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0"
Sep 5 23:54:57.513911 containerd[2003]: 2025-09-05 23:54:57.448 [INFO][5075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:54:57.513911 containerd[2003]: 2025-09-05 23:54:57.448 [INFO][5075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:54:57.513911 containerd[2003]: 2025-09-05 23:54:57.486 [WARNING][5075] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" HandleID="k8s-pod-network.e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Workload="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0"
Sep 5 23:54:57.513911 containerd[2003]: 2025-09-05 23:54:57.487 [INFO][5075] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" HandleID="k8s-pod-network.e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Workload="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0"
Sep 5 23:54:57.513911 containerd[2003]: 2025-09-05 23:54:57.494 [INFO][5075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:54:57.513911 containerd[2003]: 2025-09-05 23:54:57.503 [INFO][5054] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7"
Sep 5 23:54:57.523438 containerd[2003]: time="2025-09-05T23:54:57.521499195Z" level=info msg="TearDown network for sandbox \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\" successfully"
Sep 5 23:54:57.523438 containerd[2003]: time="2025-09-05T23:54:57.521562027Z" level=info msg="StopPodSandbox for \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\" returns successfully"
Sep 5 23:54:57.523640 systemd[1]: run-netns-cni\x2dcfd78619\x2dfd41\x2d2bf5\x2d3179\x2db418b17ee905.mount: Deactivated successfully.
Sep 5 23:54:57.531040 containerd[2003]: time="2025-09-05T23:54:57.530106903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jlk4g,Uid:f77d155b-2827-48f6-a494-70cb819e25d7,Namespace:calico-system,Attempt:1,}"
Sep 5 23:54:57.598438 containerd[2003]: 2025-09-05 23:54:57.384 [INFO][5062] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303"
Sep 5 23:54:57.598438 containerd[2003]: 2025-09-05 23:54:57.384 [INFO][5062] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" iface="eth0" netns="/var/run/netns/cni-c1b8e710-d98d-28c4-8649-828cd625b736"
Sep 5 23:54:57.598438 containerd[2003]: 2025-09-05 23:54:57.385 [INFO][5062] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" iface="eth0" netns="/var/run/netns/cni-c1b8e710-d98d-28c4-8649-828cd625b736"
Sep 5 23:54:57.598438 containerd[2003]: 2025-09-05 23:54:57.392 [INFO][5062] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" iface="eth0" netns="/var/run/netns/cni-c1b8e710-d98d-28c4-8649-828cd625b736"
Sep 5 23:54:57.598438 containerd[2003]: 2025-09-05 23:54:57.392 [INFO][5062] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303"
Sep 5 23:54:57.598438 containerd[2003]: 2025-09-05 23:54:57.392 [INFO][5062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303"
Sep 5 23:54:57.598438 containerd[2003]: 2025-09-05 23:54:57.510 [INFO][5080] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" HandleID="k8s-pod-network.3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0"
Sep 5 23:54:57.598438 containerd[2003]: 2025-09-05 23:54:57.512 [INFO][5080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:54:57.598438 containerd[2003]: 2025-09-05 23:54:57.512 [INFO][5080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:54:57.598438 containerd[2003]: 2025-09-05 23:54:57.556 [WARNING][5080] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" HandleID="k8s-pod-network.3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0"
Sep 5 23:54:57.598438 containerd[2003]: 2025-09-05 23:54:57.556 [INFO][5080] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" HandleID="k8s-pod-network.3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0"
Sep 5 23:54:57.598438 containerd[2003]: 2025-09-05 23:54:57.563 [INFO][5080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:54:57.598438 containerd[2003]: 2025-09-05 23:54:57.574 [INFO][5062] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303"
Sep 5 23:54:57.600819 containerd[2003]: time="2025-09-05T23:54:57.599334111Z" level=info msg="TearDown network for sandbox \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\" successfully"
Sep 5 23:54:57.600819 containerd[2003]: time="2025-09-05T23:54:57.599412711Z" level=info msg="StopPodSandbox for \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\" returns successfully"
Sep 5 23:54:57.602570 containerd[2003]: time="2025-09-05T23:54:57.602017707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85cb674cb8-skhtj,Uid:69af1b0c-a845-4919-910b-83540ca47865,Namespace:calico-apiserver,Attempt:1,}"
Sep 5 23:54:57.625508 systemd[1]: run-netns-cni\x2dc1b8e710\x2dd98d\x2d28c4\x2d8649\x2d828cd625b736.mount: Deactivated successfully.
Sep 5 23:54:58.163681 containerd[2003]: time="2025-09-05T23:54:58.163498430Z" level=info msg="StopPodSandbox for \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\""
Sep 5 23:54:58.174654 containerd[2003]: time="2025-09-05T23:54:58.173964938Z" level=info msg="StopPodSandbox for \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\""
Sep 5 23:54:58.205602 containerd[2003]: time="2025-09-05T23:54:58.203087210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606"
Sep 5 23:54:58.222057 containerd[2003]: time="2025-09-05T23:54:58.221555306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:54:58.268889 containerd[2003]: time="2025-09-05T23:54:58.268705874Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:54:58.292848 containerd[2003]: time="2025-09-05T23:54:58.292779051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 2.298273732s"
Sep 5 23:54:58.295493 containerd[2003]: time="2025-09-05T23:54:58.294572067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\""
Sep 5 23:54:58.299767 containerd[2003]: time="2025-09-05T23:54:58.299395467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 23:54:58.341799 systemd-networkd[1935]: vxlan.calico: Gained IPv6LL
Sep 5 23:54:58.353637 containerd[2003]: time="2025-09-05T23:54:58.353417631Z" level=info msg="CreateContainer within sandbox \"22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 5 23:54:58.354478 systemd-networkd[1935]: cali12011a860df: Link UP
Sep 5 23:54:58.358656 systemd-networkd[1935]: cali12011a860df: Gained carrier
Sep 5 23:54:58.413015 containerd[2003]: time="2025-09-05T23:54:58.412918263Z" level=info msg="CreateContainer within sandbox \"22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"caaff9a54c527da41584f0a22706c757de8144d9b06b72f3a3a34fa1e6492de0\""
Sep 5 23:54:58.414863 containerd[2003]: time="2025-09-05T23:54:58.414614487Z" level=info msg="StartContainer for \"caaff9a54c527da41584f0a22706c757de8144d9b06b72f3a3a34fa1e6492de0\""
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:57.820 [INFO][5101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0 goldmane-54d579b49d- calico-system f77d155b-2827-48f6-a494-70cb819e25d7 975 0 2025-09-05 23:54:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-23-98 goldmane-54d579b49d-jlk4g eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali12011a860df [] [] }} ContainerID="054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" Namespace="calico-system" Pod="goldmane-54d579b49d-jlk4g" WorkloadEndpoint="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-"
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:57.821 [INFO][5101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" Namespace="calico-system" Pod="goldmane-54d579b49d-jlk4g" WorkloadEndpoint="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0"
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.035 [INFO][5133] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" HandleID="k8s-pod-network.054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" Workload="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0"
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.050 [INFO][5133] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" HandleID="k8s-pod-network.054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" Workload="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000326e90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-98", "pod":"goldmane-54d579b49d-jlk4g", "timestamp":"2025-09-05 23:54:58.035902753 +0000 UTC"}, Hostname:"ip-172-31-23-98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.051 [INFO][5133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.051 [INFO][5133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.051 [INFO][5133] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-98'
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.095 [INFO][5133] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" host="ip-172-31-23-98"
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.122 [INFO][5133] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-98"
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.143 [INFO][5133] ipam/ipam.go 511: Trying affinity for 192.168.30.128/26 host="ip-172-31-23-98"
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.151 [INFO][5133] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.128/26 host="ip-172-31-23-98"
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.159 [INFO][5133] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ip-172-31-23-98"
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.161 [INFO][5133] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" host="ip-172-31-23-98"
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.182 [INFO][5133] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.219 [INFO][5133] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" host="ip-172-31-23-98"
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.257 [INFO][5133] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.30.130/26] block=192.168.30.128/26 handle="k8s-pod-network.054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" host="ip-172-31-23-98"
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.258 [INFO][5133] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.130/26] handle="k8s-pod-network.054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" host="ip-172-31-23-98"
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.259 [INFO][5133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:54:58.477017 containerd[2003]: 2025-09-05 23:54:58.261 [INFO][5133] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.130/26] IPv6=[] ContainerID="054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" HandleID="k8s-pod-network.054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" Workload="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0"
Sep 5 23:54:58.482022 containerd[2003]: 2025-09-05 23:54:58.292 [INFO][5101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" Namespace="calico-system" Pod="goldmane-54d579b49d-jlk4g" WorkloadEndpoint="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f77d155b-2827-48f6-a494-70cb819e25d7", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"", Pod:"goldmane-54d579b49d-jlk4g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.30.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali12011a860df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:54:58.482022 containerd[2003]: 2025-09-05 23:54:58.293 [INFO][5101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.130/32] ContainerID="054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" Namespace="calico-system" Pod="goldmane-54d579b49d-jlk4g" WorkloadEndpoint="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0"
Sep 5 23:54:58.482022 containerd[2003]: 2025-09-05 23:54:58.296 [INFO][5101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12011a860df ContainerID="054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" Namespace="calico-system" Pod="goldmane-54d579b49d-jlk4g" WorkloadEndpoint="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0"
Sep 5 23:54:58.482022 containerd[2003]: 2025-09-05 23:54:58.363 [INFO][5101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" Namespace="calico-system" Pod="goldmane-54d579b49d-jlk4g" WorkloadEndpoint="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0"
Sep 5 23:54:58.482022 containerd[2003]: 2025-09-05 23:54:58.390 [INFO][5101] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" Namespace="calico-system" Pod="goldmane-54d579b49d-jlk4g" WorkloadEndpoint="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f77d155b-2827-48f6-a494-70cb819e25d7", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9", Pod:"goldmane-54d579b49d-jlk4g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.30.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali12011a860df", MAC:"76:c3:c8:46:fe:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:54:58.482022 containerd[2003]: 2025-09-05 23:54:58.451 [INFO][5101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9" Namespace="calico-system" Pod="goldmane-54d579b49d-jlk4g" WorkloadEndpoint="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0"
Sep 5 23:54:58.620895 systemd-networkd[1935]: calia08a393e916: Link UP
Sep 5 23:54:58.626058 systemd-networkd[1935]: calia08a393e916: Gained carrier
Sep 5 23:54:58.672415 containerd[2003]: time="2025-09-05T23:54:58.669664732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:54:58.672415 containerd[2003]: time="2025-09-05T23:54:58.669894904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:54:58.672415 containerd[2003]: time="2025-09-05T23:54:58.669937192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:54:58.672415 containerd[2003]: time="2025-09-05T23:54:58.670135648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:57.951 [INFO][5116] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0 calico-apiserver-85cb674cb8- calico-apiserver 69af1b0c-a845-4919-910b-83540ca47865 976 0 2025-09-05 23:54:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85cb674cb8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-98 calico-apiserver-85cb674cb8-skhtj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia08a393e916 [] [] }} ContainerID="130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" Namespace="calico-apiserver" Pod="calico-apiserver-85cb674cb8-skhtj" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-"
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:57.951 [INFO][5116] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" Namespace="calico-apiserver" Pod="calico-apiserver-85cb674cb8-skhtj" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0"
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.170 [INFO][5145] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" HandleID="k8s-pod-network.130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0"
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.170 [INFO][5145] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" HandleID="k8s-pod-network.130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000247470), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-98", "pod":"calico-apiserver-85cb674cb8-skhtj", "timestamp":"2025-09-05 23:54:58.17007515 +0000 UTC"}, Hostname:"ip-172-31-23-98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.170 [INFO][5145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.259 [INFO][5145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.261 [INFO][5145] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-98'
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.325 [INFO][5145] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" host="ip-172-31-23-98"
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.363 [INFO][5145] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-98"
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.400 [INFO][5145] ipam/ipam.go 511: Trying affinity for 192.168.30.128/26 host="ip-172-31-23-98"
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.416 [INFO][5145] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.128/26 host="ip-172-31-23-98"
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.435 [INFO][5145] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ip-172-31-23-98"
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.435 [INFO][5145] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" host="ip-172-31-23-98"
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.474 [INFO][5145] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.507 [INFO][5145] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" host="ip-172-31-23-98"
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.581 [INFO][5145] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.30.131/26] block=192.168.30.128/26 handle="k8s-pod-network.130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" host="ip-172-31-23-98"
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.581 [INFO][5145] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.131/26] handle="k8s-pod-network.130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" host="ip-172-31-23-98"
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.581 [INFO][5145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:54:58.757870 containerd[2003]: 2025-09-05 23:54:58.581 [INFO][5145] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.131/26] IPv6=[] ContainerID="130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" HandleID="k8s-pod-network.130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0"
Sep 5 23:54:58.761197 containerd[2003]: 2025-09-05 23:54:58.600 [INFO][5116] cni-plugin/k8s.go 418: Populated endpoint ContainerID="130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" Namespace="calico-apiserver" Pod="calico-apiserver-85cb674cb8-skhtj" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0", GenerateName:"calico-apiserver-85cb674cb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"69af1b0c-a845-4919-910b-83540ca47865", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85cb674cb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"", Pod:"calico-apiserver-85cb674cb8-skhtj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia08a393e916", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:54:58.761197 containerd[2003]: 2025-09-05 23:54:58.600 [INFO][5116] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.131/32] ContainerID="130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" Namespace="calico-apiserver" Pod="calico-apiserver-85cb674cb8-skhtj" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0"
Sep 5 23:54:58.761197 containerd[2003]: 2025-09-05 23:54:58.601 [INFO][5116] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia08a393e916 ContainerID="130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" Namespace="calico-apiserver" Pod="calico-apiserver-85cb674cb8-skhtj" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0"
Sep 5 23:54:58.761197 containerd[2003]: 2025-09-05 23:54:58.632 [INFO][5116] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" Namespace="calico-apiserver" Pod="calico-apiserver-85cb674cb8-skhtj" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0"
Sep 5 23:54:58.761197 containerd[2003]: 2025-09-05 23:54:58.652 [INFO][5116] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" Namespace="calico-apiserver" Pod="calico-apiserver-85cb674cb8-skhtj" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0", GenerateName:"calico-apiserver-85cb674cb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"69af1b0c-a845-4919-910b-83540ca47865", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85cb674cb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b", Pod:"calico-apiserver-85cb674cb8-skhtj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia08a393e916", MAC:"4a:79:11:43:cc:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 23:54:58.761197 containerd[2003]: 2025-09-05 23:54:58.730 [INFO][5116] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b" Namespace="calico-apiserver" Pod="calico-apiserver-85cb674cb8-skhtj" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0"
Sep 5 23:54:58.831848 systemd[1]: Started cri-containerd-caaff9a54c527da41584f0a22706c757de8144d9b06b72f3a3a34fa1e6492de0.scope - libcontainer container caaff9a54c527da41584f0a22706c757de8144d9b06b72f3a3a34fa1e6492de0.
Sep 5 23:54:58.866694 systemd[1]: Started cri-containerd-054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9.scope - libcontainer container 054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9.
Sep 5 23:54:58.995158 containerd[2003]: time="2025-09-05T23:54:58.992918010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 23:54:58.995158 containerd[2003]: time="2025-09-05T23:54:58.993077910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 23:54:58.995158 containerd[2003]: time="2025-09-05T23:54:58.993118362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:54:59.000266 containerd[2003]: time="2025-09-05T23:54:58.993331290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 23:54:59.060532 containerd[2003]: 2025-09-05 23:54:58.680 [INFO][5179] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668"
Sep 5 23:54:59.060532 containerd[2003]: 2025-09-05 23:54:58.698 [INFO][5179] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" iface="eth0" netns="/var/run/netns/cni-c8fe4ab3-c7d2-3d72-8cd2-29d353e32cd4"
Sep 5 23:54:59.060532 containerd[2003]: 2025-09-05 23:54:58.700 [INFO][5179] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" iface="eth0" netns="/var/run/netns/cni-c8fe4ab3-c7d2-3d72-8cd2-29d353e32cd4"
Sep 5 23:54:59.060532 containerd[2003]: 2025-09-05 23:54:58.707 [INFO][5179] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" iface="eth0" netns="/var/run/netns/cni-c8fe4ab3-c7d2-3d72-8cd2-29d353e32cd4"
Sep 5 23:54:59.060532 containerd[2003]: 2025-09-05 23:54:58.707 [INFO][5179] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668"
Sep 5 23:54:59.060532 containerd[2003]: 2025-09-05 23:54:58.707 [INFO][5179] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668"
Sep 5 23:54:59.060532 containerd[2003]: 2025-09-05 23:54:58.997 [INFO][5244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" HandleID="k8s-pod-network.87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Workload="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0"
Sep 5 23:54:59.060532 containerd[2003]: 2025-09-05 23:54:59.001 [INFO][5244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:54:59.060532 containerd[2003]: 2025-09-05 23:54:59.001 [INFO][5244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:54:59.060532 containerd[2003]: 2025-09-05 23:54:59.030 [WARNING][5244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" HandleID="k8s-pod-network.87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Workload="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0"
Sep 5 23:54:59.060532 containerd[2003]: 2025-09-05 23:54:59.030 [INFO][5244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" HandleID="k8s-pod-network.87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Workload="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0"
Sep 5 23:54:59.060532 containerd[2003]: 2025-09-05 23:54:59.038 [INFO][5244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 23:54:59.060532 containerd[2003]: 2025-09-05 23:54:59.050 [INFO][5179] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668"
Sep 5 23:54:59.063892 containerd[2003]: time="2025-09-05T23:54:59.061441190Z" level=info msg="TearDown network for sandbox \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\" successfully"
Sep 5 23:54:59.063892 containerd[2003]: time="2025-09-05T23:54:59.061490942Z" level=info msg="StopPodSandbox for \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\" returns successfully"
Sep 5 23:54:59.077043 containerd[2003]: time="2025-09-05T23:54:59.076226726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65bc579fb7-j8488,Uid:af3665b2-d693-4ba1-907b-899c82f2055d,Namespace:calico-system,Attempt:1,}"
Sep 5 23:54:59.122727 systemd[1]: Started cri-containerd-130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b.scope - libcontainer container 130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b.
Sep 5 23:54:59.149572 containerd[2003]: 2025-09-05 23:54:58.684 [INFO][5180] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840"
Sep 5 23:54:59.149572 containerd[2003]: 2025-09-05 23:54:58.685 [INFO][5180] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" iface="eth0" netns="/var/run/netns/cni-bc5554e3-ce6c-17bc-9d71-5c8cd832ef79"
Sep 5 23:54:59.149572 containerd[2003]: 2025-09-05 23:54:58.688 [INFO][5180] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" iface="eth0" netns="/var/run/netns/cni-bc5554e3-ce6c-17bc-9d71-5c8cd832ef79"
Sep 5 23:54:59.149572 containerd[2003]: 2025-09-05 23:54:58.693 [INFO][5180] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" iface="eth0" netns="/var/run/netns/cni-bc5554e3-ce6c-17bc-9d71-5c8cd832ef79"
Sep 5 23:54:59.149572 containerd[2003]: 2025-09-05 23:54:58.696 [INFO][5180] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840"
Sep 5 23:54:59.149572 containerd[2003]: 2025-09-05 23:54:58.696 [INFO][5180] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840"
Sep 5 23:54:59.149572 containerd[2003]: 2025-09-05 23:54:59.023 [INFO][5243] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" HandleID="k8s-pod-network.8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0"
Sep 5 23:54:59.149572 containerd[2003]: 2025-09-05 23:54:59.025 [INFO][5243] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 23:54:59.149572 containerd[2003]: 2025-09-05 23:54:59.042 [INFO][5243] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 23:54:59.149572 containerd[2003]: 2025-09-05 23:54:59.087 [WARNING][5243] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" HandleID="k8s-pod-network.8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" Sep 5 23:54:59.149572 containerd[2003]: 2025-09-05 23:54:59.087 [INFO][5243] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" HandleID="k8s-pod-network.8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" Sep 5 23:54:59.149572 containerd[2003]: 2025-09-05 23:54:59.100 [INFO][5243] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:54:59.149572 containerd[2003]: 2025-09-05 23:54:59.116 [INFO][5180] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Sep 5 23:54:59.160664 containerd[2003]: time="2025-09-05T23:54:59.153543567Z" level=info msg="TearDown network for sandbox \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\" successfully" Sep 5 23:54:59.160664 containerd[2003]: time="2025-09-05T23:54:59.153600171Z" level=info msg="StopPodSandbox for \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\" returns successfully" Sep 5 23:54:59.160664 containerd[2003]: time="2025-09-05T23:54:59.156601299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g4vsz,Uid:a0f339f8-d06b-4fc1-92df-a5d9b2d87813,Namespace:kube-system,Attempt:1,}" Sep 5 23:54:59.167824 containerd[2003]: time="2025-09-05T23:54:59.165460251Z" level=info msg="StopPodSandbox for \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\"" Sep 5 23:54:59.545011 systemd[1]: run-netns-cni\x2dbc5554e3\x2dce6c\x2d17bc\x2d9d71\x2d5c8cd832ef79.mount: Deactivated successfully. Sep 5 23:54:59.545252 systemd[1]: run-netns-cni\x2dc8fe4ab3\x2dc7d2\x2d3d72\x2d8cd2\x2d29d353e32cd4.mount: Deactivated successfully. 
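The run-netns-cni\x2d... mount units above are systemd's escaped spelling of the CNI netns paths logged during teardown: '/' separators become '-', and literal '-' characters become \x2d. A minimal, self-contained Go sketch of the reverse mapping (illustrative only; not part of Calico or systemd):

// unescape.go — illustrative: decode a systemd mount unit name such as
// "run-netns-cni\x2dbc5554e3\x2dce6c\x2d17bc\x2d9d71\x2d5c8cd832ef79.mount"
// back into the filesystem path it was generated from.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func unescapeUnitPath(unit string) string {
	unit = strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	b.WriteByte('/') // mount unit names are relative to the filesystem root
	for i := 0; i < len(unit); i++ {
		if strings.HasPrefix(unit[i:], `\x`) && i+4 <= len(unit) {
			if v, err := strconv.ParseUint(unit[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v)) // \x2d -> '-'
				i += 3
				continue
			}
		}
		if unit[i] == '-' {
			b.WriteByte('/') // an unescaped '-' is a path separator
		} else {
			b.WriteByte(unit[i])
		}
	}
	return b.String()
}

func main() {
	for _, arg := range os.Args[1:] {
		fmt.Println(unescapeUnitPath(arg))
	}
}

Fed the first unit name above, this prints /run/netns/cni-bc5554e3-ce6c-17bc-9d71-5c8cd832ef79, matching the netns the CNI plugin entered during the delete.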
Sep 5 23:54:59.595486 containerd[2003]: time="2025-09-05T23:54:59.594478241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85cb674cb8-skhtj,Uid:69af1b0c-a845-4919-910b-83540ca47865,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b\"" Sep 5 23:54:59.603446 containerd[2003]: time="2025-09-05T23:54:59.603369605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jlk4g,Uid:f77d155b-2827-48f6-a494-70cb819e25d7,Namespace:calico-system,Attempt:1,} returns sandbox id \"054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9\"" Sep 5 23:54:59.644512 containerd[2003]: time="2025-09-05T23:54:59.644455877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 23:54:59.648508 containerd[2003]: time="2025-09-05T23:54:59.648426437Z" level=info msg="StartContainer for \"caaff9a54c527da41584f0a22706c757de8144d9b06b72f3a3a34fa1e6492de0\" returns successfully" Sep 5 23:54:59.836693 systemd-networkd[1935]: calidb4f94c45d7: Link UP Sep 5 23:54:59.842445 systemd-networkd[1935]: calidb4f94c45d7: Gained carrier Sep 5 23:54:59.878682 systemd-networkd[1935]: calia08a393e916: Gained IPv6LL Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.439 [INFO][5323] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0 calico-kube-controllers-65bc579fb7- calico-system af3665b2-d693-4ba1-907b-899c82f2055d 990 0 2025-09-05 23:54:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65bc579fb7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-98 calico-kube-controllers-65bc579fb7-j8488 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidb4f94c45d7 [] [] }} ContainerID="4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" Namespace="calico-system" Pod="calico-kube-controllers-65bc579fb7-j8488" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-" Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.439 [INFO][5323] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" Namespace="calico-system" Pod="calico-kube-controllers-65bc579fb7-j8488" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0" Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.688 [INFO][5365] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" HandleID="k8s-pod-network.4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" Workload="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0" Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.689 [INFO][5365] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" HandleID="k8s-pod-network.4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" Workload="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033c7e0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-98", "pod":"calico-kube-controllers-65bc579fb7-j8488", "timestamp":"2025-09-05 23:54:59.687442997 +0000 UTC"}, Hostname:"ip-172-31-23-98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.689 [INFO][5365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.689 [INFO][5365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.689 [INFO][5365] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-98' Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.732 [INFO][5365] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" host="ip-172-31-23-98" Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.761 [INFO][5365] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-98" Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.776 [INFO][5365] ipam/ipam.go 511: Trying affinity for 192.168.30.128/26 host="ip-172-31-23-98" Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.783 [INFO][5365] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.128/26 host="ip-172-31-23-98" Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.789 [INFO][5365] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ip-172-31-23-98" Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.793 [INFO][5365] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" host="ip-172-31-23-98" Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.797 [INFO][5365] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57 Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.806 [INFO][5365] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" host="ip-172-31-23-98" Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.822 [INFO][5365] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.30.132/26] block=192.168.30.128/26 handle="k8s-pod-network.4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" host="ip-172-31-23-98" Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.822 [INFO][5365] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.132/26] handle="k8s-pod-network.4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" host="ip-172-31-23-98" Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.823 [INFO][5365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
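Several CNI add and teardown flows interleave in the stream above (note the bracketed [5116]/[5179]/[5365]-style tags), so the practical way to follow one sandbox end to end is to key on its 64-hex ContainerID. A rough standalone filter along those lines, assuming journal-style input on stdin; the regexp is ad hoc for the lines shown here, not a Calico format guarantee:

// sandboxtrace.go — illustrative: pull level, source, and ContainerID out of
// Calico CNI log lines like the ones above so one sandbox's add/teardown
// lifecycle can be read in isolation.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches e.g.: [INFO][5179] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87ab..."
var line = regexp.MustCompile(`\[(INFO|WARNING|ERROR)\]\[\d+\]\s+(\S+)\s+\d+:\s+(.*?)\s+ContainerID="([0-9a-f]{64})"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // containerd log lines here are very long
	for sc.Scan() {
		if m := line.FindStringSubmatch(sc.Text()); m != nil {
			// level, truncated container ID, source file, message
			fmt.Printf("%-7s %.12s %-28s %s\n", m[1], m[4], m[2], m[3])
		}
	}
}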
Sep 5 23:54:59.896097 containerd[2003]: 2025-09-05 23:54:59.823 [INFO][5365] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.132/26] IPv6=[] ContainerID="4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" HandleID="k8s-pod-network.4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" Workload="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0" Sep 5 23:54:59.897544 containerd[2003]: 2025-09-05 23:54:59.829 [INFO][5323] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" Namespace="calico-system" Pod="calico-kube-controllers-65bc579fb7-j8488" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0", GenerateName:"calico-kube-controllers-65bc579fb7-", Namespace:"calico-system", SelfLink:"", UID:"af3665b2-d693-4ba1-907b-899c82f2055d", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65bc579fb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"", Pod:"calico-kube-controllers-65bc579fb7-j8488", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidb4f94c45d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:59.897544 containerd[2003]: 2025-09-05 23:54:59.830 [INFO][5323] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.132/32] ContainerID="4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" Namespace="calico-system" Pod="calico-kube-controllers-65bc579fb7-j8488" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0" Sep 5 23:54:59.897544 containerd[2003]: 2025-09-05 23:54:59.830 [INFO][5323] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb4f94c45d7 ContainerID="4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" Namespace="calico-system" Pod="calico-kube-controllers-65bc579fb7-j8488" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0" Sep 5 23:54:59.897544 containerd[2003]: 2025-09-05 23:54:59.848 [INFO][5323] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" Namespace="calico-system" Pod="calico-kube-controllers-65bc579fb7-j8488" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0" Sep 5 23:54:59.897544 containerd[2003]: 2025-09-05 
23:54:59.849 [INFO][5323] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" Namespace="calico-system" Pod="calico-kube-controllers-65bc579fb7-j8488" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0", GenerateName:"calico-kube-controllers-65bc579fb7-", Namespace:"calico-system", SelfLink:"", UID:"af3665b2-d693-4ba1-907b-899c82f2055d", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65bc579fb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57", Pod:"calico-kube-controllers-65bc579fb7-j8488", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidb4f94c45d7", MAC:"5e:19:c3:4c:03:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:54:59.897544 containerd[2003]: 2025-09-05 23:54:59.887 [INFO][5323] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" Namespace="calico-system" Pod="calico-kube-controllers-65bc579fb7-j8488" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0" Sep 5 23:54:59.915206 containerd[2003]: 2025-09-05 23:54:59.468 [INFO][5342] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Sep 5 23:54:59.915206 containerd[2003]: 2025-09-05 23:54:59.474 [INFO][5342] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" iface="eth0" netns="/var/run/netns/cni-60f05d0c-90ab-208e-8437-c2f7c44e30ff" Sep 5 23:54:59.915206 containerd[2003]: 2025-09-05 23:54:59.475 [INFO][5342] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" iface="eth0" netns="/var/run/netns/cni-60f05d0c-90ab-208e-8437-c2f7c44e30ff" Sep 5 23:54:59.915206 containerd[2003]: 2025-09-05 23:54:59.476 [INFO][5342] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" iface="eth0" netns="/var/run/netns/cni-60f05d0c-90ab-208e-8437-c2f7c44e30ff" Sep 5 23:54:59.915206 containerd[2003]: 2025-09-05 23:54:59.479 [INFO][5342] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Sep 5 23:54:59.915206 containerd[2003]: 2025-09-05 23:54:59.482 [INFO][5342] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Sep 5 23:54:59.915206 containerd[2003]: 2025-09-05 23:54:59.700 [INFO][5375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" HandleID="k8s-pod-network.15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" Sep 5 23:54:59.915206 containerd[2003]: 2025-09-05 23:54:59.700 [INFO][5375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:54:59.915206 containerd[2003]: 2025-09-05 23:54:59.823 [INFO][5375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:54:59.915206 containerd[2003]: 2025-09-05 23:54:59.878 [WARNING][5375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" HandleID="k8s-pod-network.15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" Sep 5 23:54:59.915206 containerd[2003]: 2025-09-05 23:54:59.878 [INFO][5375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" HandleID="k8s-pod-network.15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" Sep 5 23:54:59.915206 containerd[2003]: 2025-09-05 23:54:59.884 [INFO][5375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:54:59.915206 containerd[2003]: 2025-09-05 23:54:59.898 [INFO][5342] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Sep 5 23:54:59.925732 containerd[2003]: time="2025-09-05T23:54:59.925525663Z" level=info msg="TearDown network for sandbox \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\" successfully" Sep 5 23:54:59.925732 containerd[2003]: time="2025-09-05T23:54:59.925584187Z" level=info msg="StopPodSandbox for \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\" returns successfully" Sep 5 23:54:59.930792 containerd[2003]: time="2025-09-05T23:54:59.928566271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tpnrx,Uid:1ce18320-0b1e-4d63-958f-2d9f5f435dca,Namespace:kube-system,Attempt:1,}" Sep 5 23:54:59.932038 systemd[1]: run-netns-cni\x2d60f05d0c\x2d90ab\x2d208e\x2d8437\x2dc2f7c44e30ff.mount: Deactivated successfully. Sep 5 23:55:00.009153 containerd[2003]: time="2025-09-05T23:55:00.008263443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:00.009153 containerd[2003]: time="2025-09-05T23:55:00.008418327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:00.009153 containerd[2003]: time="2025-09-05T23:55:00.008494323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:00.009153 containerd[2003]: time="2025-09-05T23:55:00.008702355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:00.117810 systemd[1]: Started cri-containerd-4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57.scope - libcontainer container 4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57. Sep 5 23:55:00.175101 containerd[2003]: time="2025-09-05T23:55:00.170759260Z" level=info msg="StopPodSandbox for \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\"" Sep 5 23:55:00.182207 systemd-networkd[1935]: cali926ad072e55: Link UP Sep 5 23:55:00.186832 systemd-networkd[1935]: cali926ad072e55: Gained carrier Sep 5 23:55:00.214091 systemd-networkd[1935]: cali12011a860df: Gained IPv6LL Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:54:59.469 [INFO][5345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0 coredns-674b8bbfcf- kube-system a0f339f8-d06b-4fc1-92df-a5d9b2d87813 989 0 2025-09-05 23:54:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-98 coredns-674b8bbfcf-g4vsz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali926ad072e55 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" Namespace="kube-system" Pod="coredns-674b8bbfcf-g4vsz" WorkloadEndpoint="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-" Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:54:59.469 [INFO][5345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" Namespace="kube-system" Pod="coredns-674b8bbfcf-g4vsz" WorkloadEndpoint="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:54:59.765 [INFO][5378] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" HandleID="k8s-pod-network.dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:54:59.766 [INFO][5378] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" HandleID="k8s-pod-network.dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000483910), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-98", "pod":"coredns-674b8bbfcf-g4vsz", "timestamp":"2025-09-05 23:54:59.765006882 +0000 UTC"}, Hostname:"ip-172-31-23-98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:54:59.769 [INFO][5378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:54:59.884 [INFO][5378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:54:59.884 [INFO][5378] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-98' Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:54:59.918 [INFO][5378] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" host="ip-172-31-23-98" Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:54:59.950 [INFO][5378] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-98" Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:54:59.985 [INFO][5378] ipam/ipam.go 511: Trying affinity for 192.168.30.128/26 host="ip-172-31-23-98" Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:55:00.010 [INFO][5378] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.128/26 host="ip-172-31-23-98" Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:55:00.051 [INFO][5378] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ip-172-31-23-98" Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:55:00.052 [INFO][5378] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" host="ip-172-31-23-98" Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:55:00.073 [INFO][5378] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:55:00.092 [INFO][5378] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" host="ip-172-31-23-98" Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:55:00.137 [INFO][5378] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.30.133/26] block=192.168.30.128/26 handle="k8s-pod-network.dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" host="ip-172-31-23-98" Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:55:00.138 [INFO][5378] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.133/26] handle="k8s-pod-network.dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" host="ip-172-31-23-98" Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:55:00.138 [INFO][5378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:55:00.319159 containerd[2003]: 2025-09-05 23:55:00.138 [INFO][5378] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.133/26] IPv6=[] ContainerID="dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" HandleID="k8s-pod-network.dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" Sep 5 23:55:00.320424 containerd[2003]: 2025-09-05 23:55:00.147 [INFO][5345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" Namespace="kube-system" Pod="coredns-674b8bbfcf-g4vsz" WorkloadEndpoint="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a0f339f8-d06b-4fc1-92df-a5d9b2d87813", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"", Pod:"coredns-674b8bbfcf-g4vsz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali926ad072e55", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:00.320424 containerd[2003]: 2025-09-05 23:55:00.148 [INFO][5345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.133/32] ContainerID="dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" Namespace="kube-system" Pod="coredns-674b8bbfcf-g4vsz" WorkloadEndpoint="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" Sep 5 23:55:00.320424 containerd[2003]: 2025-09-05 23:55:00.148 [INFO][5345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali926ad072e55 ContainerID="dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" Namespace="kube-system" Pod="coredns-674b8bbfcf-g4vsz" WorkloadEndpoint="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" Sep 5 23:55:00.320424 containerd[2003]: 2025-09-05 23:55:00.233 [INFO][5345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" Namespace="kube-system" Pod="coredns-674b8bbfcf-g4vsz" 
WorkloadEndpoint="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" Sep 5 23:55:00.320424 containerd[2003]: 2025-09-05 23:55:00.247 [INFO][5345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" Namespace="kube-system" Pod="coredns-674b8bbfcf-g4vsz" WorkloadEndpoint="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a0f339f8-d06b-4fc1-92df-a5d9b2d87813", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a", Pod:"coredns-674b8bbfcf-g4vsz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali926ad072e55", MAC:"96:74:d2:d5:7e:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:00.320424 containerd[2003]: 2025-09-05 23:55:00.309 [INFO][5345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a" Namespace="kube-system" Pod="coredns-674b8bbfcf-g4vsz" WorkloadEndpoint="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" Sep 5 23:55:00.512047 containerd[2003]: time="2025-09-05T23:55:00.511878342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:00.513390 containerd[2003]: time="2025-09-05T23:55:00.511991994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:00.513390 containerd[2003]: time="2025-09-05T23:55:00.512037162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:00.513390 containerd[2003]: time="2025-09-05T23:55:00.512206578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:00.637253 containerd[2003]: time="2025-09-05T23:55:00.637189962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65bc579fb7-j8488,Uid:af3665b2-d693-4ba1-907b-899c82f2055d,Namespace:calico-system,Attempt:1,} returns sandbox id \"4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57\"" Sep 5 23:55:00.649328 systemd[1]: Started cri-containerd-dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a.scope - libcontainer container dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a. Sep 5 23:55:00.840695 systemd-networkd[1935]: cali18b65359a9a: Link UP Sep 5 23:55:00.842603 systemd-networkd[1935]: cali18b65359a9a: Gained carrier Sep 5 23:55:00.871285 containerd[2003]: time="2025-09-05T23:55:00.871179895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-g4vsz,Uid:a0f339f8-d06b-4fc1-92df-a5d9b2d87813,Namespace:kube-system,Attempt:1,} returns sandbox id \"dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a\"" Sep 5 23:55:00.893176 containerd[2003]: time="2025-09-05T23:55:00.893090119Z" level=info msg="CreateContainer within sandbox \"dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:55:00.933597 containerd[2003]: 2025-09-05 23:55:00.594 [INFO][5474] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Sep 5 23:55:00.933597 containerd[2003]: 2025-09-05 23:55:00.594 [INFO][5474] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" iface="eth0" netns="/var/run/netns/cni-4f378801-9b8f-7e54-f2f3-1a0d3c9de87f" Sep 5 23:55:00.933597 containerd[2003]: 2025-09-05 23:55:00.602 [INFO][5474] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" iface="eth0" netns="/var/run/netns/cni-4f378801-9b8f-7e54-f2f3-1a0d3c9de87f" Sep 5 23:55:00.933597 containerd[2003]: 2025-09-05 23:55:00.612 [INFO][5474] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" iface="eth0" netns="/var/run/netns/cni-4f378801-9b8f-7e54-f2f3-1a0d3c9de87f" Sep 5 23:55:00.933597 containerd[2003]: 2025-09-05 23:55:00.616 [INFO][5474] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Sep 5 23:55:00.933597 containerd[2003]: 2025-09-05 23:55:00.616 [INFO][5474] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Sep 5 23:55:00.933597 containerd[2003]: 2025-09-05 23:55:00.768 [INFO][5531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" HandleID="k8s-pod-network.9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Workload="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" Sep 5 23:55:00.933597 containerd[2003]: 2025-09-05 23:55:00.769 [INFO][5531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:00.933597 containerd[2003]: 2025-09-05 23:55:00.809 [INFO][5531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 23:55:00.933597 containerd[2003]: 2025-09-05 23:55:00.902 [WARNING][5531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" HandleID="k8s-pod-network.9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Workload="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" Sep 5 23:55:00.933597 containerd[2003]: 2025-09-05 23:55:00.902 [INFO][5531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" HandleID="k8s-pod-network.9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Workload="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" Sep 5 23:55:00.933597 containerd[2003]: 2025-09-05 23:55:00.909 [INFO][5531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:00.933597 containerd[2003]: 2025-09-05 23:55:00.916 [INFO][5474] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.361 [INFO][5433] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0 coredns-674b8bbfcf- kube-system 1ce18320-0b1e-4d63-958f-2d9f5f435dca 995 0 2025-09-05 23:54:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-98 coredns-674b8bbfcf-tpnrx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali18b65359a9a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" Namespace="kube-system" Pod="coredns-674b8bbfcf-tpnrx" WorkloadEndpoint="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-" Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.361 [INFO][5433] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" Namespace="kube-system" Pod="coredns-674b8bbfcf-tpnrx" WorkloadEndpoint="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.586 [INFO][5487] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" HandleID="k8s-pod-network.00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.586 [INFO][5487] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" HandleID="k8s-pod-network.00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038c3b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-98", "pod":"coredns-674b8bbfcf-tpnrx", "timestamp":"2025-09-05 23:55:00.585947226 +0000 UTC"}, Hostname:"ip-172-31-23-98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.587 [INFO][5487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.588 [INFO][5487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.588 [INFO][5487] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-98' Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.649 [INFO][5487] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" host="ip-172-31-23-98" Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.680 [INFO][5487] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-98" Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.706 [INFO][5487] ipam/ipam.go 511: Trying affinity for 192.168.30.128/26 host="ip-172-31-23-98" Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.714 [INFO][5487] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.128/26 host="ip-172-31-23-98" Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.721 [INFO][5487] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ip-172-31-23-98" Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.721 [INFO][5487] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" host="ip-172-31-23-98" Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.729 [INFO][5487] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84 Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.747 [INFO][5487] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" host="ip-172-31-23-98" Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.806 [INFO][5487] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.30.134/26] block=192.168.30.128/26 handle="k8s-pod-network.00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" host="ip-172-31-23-98" Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.809 [INFO][5487] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.134/26] handle="k8s-pod-network.00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" host="ip-172-31-23-98" Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.809 [INFO][5487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
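By this point the IPAM plugin has handed out four addresses from the node's host-affine block 192.168.30.128/26 (.131 through .134), each published to its pod as a /32. A quick standard-library sanity check that the claims are consistent with that block (purely illustrative):

// blockcheck.go — illustrative: confirm the /32 addresses claimed above sit
// inside the host-affine block 192.168.30.128/26 (64 addresses, .128–.191).
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.30.128/26")
	claimed := []string{
		"192.168.30.131", // calico-apiserver-85cb674cb8-skhtj
		"192.168.30.132", // calico-kube-controllers-65bc579fb7-j8488
		"192.168.30.133", // coredns-674b8bbfcf-g4vsz
		"192.168.30.134", // coredns-674b8bbfcf-tpnrx
	}
	for _, s := range claimed {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
	}
}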
Sep 5 23:55:00.935635 containerd[2003]: 2025-09-05 23:55:00.809 [INFO][5487] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.134/26] IPv6=[] ContainerID="00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" HandleID="k8s-pod-network.00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" Sep 5 23:55:00.939433 containerd[2003]: 2025-09-05 23:55:00.824 [INFO][5433] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" Namespace="kube-system" Pod="coredns-674b8bbfcf-tpnrx" WorkloadEndpoint="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1ce18320-0b1e-4d63-958f-2d9f5f435dca", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"", Pod:"coredns-674b8bbfcf-tpnrx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18b65359a9a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:00.939433 containerd[2003]: 2025-09-05 23:55:00.824 [INFO][5433] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.134/32] ContainerID="00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" Namespace="kube-system" Pod="coredns-674b8bbfcf-tpnrx" WorkloadEndpoint="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" Sep 5 23:55:00.939433 containerd[2003]: 2025-09-05 23:55:00.824 [INFO][5433] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18b65359a9a ContainerID="00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" Namespace="kube-system" Pod="coredns-674b8bbfcf-tpnrx" WorkloadEndpoint="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" Sep 5 23:55:00.939433 containerd[2003]: 2025-09-05 23:55:00.843 [INFO][5433] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" Namespace="kube-system" Pod="coredns-674b8bbfcf-tpnrx" 
WorkloadEndpoint="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" Sep 5 23:55:00.939433 containerd[2003]: 2025-09-05 23:55:00.847 [INFO][5433] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" Namespace="kube-system" Pod="coredns-674b8bbfcf-tpnrx" WorkloadEndpoint="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1ce18320-0b1e-4d63-958f-2d9f5f435dca", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84", Pod:"coredns-674b8bbfcf-tpnrx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18b65359a9a", MAC:"d6:68:60:93:b8:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:00.939433 containerd[2003]: 2025-09-05 23:55:00.920 [INFO][5433] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84" Namespace="kube-system" Pod="coredns-674b8bbfcf-tpnrx" WorkloadEndpoint="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" Sep 5 23:55:00.945243 systemd[1]: run-netns-cni\x2d4f378801\x2d9b8f\x2d7e54\x2df2f3\x2d1a0d3c9de87f.mount: Deactivated successfully. 
Sep 5 23:55:00.966245 systemd-networkd[1935]: calidb4f94c45d7: Gained IPv6LL Sep 5 23:55:00.999693 containerd[2003]: time="2025-09-05T23:55:00.999480860Z" level=info msg="TearDown network for sandbox \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\" successfully" Sep 5 23:55:00.999693 containerd[2003]: time="2025-09-05T23:55:00.999571772Z" level=info msg="StopPodSandbox for \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\" returns successfully" Sep 5 23:55:01.002675 containerd[2003]: time="2025-09-05T23:55:01.002137396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5d9jn,Uid:a29a0520-465f-4a15-9908-cc439e2ca7ce,Namespace:calico-system,Attempt:1,}" Sep 5 23:55:01.032212 containerd[2003]: time="2025-09-05T23:55:01.031999756Z" level=info msg="CreateContainer within sandbox \"dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a17f1072d851d629456015614224114040a96f47811651b31e9861d54b14c4ef\"" Sep 5 23:55:01.036943 containerd[2003]: time="2025-09-05T23:55:01.036592912Z" level=info msg="StartContainer for \"a17f1072d851d629456015614224114040a96f47811651b31e9861d54b14c4ef\"" Sep 5 23:55:01.065840 containerd[2003]: time="2025-09-05T23:55:01.065632588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:01.068701 containerd[2003]: time="2025-09-05T23:55:01.068441044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:01.068701 containerd[2003]: time="2025-09-05T23:55:01.068597344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:01.076377 containerd[2003]: time="2025-09-05T23:55:01.075680932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:01.166077 systemd[1]: Started cri-containerd-00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84.scope - libcontainer container 00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84. Sep 5 23:55:01.167321 containerd[2003]: time="2025-09-05T23:55:01.162836057Z" level=info msg="StopPodSandbox for \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\"" Sep 5 23:55:01.259693 systemd[1]: Started cri-containerd-a17f1072d851d629456015614224114040a96f47811651b31e9861d54b14c4ef.scope - libcontainer container a17f1072d851d629456015614224114040a96f47811651b31e9861d54b14c4ef. 
Sep 5 23:55:01.416716 systemd-networkd[1935]: cali926ad072e55: Gained IPv6LL Sep 5 23:55:01.454073 containerd[2003]: time="2025-09-05T23:55:01.454013850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tpnrx,Uid:1ce18320-0b1e-4d63-958f-2d9f5f435dca,Namespace:kube-system,Attempt:1,} returns sandbox id \"00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84\"" Sep 5 23:55:01.459917 containerd[2003]: time="2025-09-05T23:55:01.459729870Z" level=info msg="StartContainer for \"a17f1072d851d629456015614224114040a96f47811651b31e9861d54b14c4ef\" returns successfully" Sep 5 23:55:01.478778 containerd[2003]: time="2025-09-05T23:55:01.477617286Z" level=info msg="CreateContainer within sandbox \"00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 23:55:01.543267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2192431296.mount: Deactivated successfully. Sep 5 23:55:01.568875 containerd[2003]: time="2025-09-05T23:55:01.568708711Z" level=info msg="CreateContainer within sandbox \"00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"34743b30d91549878dac69e13ce945c998c831fe658283032c722330bba63bed\"" Sep 5 23:55:01.571092 containerd[2003]: time="2025-09-05T23:55:01.570750643Z" level=info msg="StartContainer for \"34743b30d91549878dac69e13ce945c998c831fe658283032c722330bba63bed\"" Sep 5 23:55:01.901211 systemd[1]: run-containerd-runc-k8s.io-34743b30d91549878dac69e13ce945c998c831fe658283032c722330bba63bed-runc.CwXPpW.mount: Deactivated successfully. Sep 5 23:55:01.941826 systemd[1]: Started cri-containerd-34743b30d91549878dac69e13ce945c998c831fe658283032c722330bba63bed.scope - libcontainer container 34743b30d91549878dac69e13ce945c998c831fe658283032c722330bba63bed. Sep 5 23:55:02.060197 systemd-networkd[1935]: cali12a56036810: Link UP Sep 5 23:55:02.066402 systemd-networkd[1935]: cali12a56036810: Gained carrier Sep 5 23:55:02.073377 containerd[2003]: time="2025-09-05T23:55:02.071256257Z" level=info msg="StopPodSandbox for \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\"" Sep 5 23:55:02.163768 containerd[2003]: 2025-09-05 23:55:01.664 [INFO][5636] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Sep 5 23:55:02.163768 containerd[2003]: 2025-09-05 23:55:01.664 [INFO][5636] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" iface="eth0" netns="/var/run/netns/cni-32b818b1-9090-c082-1b57-d6143855b7fa" Sep 5 23:55:02.163768 containerd[2003]: 2025-09-05 23:55:01.667 [INFO][5636] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" iface="eth0" netns="/var/run/netns/cni-32b818b1-9090-c082-1b57-d6143855b7fa" Sep 5 23:55:02.163768 containerd[2003]: 2025-09-05 23:55:01.668 [INFO][5636] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" iface="eth0" netns="/var/run/netns/cni-32b818b1-9090-c082-1b57-d6143855b7fa" Sep 5 23:55:02.163768 containerd[2003]: 2025-09-05 23:55:01.668 [INFO][5636] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Sep 5 23:55:02.163768 containerd[2003]: 2025-09-05 23:55:01.668 [INFO][5636] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Sep 5 23:55:02.163768 containerd[2003]: 2025-09-05 23:55:01.873 [INFO][5685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" HandleID="k8s-pod-network.c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" Sep 5 23:55:02.163768 containerd[2003]: 2025-09-05 23:55:01.880 [INFO][5685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:02.163768 containerd[2003]: 2025-09-05 23:55:02.012 [INFO][5685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:02.163768 containerd[2003]: 2025-09-05 23:55:02.100 [WARNING][5685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" HandleID="k8s-pod-network.c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" Sep 5 23:55:02.163768 containerd[2003]: 2025-09-05 23:55:02.100 [INFO][5685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" HandleID="k8s-pod-network.c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" Sep 5 23:55:02.163768 containerd[2003]: 2025-09-05 23:55:02.115 [INFO][5685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:02.163768 containerd[2003]: 2025-09-05 23:55:02.136 [INFO][5636] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Sep 5 23:55:02.178387 containerd[2003]: time="2025-09-05T23:55:02.175995546Z" level=info msg="TearDown network for sandbox \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\" successfully" Sep 5 23:55:02.178387 containerd[2003]: time="2025-09-05T23:55:02.176487834Z" level=info msg="StopPodSandbox for \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\" returns successfully" Sep 5 23:55:02.194605 kubelet[3518]: I0905 23:55:02.194500 3518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-g4vsz" podStartSLOduration=55.194473422 podStartE2EDuration="55.194473422s" podCreationTimestamp="2025-09-05 23:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:55:01.976428681 +0000 UTC m=+60.188721228" watchObservedRunningTime="2025-09-05 23:55:02.194473422 +0000 UTC m=+60.406765993" Sep 5 23:55:02.203842 containerd[2003]: time="2025-09-05T23:55:02.203768718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85cb674cb8-xmj4t,Uid:b45a6056-0154-4b46-9f54-64314ddc0dd5,Namespace:calico-apiserver,Attempt:1,}" Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:01.403 [INFO][5578] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0 csi-node-driver- calico-system a29a0520-465f-4a15-9908-cc439e2ca7ce 1010 0 2025-09-05 23:54:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-98 csi-node-driver-5d9jn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali12a56036810 [] [] }} ContainerID="e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" Namespace="calico-system" Pod="csi-node-driver-5d9jn" WorkloadEndpoint="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-" Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:01.404 [INFO][5578] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" Namespace="calico-system" Pod="csi-node-driver-5d9jn" WorkloadEndpoint="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:01.686 [INFO][5666] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" HandleID="k8s-pod-network.e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" Workload="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:01.687 [INFO][5666] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" HandleID="k8s-pod-network.e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" Workload="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3a10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-98", 
"pod":"csi-node-driver-5d9jn", "timestamp":"2025-09-05 23:55:01.686958175 +0000 UTC"}, Hostname:"ip-172-31-23-98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:01.694 [INFO][5666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:01.695 [INFO][5666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:01.697 [INFO][5666] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-98' Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:01.754 [INFO][5666] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" host="ip-172-31-23-98" Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:01.792 [INFO][5666] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-98" Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:01.836 [INFO][5666] ipam/ipam.go 511: Trying affinity for 192.168.30.128/26 host="ip-172-31-23-98" Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:01.854 [INFO][5666] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.128/26 host="ip-172-31-23-98" Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:01.872 [INFO][5666] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ip-172-31-23-98" Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:01.872 [INFO][5666] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" host="ip-172-31-23-98" Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:01.922 [INFO][5666] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314 Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:01.957 [INFO][5666] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" host="ip-172-31-23-98" Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:02.010 [INFO][5666] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.30.135/26] block=192.168.30.128/26 handle="k8s-pod-network.e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" host="ip-172-31-23-98" Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:02.010 [INFO][5666] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.135/26] handle="k8s-pod-network.e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" host="ip-172-31-23-98" Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:02.012 [INFO][5666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:55:02.222491 containerd[2003]: 2025-09-05 23:55:02.012 [INFO][5666] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.135/26] IPv6=[] ContainerID="e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" HandleID="k8s-pod-network.e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" Workload="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" Sep 5 23:55:02.223877 containerd[2003]: 2025-09-05 23:55:02.029 [INFO][5578] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" Namespace="calico-system" Pod="csi-node-driver-5d9jn" WorkloadEndpoint="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a29a0520-465f-4a15-9908-cc439e2ca7ce", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"", Pod:"csi-node-driver-5d9jn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali12a56036810", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:02.223877 containerd[2003]: 2025-09-05 23:55:02.030 [INFO][5578] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.135/32] ContainerID="e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" Namespace="calico-system" Pod="csi-node-driver-5d9jn" WorkloadEndpoint="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" Sep 5 23:55:02.223877 containerd[2003]: 2025-09-05 23:55:02.031 [INFO][5578] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali12a56036810 ContainerID="e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" Namespace="calico-system" Pod="csi-node-driver-5d9jn" WorkloadEndpoint="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" Sep 5 23:55:02.223877 containerd[2003]: 2025-09-05 23:55:02.080 [INFO][5578] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" Namespace="calico-system" Pod="csi-node-driver-5d9jn" WorkloadEndpoint="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" Sep 5 23:55:02.223877 containerd[2003]: 2025-09-05 23:55:02.142 [INFO][5578] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" 
Namespace="calico-system" Pod="csi-node-driver-5d9jn" WorkloadEndpoint="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a29a0520-465f-4a15-9908-cc439e2ca7ce", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314", Pod:"csi-node-driver-5d9jn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali12a56036810", MAC:"8e:86:bc:bf:2c:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:02.223877 containerd[2003]: 2025-09-05 23:55:02.191 [INFO][5578] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314" Namespace="calico-system" Pod="csi-node-driver-5d9jn" WorkloadEndpoint="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" Sep 5 23:55:02.332716 containerd[2003]: time="2025-09-05T23:55:02.328315315Z" level=info msg="StartContainer for \"34743b30d91549878dac69e13ce945c998c831fe658283032c722330bba63bed\" returns successfully" Sep 5 23:55:02.336004 containerd[2003]: time="2025-09-05T23:55:02.330214483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:02.336004 containerd[2003]: time="2025-09-05T23:55:02.330371839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:02.336004 containerd[2003]: time="2025-09-05T23:55:02.330417907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:02.336004 containerd[2003]: time="2025-09-05T23:55:02.330935767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:02.449907 systemd[1]: Started cri-containerd-e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314.scope - libcontainer container e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314. Sep 5 23:55:02.549782 systemd[1]: run-netns-cni\x2d32b818b1\x2d9090\x2dc082\x2d1b57\x2dd6143855b7fa.mount: Deactivated successfully. 
Sep 5 23:55:02.758311 systemd-networkd[1935]: cali18b65359a9a: Gained IPv6LL Sep 5 23:55:03.124366 kubelet[3518]: I0905 23:55:03.123447 3518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tpnrx" podStartSLOduration=56.123416683 podStartE2EDuration="56.123416683s" podCreationTimestamp="2025-09-05 23:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 23:55:02.952708858 +0000 UTC m=+61.165001405" watchObservedRunningTime="2025-09-05 23:55:03.123416683 +0000 UTC m=+61.335709326" Sep 5 23:55:03.202658 containerd[2003]: 2025-09-05 23:55:02.730 [WARNING][5720] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f77d155b-2827-48f6-a494-70cb819e25d7", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9", Pod:"goldmane-54d579b49d-jlk4g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.30.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali12011a860df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:03.202658 containerd[2003]: 2025-09-05 23:55:02.730 [INFO][5720] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Sep 5 23:55:03.202658 containerd[2003]: 2025-09-05 23:55:02.730 [INFO][5720] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" iface="eth0" netns="" Sep 5 23:55:03.202658 containerd[2003]: 2025-09-05 23:55:02.730 [INFO][5720] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Sep 5 23:55:03.202658 containerd[2003]: 2025-09-05 23:55:02.730 [INFO][5720] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Sep 5 23:55:03.202658 containerd[2003]: 2025-09-05 23:55:02.934 [INFO][5796] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" HandleID="k8s-pod-network.e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Workload="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0" Sep 5 23:55:03.202658 containerd[2003]: 2025-09-05 23:55:02.940 [INFO][5796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:03.202658 containerd[2003]: 2025-09-05 23:55:02.943 [INFO][5796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:03.202658 containerd[2003]: 2025-09-05 23:55:03.095 [WARNING][5796] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" HandleID="k8s-pod-network.e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Workload="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0" Sep 5 23:55:03.202658 containerd[2003]: 2025-09-05 23:55:03.096 [INFO][5796] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" HandleID="k8s-pod-network.e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Workload="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0" Sep 5 23:55:03.202658 containerd[2003]: 2025-09-05 23:55:03.180 [INFO][5796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:03.202658 containerd[2003]: 2025-09-05 23:55:03.190 [INFO][5720] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Sep 5 23:55:03.202658 containerd[2003]: time="2025-09-05T23:55:03.201037627Z" level=info msg="TearDown network for sandbox \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\" successfully" Sep 5 23:55:03.202658 containerd[2003]: time="2025-09-05T23:55:03.201077419Z" level=info msg="StopPodSandbox for \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\" returns successfully" Sep 5 23:55:03.202658 containerd[2003]: time="2025-09-05T23:55:03.202225063Z" level=info msg="RemovePodSandbox for \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\"" Sep 5 23:55:03.202658 containerd[2003]: time="2025-09-05T23:55:03.202295779Z" level=info msg="Forcibly stopping sandbox \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\"" Sep 5 23:55:03.270649 systemd-networkd[1935]: cali12a56036810: Gained IPv6LL Sep 5 23:55:03.818070 systemd-networkd[1935]: cali02c16042e63: Link UP Sep 5 23:55:03.824466 systemd-networkd[1935]: cali02c16042e63: Gained carrier Sep 5 23:55:03.871091 containerd[2003]: time="2025-09-05T23:55:03.870678034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5d9jn,Uid:a29a0520-465f-4a15-9908-cc439e2ca7ce,Namespace:calico-system,Attempt:1,} returns sandbox id \"e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314\"" Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:02.726 [INFO][5761] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0 calico-apiserver-85cb674cb8- calico-apiserver b45a6056-0154-4b46-9f54-64314ddc0dd5 1023 0 2025-09-05 23:54:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85cb674cb8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-98 calico-apiserver-85cb674cb8-xmj4t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali02c16042e63 [] [] }} ContainerID="382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" Namespace="calico-apiserver" Pod="calico-apiserver-85cb674cb8-xmj4t" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-" Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:02.726 [INFO][5761] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" Namespace="calico-apiserver" Pod="calico-apiserver-85cb674cb8-xmj4t" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:02.977 [INFO][5801] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" HandleID="k8s-pod-network.382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:02.978 [INFO][5801] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" HandleID="k8s-pod-network.382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" 
Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035ef00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-98", "pod":"calico-apiserver-85cb674cb8-xmj4t", "timestamp":"2025-09-05 23:55:02.977902582 +0000 UTC"}, Hostname:"ip-172-31-23-98", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:02.983 [INFO][5801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:03.180 [INFO][5801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:03.180 [INFO][5801] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-98' Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:03.335 [INFO][5801] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" host="ip-172-31-23-98" Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:03.461 [INFO][5801] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-98" Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:03.591 [INFO][5801] ipam/ipam.go 511: Trying affinity for 192.168.30.128/26 host="ip-172-31-23-98" Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:03.609 [INFO][5801] ipam/ipam.go 158: Attempting to load block cidr=192.168.30.128/26 host="ip-172-31-23-98" Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:03.646 [INFO][5801] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.30.128/26 host="ip-172-31-23-98" Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:03.646 [INFO][5801] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.30.128/26 handle="k8s-pod-network.382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" host="ip-172-31-23-98" Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:03.654 [INFO][5801] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54 Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:03.679 [INFO][5801] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.30.128/26 handle="k8s-pod-network.382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" host="ip-172-31-23-98" Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:03.713 [INFO][5801] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.30.136/26] block=192.168.30.128/26 handle="k8s-pod-network.382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" host="ip-172-31-23-98" Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:03.716 [INFO][5801] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.30.136/26] handle="k8s-pod-network.382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" host="ip-172-31-23-98" Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:03.716 [INFO][5801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 23:55:03.891564 containerd[2003]: 2025-09-05 23:55:03.718 [INFO][5801] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.30.136/26] IPv6=[] ContainerID="382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" HandleID="k8s-pod-network.382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" Sep 5 23:55:03.894273 containerd[2003]: 2025-09-05 23:55:03.743 [INFO][5761] cni-plugin/k8s.go 418: Populated endpoint ContainerID="382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" Namespace="calico-apiserver" Pod="calico-apiserver-85cb674cb8-xmj4t" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0", GenerateName:"calico-apiserver-85cb674cb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"b45a6056-0154-4b46-9f54-64314ddc0dd5", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85cb674cb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"", Pod:"calico-apiserver-85cb674cb8-xmj4t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02c16042e63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:03.894273 containerd[2003]: 2025-09-05 23:55:03.745 [INFO][5761] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.30.136/32] ContainerID="382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" Namespace="calico-apiserver" Pod="calico-apiserver-85cb674cb8-xmj4t" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" Sep 5 23:55:03.894273 containerd[2003]: 2025-09-05 23:55:03.746 [INFO][5761] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali02c16042e63 ContainerID="382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" Namespace="calico-apiserver" Pod="calico-apiserver-85cb674cb8-xmj4t" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" Sep 5 23:55:03.894273 containerd[2003]: 2025-09-05 23:55:03.832 [INFO][5761] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" Namespace="calico-apiserver" Pod="calico-apiserver-85cb674cb8-xmj4t" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" Sep 5 23:55:03.894273 containerd[2003]: 2025-09-05 23:55:03.836 [INFO][5761] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" Namespace="calico-apiserver" Pod="calico-apiserver-85cb674cb8-xmj4t" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0", GenerateName:"calico-apiserver-85cb674cb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"b45a6056-0154-4b46-9f54-64314ddc0dd5", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85cb674cb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54", Pod:"calico-apiserver-85cb674cb8-xmj4t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02c16042e63", MAC:"1e:0f:67:02:8f:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:03.894273 containerd[2003]: 2025-09-05 23:55:03.858 [INFO][5761] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54" Namespace="calico-apiserver" Pod="calico-apiserver-85cb674cb8-xmj4t" WorkloadEndpoint="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" Sep 5 23:55:03.945294 containerd[2003]: 2025-09-05 23:55:03.557 [WARNING][5824] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f77d155b-2827-48f6-a494-70cb819e25d7", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9", Pod:"goldmane-54d579b49d-jlk4g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.30.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali12011a860df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:03.945294 containerd[2003]: 2025-09-05 23:55:03.559 [INFO][5824] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Sep 5 23:55:03.945294 containerd[2003]: 2025-09-05 23:55:03.559 [INFO][5824] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" iface="eth0" netns="" Sep 5 23:55:03.945294 containerd[2003]: 2025-09-05 23:55:03.559 [INFO][5824] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Sep 5 23:55:03.945294 containerd[2003]: 2025-09-05 23:55:03.559 [INFO][5824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Sep 5 23:55:03.945294 containerd[2003]: 2025-09-05 23:55:03.786 [INFO][5834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" HandleID="k8s-pod-network.e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Workload="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0" Sep 5 23:55:03.945294 containerd[2003]: 2025-09-05 23:55:03.798 [INFO][5834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:03.945294 containerd[2003]: 2025-09-05 23:55:03.798 [INFO][5834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:03.945294 containerd[2003]: 2025-09-05 23:55:03.875 [WARNING][5834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" HandleID="k8s-pod-network.e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Workload="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0" Sep 5 23:55:03.945294 containerd[2003]: 2025-09-05 23:55:03.875 [INFO][5834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" HandleID="k8s-pod-network.e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Workload="ip--172--31--23--98-k8s-goldmane--54d579b49d--jlk4g-eth0" Sep 5 23:55:03.945294 containerd[2003]: 2025-09-05 23:55:03.881 [INFO][5834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:03.945294 containerd[2003]: 2025-09-05 23:55:03.914 [INFO][5824] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7" Sep 5 23:55:03.945294 containerd[2003]: time="2025-09-05T23:55:03.943683359Z" level=info msg="TearDown network for sandbox \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\" successfully" Sep 5 23:55:03.970953 containerd[2003]: time="2025-09-05T23:55:03.970800911Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:55:03.971461 containerd[2003]: time="2025-09-05T23:55:03.971302031Z" level=info msg="RemovePodSandbox \"e72d3fbbb128bf33dad1b8616be62b22680688913ba0e918c01bec968b452bd7\" returns successfully" Sep 5 23:55:03.985662 containerd[2003]: time="2025-09-05T23:55:03.985483283Z" level=info msg="StopPodSandbox for \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\"" Sep 5 23:55:04.122279 containerd[2003]: time="2025-09-05T23:55:04.117094375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 23:55:04.122279 containerd[2003]: time="2025-09-05T23:55:04.117242071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 23:55:04.122279 containerd[2003]: time="2025-09-05T23:55:04.117275023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:04.127470 containerd[2003]: time="2025-09-05T23:55:04.122028980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 23:55:04.269906 systemd[1]: Started sshd@9-172.31.23.98:22-139.178.68.195:40030.service - OpenSSH per-connection server daemon (139.178.68.195:40030). Sep 5 23:55:04.304057 systemd[1]: Started cri-containerd-382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54.scope - libcontainer container 382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54. Sep 5 23:55:04.508976 sshd[5909]: Accepted publickey for core from 139.178.68.195 port 40030 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:04.514954 sshd[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:04.532039 systemd-logind[1993]: New session 10 of user core. Sep 5 23:55:04.537722 systemd[1]: Started session-10.scope - Session 10 of User core. 
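sshd logs the client key above as "RSA SHA256:vADW7Q...". That form is the SHA-256 digest of the wire-format public key, base64-encoded without padding. A stdlib-only Go sketch over a made-up placeholder key (the core user's real key is not in this log):

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

// fingerprint reproduces the "SHA256:..." form sshd logs: the SHA-256 of the
// decoded public-key blob, base64 without padding.
func fingerprint(authorizedKey string) (string, error) {
	fields := strings.Fields(authorizedKey)
	if len(fields) < 2 {
		return "", fmt.Errorf("malformed key line")
	}
	blob, err := base64.StdEncoding.DecodeString(fields[1])
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(blob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]), nil
}

func main() {
	// Hypothetical key line for illustration only.
	fp, err := fingerprint("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMRLLFSvAWBLs+VZyJ6LGDqkfVe1RYngVXbbGPrRbbPt demo")
	if err != nil {
		panic(err)
	}
	fmt.Println(fp)
}
```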
Sep 5 23:55:04.592382 containerd[2003]: time="2025-09-05T23:55:04.592299082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85cb674cb8-xmj4t,Uid:b45a6056-0154-4b46-9f54-64314ddc0dd5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54\"" Sep 5 23:55:04.623604 containerd[2003]: 2025-09-05 23:55:04.388 [WARNING][5878] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1ce18320-0b1e-4d63-958f-2d9f5f435dca", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84", Pod:"coredns-674b8bbfcf-tpnrx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18b65359a9a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:04.623604 containerd[2003]: 2025-09-05 23:55:04.399 [INFO][5878] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Sep 5 23:55:04.623604 containerd[2003]: 2025-09-05 23:55:04.403 [INFO][5878] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" iface="eth0" netns="" Sep 5 23:55:04.623604 containerd[2003]: 2025-09-05 23:55:04.405 [INFO][5878] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Sep 5 23:55:04.623604 containerd[2003]: 2025-09-05 23:55:04.406 [INFO][5878] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Sep 5 23:55:04.623604 containerd[2003]: 2025-09-05 23:55:04.552 [INFO][5925] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" HandleID="k8s-pod-network.15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" Sep 5 23:55:04.623604 containerd[2003]: 2025-09-05 23:55:04.553 [INFO][5925] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:04.623604 containerd[2003]: 2025-09-05 23:55:04.553 [INFO][5925] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:04.623604 containerd[2003]: 2025-09-05 23:55:04.589 [WARNING][5925] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" HandleID="k8s-pod-network.15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" Sep 5 23:55:04.623604 containerd[2003]: 2025-09-05 23:55:04.589 [INFO][5925] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" HandleID="k8s-pod-network.15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" Sep 5 23:55:04.623604 containerd[2003]: 2025-09-05 23:55:04.606 [INFO][5925] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:04.623604 containerd[2003]: 2025-09-05 23:55:04.613 [INFO][5878] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Sep 5 23:55:04.623604 containerd[2003]: time="2025-09-05T23:55:04.623042674Z" level=info msg="TearDown network for sandbox \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\" successfully" Sep 5 23:55:04.623604 containerd[2003]: time="2025-09-05T23:55:04.623082598Z" level=info msg="StopPodSandbox for \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\" returns successfully" Sep 5 23:55:04.629588 containerd[2003]: time="2025-09-05T23:55:04.627773062Z" level=info msg="RemovePodSandbox for \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\"" Sep 5 23:55:04.629588 containerd[2003]: time="2025-09-05T23:55:04.627978826Z" level=info msg="Forcibly stopping sandbox \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\"" Sep 5 23:55:04.928731 sshd[5909]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:04.943126 systemd[1]: sshd@9-172.31.23.98:22-139.178.68.195:40030.service: Deactivated successfully. Sep 5 23:55:04.949141 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 23:55:04.953184 systemd-logind[1993]: Session 10 logged out. Waiting for processes to exit. Sep 5 23:55:04.957594 systemd-logind[1993]: Removed session 10. 
Sep 5 23:55:05.062987 containerd[2003]: 2025-09-05 23:55:04.895 [WARNING][5953] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1ce18320-0b1e-4d63-958f-2d9f5f435dca", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"00a9a239e51f51cc9b5e7060c2792e15b4e9b0d1e96d8fe6c13f490e95b7bc84", Pod:"coredns-674b8bbfcf-tpnrx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18b65359a9a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:05.062987 containerd[2003]: 2025-09-05 23:55:04.900 [INFO][5953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Sep 5 23:55:05.062987 containerd[2003]: 2025-09-05 23:55:04.900 [INFO][5953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" iface="eth0" netns="" Sep 5 23:55:05.062987 containerd[2003]: 2025-09-05 23:55:04.900 [INFO][5953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Sep 5 23:55:05.062987 containerd[2003]: 2025-09-05 23:55:04.900 [INFO][5953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Sep 5 23:55:05.062987 containerd[2003]: 2025-09-05 23:55:04.992 [INFO][5961] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" HandleID="k8s-pod-network.15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" Sep 5 23:55:05.062987 containerd[2003]: 2025-09-05 23:55:04.994 [INFO][5961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 5 23:55:05.062987 containerd[2003]: 2025-09-05 23:55:04.994 [INFO][5961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:05.062987 containerd[2003]: 2025-09-05 23:55:05.035 [WARNING][5961] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" HandleID="k8s-pod-network.15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" Sep 5 23:55:05.062987 containerd[2003]: 2025-09-05 23:55:05.035 [INFO][5961] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" HandleID="k8s-pod-network.15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--tpnrx-eth0" Sep 5 23:55:05.062987 containerd[2003]: 2025-09-05 23:55:05.039 [INFO][5961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:05.062987 containerd[2003]: 2025-09-05 23:55:05.048 [INFO][5953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb" Sep 5 23:55:05.065372 containerd[2003]: time="2025-09-05T23:55:05.063968108Z" level=info msg="TearDown network for sandbox \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\" successfully" Sep 5 23:55:05.082581 containerd[2003]: time="2025-09-05T23:55:05.082477004Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:55:05.082903 containerd[2003]: time="2025-09-05T23:55:05.082625228Z" level=info msg="RemovePodSandbox \"15d9e0c78537db79b6b4059a09003f78d713198550475d7754c91d269eaf48fb\" returns successfully" Sep 5 23:55:05.085073 containerd[2003]: time="2025-09-05T23:55:05.085007216Z" level=info msg="StopPodSandbox for \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\"" Sep 5 23:55:05.318182 systemd-networkd[1935]: cali02c16042e63: Gained IPv6LL Sep 5 23:55:05.519647 containerd[2003]: 2025-09-05 23:55:05.261 [WARNING][5978] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0", GenerateName:"calico-kube-controllers-65bc579fb7-", Namespace:"calico-system", SelfLink:"", UID:"af3665b2-d693-4ba1-907b-899c82f2055d", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65bc579fb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57", Pod:"calico-kube-controllers-65bc579fb7-j8488", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidb4f94c45d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:05.519647 containerd[2003]: 2025-09-05 23:55:05.262 [INFO][5978] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Sep 5 23:55:05.519647 containerd[2003]: 2025-09-05 23:55:05.263 [INFO][5978] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" iface="eth0" netns="" Sep 5 23:55:05.519647 containerd[2003]: 2025-09-05 23:55:05.263 [INFO][5978] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Sep 5 23:55:05.519647 containerd[2003]: 2025-09-05 23:55:05.263 [INFO][5978] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Sep 5 23:55:05.519647 containerd[2003]: 2025-09-05 23:55:05.462 [INFO][5985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" HandleID="k8s-pod-network.87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Workload="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0" Sep 5 23:55:05.519647 containerd[2003]: 2025-09-05 23:55:05.463 [INFO][5985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:05.519647 containerd[2003]: 2025-09-05 23:55:05.466 [INFO][5985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:05.519647 containerd[2003]: 2025-09-05 23:55:05.496 [WARNING][5985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" HandleID="k8s-pod-network.87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Workload="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0" Sep 5 23:55:05.519647 containerd[2003]: 2025-09-05 23:55:05.497 [INFO][5985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" HandleID="k8s-pod-network.87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Workload="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0" Sep 5 23:55:05.519647 containerd[2003]: 2025-09-05 23:55:05.504 [INFO][5985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:05.519647 containerd[2003]: 2025-09-05 23:55:05.512 [INFO][5978] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Sep 5 23:55:05.519647 containerd[2003]: time="2025-09-05T23:55:05.519490750Z" level=info msg="TearDown network for sandbox \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\" successfully" Sep 5 23:55:05.522420 containerd[2003]: time="2025-09-05T23:55:05.519747130Z" level=info msg="StopPodSandbox for \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\" returns successfully" Sep 5 23:55:05.522420 containerd[2003]: time="2025-09-05T23:55:05.521781694Z" level=info msg="RemovePodSandbox for \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\"" Sep 5 23:55:05.522420 containerd[2003]: time="2025-09-05T23:55:05.521879854Z" level=info msg="Forcibly stopping sandbox \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\"" Sep 5 23:55:05.793619 containerd[2003]: 2025-09-05 23:55:05.678 [WARNING][5999] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0", GenerateName:"calico-kube-controllers-65bc579fb7-", Namespace:"calico-system", SelfLink:"", UID:"af3665b2-d693-4ba1-907b-899c82f2055d", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65bc579fb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57", Pod:"calico-kube-controllers-65bc579fb7-j8488", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.30.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidb4f94c45d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:05.793619 containerd[2003]: 2025-09-05 23:55:05.681 [INFO][5999] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Sep 5 23:55:05.793619 containerd[2003]: 2025-09-05 23:55:05.681 [INFO][5999] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" iface="eth0" netns="" Sep 5 23:55:05.793619 containerd[2003]: 2025-09-05 23:55:05.681 [INFO][5999] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Sep 5 23:55:05.793619 containerd[2003]: 2025-09-05 23:55:05.681 [INFO][5999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Sep 5 23:55:05.793619 containerd[2003]: 2025-09-05 23:55:05.748 [INFO][6006] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" HandleID="k8s-pod-network.87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Workload="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0" Sep 5 23:55:05.793619 containerd[2003]: 2025-09-05 23:55:05.749 [INFO][6006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:05.793619 containerd[2003]: 2025-09-05 23:55:05.749 [INFO][6006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:05.793619 containerd[2003]: 2025-09-05 23:55:05.776 [WARNING][6006] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" HandleID="k8s-pod-network.87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Workload="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0" Sep 5 23:55:05.793619 containerd[2003]: 2025-09-05 23:55:05.777 [INFO][6006] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" HandleID="k8s-pod-network.87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Workload="ip--172--31--23--98-k8s-calico--kube--controllers--65bc579fb7--j8488-eth0" Sep 5 23:55:05.793619 containerd[2003]: 2025-09-05 23:55:05.782 [INFO][6006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:05.793619 containerd[2003]: 2025-09-05 23:55:05.787 [INFO][5999] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668" Sep 5 23:55:05.795625 containerd[2003]: time="2025-09-05T23:55:05.794898336Z" level=info msg="TearDown network for sandbox \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\" successfully" Sep 5 23:55:05.815049 containerd[2003]: time="2025-09-05T23:55:05.814994796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:55:05.815295 containerd[2003]: time="2025-09-05T23:55:05.815263668Z" level=info msg="RemovePodSandbox \"87abd26e6767f70498db8d1c3275361b0c4b806e7fe598fbe4b975de35993668\" returns successfully" Sep 5 23:55:05.816992 containerd[2003]: time="2025-09-05T23:55:05.816943356Z" level=info msg="StopPodSandbox for \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\"" Sep 5 23:55:05.896370 containerd[2003]: time="2025-09-05T23:55:05.895016952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:05.898280 containerd[2003]: time="2025-09-05T23:55:05.898209456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 5 23:55:05.900613 containerd[2003]: time="2025-09-05T23:55:05.900549960Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:05.912071 containerd[2003]: time="2025-09-05T23:55:05.912014832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:05.914420 containerd[2003]: time="2025-09-05T23:55:05.914330472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 6.269209975s" Sep 5 23:55:05.914714 containerd[2003]: time="2025-09-05T23:55:05.914677344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference 
\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 23:55:05.919287 containerd[2003]: time="2025-09-05T23:55:05.919234188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 23:55:05.929536 containerd[2003]: time="2025-09-05T23:55:05.929471329Z" level=info msg="CreateContainer within sandbox \"130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 23:55:05.970907 containerd[2003]: time="2025-09-05T23:55:05.970834201Z" level=info msg="CreateContainer within sandbox \"130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0a77a8c00bcf6dd837302f232781a4de3b065b824bb63f5714f4146a4490f39d\"" Sep 5 23:55:05.975927 containerd[2003]: time="2025-09-05T23:55:05.975863401Z" level=info msg="StartContainer for \"0a77a8c00bcf6dd837302f232781a4de3b065b824bb63f5714f4146a4490f39d\"" Sep 5 23:55:06.060885 containerd[2003]: 2025-09-05 23:55:05.907 [WARNING][6020] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a0f339f8-d06b-4fc1-92df-a5d9b2d87813", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a", Pod:"coredns-674b8bbfcf-g4vsz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali926ad072e55", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:06.060885 containerd[2003]: 2025-09-05 23:55:05.910 [INFO][6020] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Sep 5 23:55:06.060885 containerd[2003]: 2025-09-05 23:55:05.910 [INFO][6020] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" iface="eth0" netns="" Sep 5 23:55:06.060885 containerd[2003]: 2025-09-05 23:55:05.910 [INFO][6020] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Sep 5 23:55:06.060885 containerd[2003]: 2025-09-05 23:55:05.910 [INFO][6020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Sep 5 23:55:06.060885 containerd[2003]: 2025-09-05 23:55:06.006 [INFO][6031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" HandleID="k8s-pod-network.8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" Sep 5 23:55:06.060885 containerd[2003]: 2025-09-05 23:55:06.006 [INFO][6031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:06.060885 containerd[2003]: 2025-09-05 23:55:06.006 [INFO][6031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:06.060885 containerd[2003]: 2025-09-05 23:55:06.045 [WARNING][6031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" HandleID="k8s-pod-network.8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" Sep 5 23:55:06.060885 containerd[2003]: 2025-09-05 23:55:06.045 [INFO][6031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" HandleID="k8s-pod-network.8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" Sep 5 23:55:06.060885 containerd[2003]: 2025-09-05 23:55:06.053 [INFO][6031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:06.060885 containerd[2003]: 2025-09-05 23:55:06.057 [INFO][6020] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Sep 5 23:55:06.062282 containerd[2003]: time="2025-09-05T23:55:06.061602969Z" level=info msg="TearDown network for sandbox \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\" successfully" Sep 5 23:55:06.062282 containerd[2003]: time="2025-09-05T23:55:06.061682205Z" level=info msg="StopPodSandbox for \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\" returns successfully" Sep 5 23:55:06.065051 containerd[2003]: time="2025-09-05T23:55:06.064148181Z" level=info msg="RemovePodSandbox for \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\"" Sep 5 23:55:06.065051 containerd[2003]: time="2025-09-05T23:55:06.064209261Z" level=info msg="Forcibly stopping sandbox \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\"" Sep 5 23:55:06.092934 systemd[1]: run-containerd-runc-k8s.io-0a77a8c00bcf6dd837302f232781a4de3b065b824bb63f5714f4146a4490f39d-runc.ObkHWj.mount: Deactivated successfully. Sep 5 23:55:06.104900 systemd[1]: Started cri-containerd-0a77a8c00bcf6dd837302f232781a4de3b065b824bb63f5714f4146a4490f39d.scope - libcontainer container 0a77a8c00bcf6dd837302f232781a4de3b065b824bb63f5714f4146a4490f39d. 
Sep 5 23:55:06.213025 containerd[2003]: time="2025-09-05T23:55:06.210798562Z" level=info msg="StartContainer for \"0a77a8c00bcf6dd837302f232781a4de3b065b824bb63f5714f4146a4490f39d\" returns successfully" Sep 5 23:55:06.309791 containerd[2003]: 2025-09-05 23:55:06.200 [WARNING][6065] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a0f339f8-d06b-4fc1-92df-a5d9b2d87813", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"dc9deea86a0acb842143c4c1d809d9f3a42602c224b50ff435e757623706199a", Pod:"coredns-674b8bbfcf-g4vsz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.30.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali926ad072e55", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:06.309791 containerd[2003]: 2025-09-05 23:55:06.201 [INFO][6065] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Sep 5 23:55:06.309791 containerd[2003]: 2025-09-05 23:55:06.201 [INFO][6065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" iface="eth0" netns="" Sep 5 23:55:06.309791 containerd[2003]: 2025-09-05 23:55:06.201 [INFO][6065] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Sep 5 23:55:06.309791 containerd[2003]: 2025-09-05 23:55:06.201 [INFO][6065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Sep 5 23:55:06.309791 containerd[2003]: 2025-09-05 23:55:06.276 [INFO][6085] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" HandleID="k8s-pod-network.8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" Sep 5 23:55:06.309791 containerd[2003]: 2025-09-05 23:55:06.277 [INFO][6085] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:06.309791 containerd[2003]: 2025-09-05 23:55:06.277 [INFO][6085] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:06.309791 containerd[2003]: 2025-09-05 23:55:06.298 [WARNING][6085] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" HandleID="k8s-pod-network.8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" Sep 5 23:55:06.309791 containerd[2003]: 2025-09-05 23:55:06.298 [INFO][6085] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" HandleID="k8s-pod-network.8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Workload="ip--172--31--23--98-k8s-coredns--674b8bbfcf--g4vsz-eth0" Sep 5 23:55:06.309791 containerd[2003]: 2025-09-05 23:55:06.302 [INFO][6085] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:06.309791 containerd[2003]: 2025-09-05 23:55:06.305 [INFO][6065] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840" Sep 5 23:55:06.312403 containerd[2003]: time="2025-09-05T23:55:06.310419346Z" level=info msg="TearDown network for sandbox \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\" successfully" Sep 5 23:55:06.328221 containerd[2003]: time="2025-09-05T23:55:06.318312958Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 5 23:55:06.328221 containerd[2003]: time="2025-09-05T23:55:06.318445522Z" level=info msg="RemovePodSandbox \"8738bccbe51aa7b58f892c892918b09b373dc32172bb230247612bcba95fd840\" returns successfully" Sep 5 23:55:06.330163 containerd[2003]: time="2025-09-05T23:55:06.329631214Z" level=info msg="StopPodSandbox for \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\"" Sep 5 23:55:06.515393 containerd[2003]: 2025-09-05 23:55:06.447 [WARNING][6102] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" WorkloadEndpoint="ip--172--31--23--98-k8s-whisker--6d5f8bf8bf--czhd4-eth0" Sep 5 23:55:06.515393 containerd[2003]: 2025-09-05 23:55:06.448 [INFO][6102] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Sep 5 23:55:06.515393 containerd[2003]: 2025-09-05 23:55:06.448 [INFO][6102] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" iface="eth0" netns="" Sep 5 23:55:06.515393 containerd[2003]: 2025-09-05 23:55:06.448 [INFO][6102] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Sep 5 23:55:06.515393 containerd[2003]: 2025-09-05 23:55:06.448 [INFO][6102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Sep 5 23:55:06.515393 containerd[2003]: 2025-09-05 23:55:06.493 [INFO][6114] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" HandleID="k8s-pod-network.79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Workload="ip--172--31--23--98-k8s-whisker--6d5f8bf8bf--czhd4-eth0" Sep 5 23:55:06.515393 containerd[2003]: 2025-09-05 23:55:06.493 [INFO][6114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:06.515393 containerd[2003]: 2025-09-05 23:55:06.493 [INFO][6114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:06.515393 containerd[2003]: 2025-09-05 23:55:06.506 [WARNING][6114] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" HandleID="k8s-pod-network.79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Workload="ip--172--31--23--98-k8s-whisker--6d5f8bf8bf--czhd4-eth0" Sep 5 23:55:06.515393 containerd[2003]: 2025-09-05 23:55:06.506 [INFO][6114] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" HandleID="k8s-pod-network.79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Workload="ip--172--31--23--98-k8s-whisker--6d5f8bf8bf--czhd4-eth0" Sep 5 23:55:06.515393 containerd[2003]: 2025-09-05 23:55:06.509 [INFO][6114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:06.515393 containerd[2003]: 2025-09-05 23:55:06.512 [INFO][6102] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Sep 5 23:55:06.517297 containerd[2003]: time="2025-09-05T23:55:06.516194579Z" level=info msg="TearDown network for sandbox \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\" successfully" Sep 5 23:55:06.517297 containerd[2003]: time="2025-09-05T23:55:06.516245795Z" level=info msg="StopPodSandbox for \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\" returns successfully" Sep 5 23:55:06.517297 containerd[2003]: time="2025-09-05T23:55:06.517142003Z" level=info msg="RemovePodSandbox for \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\"" Sep 5 23:55:06.517297 containerd[2003]: time="2025-09-05T23:55:06.517208567Z" level=info msg="Forcibly stopping sandbox \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\"" Sep 5 23:55:06.698697 containerd[2003]: 2025-09-05 23:55:06.595 [WARNING][6130] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" WorkloadEndpoint="ip--172--31--23--98-k8s-whisker--6d5f8bf8bf--czhd4-eth0" Sep 5 23:55:06.698697 containerd[2003]: 2025-09-05 23:55:06.596 [INFO][6130] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Sep 5 23:55:06.698697 containerd[2003]: 2025-09-05 23:55:06.596 [INFO][6130] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" iface="eth0" netns="" Sep 5 23:55:06.698697 containerd[2003]: 2025-09-05 23:55:06.596 [INFO][6130] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Sep 5 23:55:06.698697 containerd[2003]: 2025-09-05 23:55:06.597 [INFO][6130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Sep 5 23:55:06.698697 containerd[2003]: 2025-09-05 23:55:06.660 [INFO][6138] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" HandleID="k8s-pod-network.79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Workload="ip--172--31--23--98-k8s-whisker--6d5f8bf8bf--czhd4-eth0" Sep 5 23:55:06.698697 containerd[2003]: 2025-09-05 23:55:06.661 [INFO][6138] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:06.698697 containerd[2003]: 2025-09-05 23:55:06.661 [INFO][6138] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:06.698697 containerd[2003]: 2025-09-05 23:55:06.680 [WARNING][6138] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" HandleID="k8s-pod-network.79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Workload="ip--172--31--23--98-k8s-whisker--6d5f8bf8bf--czhd4-eth0" Sep 5 23:55:06.698697 containerd[2003]: 2025-09-05 23:55:06.680 [INFO][6138] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" HandleID="k8s-pod-network.79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Workload="ip--172--31--23--98-k8s-whisker--6d5f8bf8bf--czhd4-eth0" Sep 5 23:55:06.698697 containerd[2003]: 2025-09-05 23:55:06.683 [INFO][6138] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:06.698697 containerd[2003]: 2025-09-05 23:55:06.692 [INFO][6130] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7" Sep 5 23:55:06.701367 containerd[2003]: time="2025-09-05T23:55:06.699643548Z" level=info msg="TearDown network for sandbox \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\" successfully" Sep 5 23:55:06.716178 containerd[2003]: time="2025-09-05T23:55:06.715490088Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:55:06.716178 containerd[2003]: time="2025-09-05T23:55:06.715821036Z" level=info msg="RemovePodSandbox \"79e90a5211abaf9ec9aaeba78c91a3bce5a6ebb1146da1d8b24df9f79635f7e7\" returns successfully" Sep 5 23:55:06.721521 containerd[2003]: time="2025-09-05T23:55:06.721016520Z" level=info msg="StopPodSandbox for \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\"" Sep 5 23:55:06.981057 containerd[2003]: 2025-09-05 23:55:06.882 [WARNING][6153] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0", GenerateName:"calico-apiserver-85cb674cb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"69af1b0c-a845-4919-910b-83540ca47865", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85cb674cb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b", Pod:"calico-apiserver-85cb674cb8-skhtj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia08a393e916", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:06.981057 containerd[2003]: 2025-09-05 23:55:06.882 [INFO][6153] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Sep 5 23:55:06.981057 containerd[2003]: 2025-09-05 23:55:06.882 [INFO][6153] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" iface="eth0" netns="" Sep 5 23:55:06.981057 containerd[2003]: 2025-09-05 23:55:06.882 [INFO][6153] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Sep 5 23:55:06.981057 containerd[2003]: 2025-09-05 23:55:06.882 [INFO][6153] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Sep 5 23:55:06.981057 containerd[2003]: 2025-09-05 23:55:06.943 [INFO][6160] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" HandleID="k8s-pod-network.3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0" Sep 5 23:55:06.981057 containerd[2003]: 2025-09-05 23:55:06.943 [INFO][6160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:06.981057 containerd[2003]: 2025-09-05 23:55:06.943 [INFO][6160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:06.981057 containerd[2003]: 2025-09-05 23:55:06.960 [WARNING][6160] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" HandleID="k8s-pod-network.3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0" Sep 5 23:55:06.981057 containerd[2003]: 2025-09-05 23:55:06.960 [INFO][6160] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" HandleID="k8s-pod-network.3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0" Sep 5 23:55:06.981057 containerd[2003]: 2025-09-05 23:55:06.965 [INFO][6160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:06.981057 containerd[2003]: 2025-09-05 23:55:06.971 [INFO][6153] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Sep 5 23:55:06.981057 containerd[2003]: time="2025-09-05T23:55:06.980488418Z" level=info msg="TearDown network for sandbox \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\" successfully" Sep 5 23:55:06.981057 containerd[2003]: time="2025-09-05T23:55:06.980532854Z" level=info msg="StopPodSandbox for \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\" returns successfully" Sep 5 23:55:06.987814 containerd[2003]: time="2025-09-05T23:55:06.981752894Z" level=info msg="RemovePodSandbox for \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\"" Sep 5 23:55:06.987814 containerd[2003]: time="2025-09-05T23:55:06.981826226Z" level=info msg="Forcibly stopping sandbox \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\"" Sep 5 23:55:07.047525 kubelet[3518]: I0905 23:55:07.046642 3518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85cb674cb8-skhtj" podStartSLOduration=36.767113075 podStartE2EDuration="43.046614226s" podCreationTimestamp="2025-09-05 23:54:24 +0000 UTC" firstStartedPulling="2025-09-05 23:54:59.639405737 +0000 UTC m=+57.851698272" lastFinishedPulling="2025-09-05 23:55:05.9189069 +0000 UTC m=+64.131199423" observedRunningTime="2025-09-05 23:55:07.04546729 +0000 UTC m=+65.257759837" watchObservedRunningTime="2025-09-05 23:55:07.046614226 +0000 UTC m=+65.258906881" Sep 5 23:55:07.231547 containerd[2003]: 2025-09-05 23:55:07.103 [WARNING][6174] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0", GenerateName:"calico-apiserver-85cb674cb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"69af1b0c-a845-4919-910b-83540ca47865", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85cb674cb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"130ddfa7dd9babf01c929a319f5b03f042fbd354087ab26106d8289087b1bf9b", Pod:"calico-apiserver-85cb674cb8-skhtj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia08a393e916", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:55:07.231547 containerd[2003]: 2025-09-05 23:55:07.103 [INFO][6174] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Sep 5 23:55:07.231547 containerd[2003]: 2025-09-05 23:55:07.103 [INFO][6174] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" iface="eth0" netns="" Sep 5 23:55:07.231547 containerd[2003]: 2025-09-05 23:55:07.103 [INFO][6174] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Sep 5 23:55:07.231547 containerd[2003]: 2025-09-05 23:55:07.104 [INFO][6174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Sep 5 23:55:07.231547 containerd[2003]: 2025-09-05 23:55:07.202 [INFO][6184] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" HandleID="k8s-pod-network.3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0" Sep 5 23:55:07.231547 containerd[2003]: 2025-09-05 23:55:07.204 [INFO][6184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:55:07.231547 containerd[2003]: 2025-09-05 23:55:07.204 [INFO][6184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:55:07.231547 containerd[2003]: 2025-09-05 23:55:07.221 [WARNING][6184] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" HandleID="k8s-pod-network.3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0" Sep 5 23:55:07.231547 containerd[2003]: 2025-09-05 23:55:07.221 [INFO][6184] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" HandleID="k8s-pod-network.3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--skhtj-eth0" Sep 5 23:55:07.231547 containerd[2003]: 2025-09-05 23:55:07.225 [INFO][6184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:55:07.231547 containerd[2003]: 2025-09-05 23:55:07.227 [INFO][6174] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303" Sep 5 23:55:07.234411 containerd[2003]: time="2025-09-05T23:55:07.231323315Z" level=info msg="TearDown network for sandbox \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\" successfully" Sep 5 23:55:07.242056 containerd[2003]: time="2025-09-05T23:55:07.241193591Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:55:07.242056 containerd[2003]: time="2025-09-05T23:55:07.241304435Z" level=info msg="RemovePodSandbox \"3ec49a52061359dc77b4966f240742c736b785db91fcf51014eb3b0f14c1c303\" returns successfully" Sep 5 23:55:08.207265 ntpd[1986]: Listen normally on 8 vxlan.calico 192.168.30.128:123 Sep 5 23:55:08.209055 ntpd[1986]: 5 Sep 23:55:08 ntpd[1986]: Listen normally on 8 vxlan.calico 192.168.30.128:123 Sep 5 23:55:08.209055 ntpd[1986]: 5 Sep 23:55:08 ntpd[1986]: Listen normally on 9 cali9f55ae7a108 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 5 23:55:08.209055 ntpd[1986]: 5 Sep 23:55:08 ntpd[1986]: Listen normally on 10 vxlan.calico [fe80::6475:aaff:feda:1ee0%5]:123 Sep 5 23:55:08.209055 ntpd[1986]: 5 Sep 23:55:08 ntpd[1986]: Listen normally on 11 cali12011a860df [fe80::ecee:eeff:feee:eeee%8]:123 Sep 5 23:55:08.209055 ntpd[1986]: 5 Sep 23:55:08 ntpd[1986]: Listen normally on 12 calia08a393e916 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 5 23:55:08.209055 ntpd[1986]: 5 Sep 23:55:08 ntpd[1986]: Listen normally on 13 calidb4f94c45d7 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 5 23:55:08.209055 ntpd[1986]: 5 Sep 23:55:08 ntpd[1986]: Listen normally on 14 cali926ad072e55 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 5 23:55:08.209055 ntpd[1986]: 5 Sep 23:55:08 ntpd[1986]: Listen normally on 15 cali18b65359a9a [fe80::ecee:eeff:feee:eeee%12]:123 Sep 5 23:55:08.209055 ntpd[1986]: 5 Sep 23:55:08 ntpd[1986]: Listen normally on 16 cali12a56036810 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 5 23:55:08.209055 ntpd[1986]: 5 Sep 23:55:08 ntpd[1986]: Listen normally on 17 cali02c16042e63 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 5 23:55:08.207600 ntpd[1986]: Listen normally on 9 cali9f55ae7a108 [fe80::ecee:eeff:feee:eeee%4]:123 Sep 5 23:55:08.207683 ntpd[1986]: Listen normally on 10 vxlan.calico [fe80::6475:aaff:feda:1ee0%5]:123 Sep 5 23:55:08.207753 ntpd[1986]: Listen normally on 11 cali12011a860df [fe80::ecee:eeff:feee:eeee%8]:123 Sep 5 23:55:08.207821 ntpd[1986]: Listen normally on 12 calia08a393e916 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 5 
23:55:08.207888 ntpd[1986]: Listen normally on 13 calidb4f94c45d7 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 5 23:55:08.207959 ntpd[1986]: Listen normally on 14 cali926ad072e55 [fe80::ecee:eeff:feee:eeee%11]:123 Sep 5 23:55:08.208026 ntpd[1986]: Listen normally on 15 cali18b65359a9a [fe80::ecee:eeff:feee:eeee%12]:123 Sep 5 23:55:08.208098 ntpd[1986]: Listen normally on 16 cali12a56036810 [fe80::ecee:eeff:feee:eeee%13]:123 Sep 5 23:55:08.208167 ntpd[1986]: Listen normally on 17 cali02c16042e63 [fe80::ecee:eeff:feee:eeee%14]:123 Sep 5 23:55:09.331515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2671173043.mount: Deactivated successfully. Sep 5 23:55:09.972160 systemd[1]: Started sshd@10-172.31.23.98:22-139.178.68.195:56292.service - OpenSSH per-connection server daemon (139.178.68.195:56292). Sep 5 23:55:10.195601 sshd[6211]: Accepted publickey for core from 139.178.68.195 port 56292 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:10.204961 sshd[6211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:10.222367 systemd-logind[1993]: New session 11 of user core. Sep 5 23:55:10.228123 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 23:55:10.588063 sshd[6211]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:10.602136 systemd[1]: sshd@10-172.31.23.98:22-139.178.68.195:56292.service: Deactivated successfully. Sep 5 23:55:10.613034 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 23:55:10.617588 systemd-logind[1993]: Session 11 logged out. Waiting for processes to exit. Sep 5 23:55:10.623256 systemd-logind[1993]: Removed session 11. Sep 5 23:55:10.751554 containerd[2003]: time="2025-09-05T23:55:10.751475152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:10.755551 containerd[2003]: time="2025-09-05T23:55:10.755446096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 5 23:55:10.757898 containerd[2003]: time="2025-09-05T23:55:10.757799656Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:10.764396 containerd[2003]: time="2025-09-05T23:55:10.764123945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:10.766700 containerd[2003]: time="2025-09-05T23:55:10.766412969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 4.845654829s" Sep 5 23:55:10.766700 containerd[2003]: time="2025-09-05T23:55:10.766489985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 5 23:55:10.771447 containerd[2003]: time="2025-09-05T23:55:10.770821781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 23:55:10.783173 containerd[2003]: 
time="2025-09-05T23:55:10.782940977Z" level=info msg="CreateContainer within sandbox \"054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 23:55:10.815173 containerd[2003]: time="2025-09-05T23:55:10.812959325Z" level=info msg="CreateContainer within sandbox \"054214fb88c135390445dae38d4e29a1816224ecc84c04df26adbdd80e5713e9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"99e8fe26ed54811d375ddd1c10d1766f6ffe94a612b06de9c2e3ec758b003b56\"" Sep 5 23:55:10.819578 containerd[2003]: time="2025-09-05T23:55:10.817565525Z" level=info msg="StartContainer for \"99e8fe26ed54811d375ddd1c10d1766f6ffe94a612b06de9c2e3ec758b003b56\"" Sep 5 23:55:10.933705 systemd[1]: Started cri-containerd-99e8fe26ed54811d375ddd1c10d1766f6ffe94a612b06de9c2e3ec758b003b56.scope - libcontainer container 99e8fe26ed54811d375ddd1c10d1766f6ffe94a612b06de9c2e3ec758b003b56. Sep 5 23:55:11.027171 containerd[2003]: time="2025-09-05T23:55:11.027067370Z" level=info msg="StartContainer for \"99e8fe26ed54811d375ddd1c10d1766f6ffe94a612b06de9c2e3ec758b003b56\" returns successfully" Sep 5 23:55:13.403167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2408526316.mount: Deactivated successfully. Sep 5 23:55:13.440195 containerd[2003]: time="2025-09-05T23:55:13.440123874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:13.442494 containerd[2003]: time="2025-09-05T23:55:13.442427958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 5 23:55:13.444669 containerd[2003]: time="2025-09-05T23:55:13.444588450Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:13.452376 containerd[2003]: time="2025-09-05T23:55:13.451467270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:13.454970 containerd[2003]: time="2025-09-05T23:55:13.454879230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 2.683993081s" Sep 5 23:55:13.455154 containerd[2003]: time="2025-09-05T23:55:13.454956306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 5 23:55:13.463838 containerd[2003]: time="2025-09-05T23:55:13.463790970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 23:55:13.471453 containerd[2003]: time="2025-09-05T23:55:13.471394662Z" level=info msg="CreateContainer within sandbox \"22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 23:55:13.504773 containerd[2003]: time="2025-09-05T23:55:13.504713646Z" level=info msg="CreateContainer within sandbox 
\"22e14d460c616a9744970ab6afca7c7afd4222d16fe332186df6c2f3e378262e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0e0fac7d5b946ae628eedbdbd04d2d65c10abefc9e620d8fe890be0cca03c18d\"" Sep 5 23:55:13.508432 containerd[2003]: time="2025-09-05T23:55:13.506560866Z" level=info msg="StartContainer for \"0e0fac7d5b946ae628eedbdbd04d2d65c10abefc9e620d8fe890be0cca03c18d\"" Sep 5 23:55:13.578733 systemd[1]: Started cri-containerd-0e0fac7d5b946ae628eedbdbd04d2d65c10abefc9e620d8fe890be0cca03c18d.scope - libcontainer container 0e0fac7d5b946ae628eedbdbd04d2d65c10abefc9e620d8fe890be0cca03c18d. Sep 5 23:55:13.698543 containerd[2003]: time="2025-09-05T23:55:13.698470303Z" level=info msg="StartContainer for \"0e0fac7d5b946ae628eedbdbd04d2d65c10abefc9e620d8fe890be0cca03c18d\" returns successfully" Sep 5 23:55:14.090137 kubelet[3518]: I0905 23:55:14.089891 3518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-jlk4g" podStartSLOduration=29.968986377 podStartE2EDuration="41.089871029s" podCreationTimestamp="2025-09-05 23:54:33 +0000 UTC" firstStartedPulling="2025-09-05 23:54:59.649143173 +0000 UTC m=+57.861435696" lastFinishedPulling="2025-09-05 23:55:10.770027813 +0000 UTC m=+68.982320348" observedRunningTime="2025-09-05 23:55:11.136738118 +0000 UTC m=+69.349030653" watchObservedRunningTime="2025-09-05 23:55:14.089871029 +0000 UTC m=+72.302163588" Sep 5 23:55:15.641920 systemd[1]: Started sshd@11-172.31.23.98:22-139.178.68.195:56306.service - OpenSSH per-connection server daemon (139.178.68.195:56306). Sep 5 23:55:15.866438 sshd[6363]: Accepted publickey for core from 139.178.68.195 port 56306 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:15.871989 sshd[6363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:15.888711 systemd-logind[1993]: New session 12 of user core. Sep 5 23:55:15.894657 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 23:55:16.358729 sshd[6363]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:16.370868 systemd[1]: sshd@11-172.31.23.98:22-139.178.68.195:56306.service: Deactivated successfully. Sep 5 23:55:16.379631 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 23:55:16.383172 systemd-logind[1993]: Session 12 logged out. Waiting for processes to exit. Sep 5 23:55:16.409008 systemd[1]: Started sshd@12-172.31.23.98:22-139.178.68.195:56312.service - OpenSSH per-connection server daemon (139.178.68.195:56312). Sep 5 23:55:16.412653 systemd-logind[1993]: Removed session 12. 
Sep 5 23:55:16.579433 containerd[2003]: time="2025-09-05T23:55:16.579318189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:16.582119 containerd[2003]: time="2025-09-05T23:55:16.582039885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 5 23:55:16.584324 containerd[2003]: time="2025-09-05T23:55:16.584178465Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:16.590979 containerd[2003]: time="2025-09-05T23:55:16.590868729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:16.592789 containerd[2003]: time="2025-09-05T23:55:16.592697901Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 3.128665527s" Sep 5 23:55:16.592789 containerd[2003]: time="2025-09-05T23:55:16.592773117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 5 23:55:16.595965 containerd[2003]: time="2025-09-05T23:55:16.595745073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 23:55:16.624056 containerd[2003]: time="2025-09-05T23:55:16.623836918Z" level=info msg="CreateContainer within sandbox \"4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 23:55:16.634261 sshd[6377]: Accepted publickey for core from 139.178.68.195 port 56312 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:16.640937 sshd[6377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:16.660545 systemd-logind[1993]: New session 13 of user core. Sep 5 23:55:16.673850 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 23:55:16.693440 containerd[2003]: time="2025-09-05T23:55:16.692065354Z" level=info msg="CreateContainer within sandbox \"4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"51cc284f2e7801fb17eb35014d789b058b86842a805398e96bea4997564bcdf2\"" Sep 5 23:55:16.698000 containerd[2003]: time="2025-09-05T23:55:16.696105082Z" level=info msg="StartContainer for \"51cc284f2e7801fb17eb35014d789b058b86842a805398e96bea4997564bcdf2\"" Sep 5 23:55:16.778844 systemd[1]: Started cri-containerd-51cc284f2e7801fb17eb35014d789b058b86842a805398e96bea4997564bcdf2.scope - libcontainer container 51cc284f2e7801fb17eb35014d789b058b86842a805398e96bea4997564bcdf2. 
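
Note that the sandbox "4cdc744700d926d5cd645e25231c8fc3aa70577aa18a3263660ee863a153eb57" targeted by this CreateContainer is the same ContainerID recorded on the calico-kube-controllers WorkloadEndpoint in the teardown entries earlier. That is why those force-stops logged "CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP": the endpoint already belongs to this live sandbox, so Calico only cleans up the stale sandbox record and finds no address left to release.
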
Sep 5 23:55:16.903095 containerd[2003]: time="2025-09-05T23:55:16.902789795Z" level=info msg="StartContainer for \"51cc284f2e7801fb17eb35014d789b058b86842a805398e96bea4997564bcdf2\" returns successfully" Sep 5 23:55:17.128529 sshd[6377]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:17.142767 systemd[1]: sshd@12-172.31.23.98:22-139.178.68.195:56312.service: Deactivated successfully. Sep 5 23:55:17.153319 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 23:55:17.157863 kubelet[3518]: I0905 23:55:17.154387 3518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-65bc579fb7-j8488" podStartSLOduration=27.205091861 podStartE2EDuration="43.154313036s" podCreationTimestamp="2025-09-05 23:54:34 +0000 UTC" firstStartedPulling="2025-09-05 23:55:00.645814626 +0000 UTC m=+58.858107161" lastFinishedPulling="2025-09-05 23:55:16.595035801 +0000 UTC m=+74.807328336" observedRunningTime="2025-09-05 23:55:17.147884096 +0000 UTC m=+75.360176643" watchObservedRunningTime="2025-09-05 23:55:17.154313036 +0000 UTC m=+75.366605571" Sep 5 23:55:17.157863 kubelet[3518]: I0905 23:55:17.154604 3518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7ff96fb9d8-qmwmw" podStartSLOduration=5.6900235850000005 podStartE2EDuration="23.154590884s" podCreationTimestamp="2025-09-05 23:54:54 +0000 UTC" firstStartedPulling="2025-09-05 23:54:55.993983967 +0000 UTC m=+54.206276502" lastFinishedPulling="2025-09-05 23:55:13.458551266 +0000 UTC m=+71.670843801" observedRunningTime="2025-09-05 23:55:14.093580649 +0000 UTC m=+72.305873184" watchObservedRunningTime="2025-09-05 23:55:17.154590884 +0000 UTC m=+75.366883419" Sep 5 23:55:17.203692 systemd-logind[1993]: Session 13 logged out. Waiting for processes to exit. Sep 5 23:55:17.206236 systemd[1]: Started sshd@13-172.31.23.98:22-139.178.68.195:56316.service - OpenSSH per-connection server daemon (139.178.68.195:56316). Sep 5 23:55:17.217907 systemd-logind[1993]: Removed session 13. Sep 5 23:55:17.456966 sshd[6441]: Accepted publickey for core from 139.178.68.195 port 56316 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:17.459933 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:17.470451 systemd-logind[1993]: New session 14 of user core. Sep 5 23:55:17.479733 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 23:55:17.764485 sshd[6441]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:17.772142 systemd[1]: sshd@13-172.31.23.98:22-139.178.68.195:56316.service: Deactivated successfully. Sep 5 23:55:17.777960 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 23:55:17.779968 systemd-logind[1993]: Session 14 logged out. Waiting for processes to exit. Sep 5 23:55:17.782226 systemd-logind[1993]: Removed session 14. 
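
The "bytes read" counters in the pull events measure only what a given pull actually fetched over the network (content already present locally is not re-downloaded), so dividing by the reported pull time gives a rough download throughput. For the kube-controllers pull above, roughly:

    package main

    import "fmt"

    func main() {
    	// Figures from the kube-controllers pull events above.
    	bytesRead := 48134957.0 // "bytes read=48134957"
    	secs := 3.128665527     // "in 3.128665527s"
    	fmt.Printf("%.1f MiB/s\n", bytesRead/secs/(1<<20)) // ~14.7 MiB/s
    }
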
Sep 5 23:55:18.079163 containerd[2003]: time="2025-09-05T23:55:18.078954285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:18.082627 containerd[2003]: time="2025-09-05T23:55:18.082546593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 5 23:55:18.084950 containerd[2003]: time="2025-09-05T23:55:18.084845229Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:18.090460 containerd[2003]: time="2025-09-05T23:55:18.090391209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:18.092635 containerd[2003]: time="2025-09-05T23:55:18.092259177Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.4964416s" Sep 5 23:55:18.092635 containerd[2003]: time="2025-09-05T23:55:18.092333709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 5 23:55:18.097374 containerd[2003]: time="2025-09-05T23:55:18.097113837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 23:55:18.104240 containerd[2003]: time="2025-09-05T23:55:18.104138409Z" level=info msg="CreateContainer within sandbox \"e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 23:55:18.170149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2479885107.mount: Deactivated successfully. Sep 5 23:55:18.197888 containerd[2003]: time="2025-09-05T23:55:18.197815053Z" level=info msg="CreateContainer within sandbox \"e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"814eb190b7bc6ccbdf31873d7bf30ff54466d57af83ff357fb51286a20ec7b6c\"" Sep 5 23:55:18.201951 containerd[2003]: time="2025-09-05T23:55:18.201865089Z" level=info msg="StartContainer for \"814eb190b7bc6ccbdf31873d7bf30ff54466d57af83ff357fb51286a20ec7b6c\"" Sep 5 23:55:18.317292 systemd[1]: Started cri-containerd-814eb190b7bc6ccbdf31873d7bf30ff54466d57af83ff357fb51286a20ec7b6c.scope - libcontainer container 814eb190b7bc6ccbdf31873d7bf30ff54466d57af83ff357fb51286a20ec7b6c. 
Sep 5 23:55:18.428644 containerd[2003]: time="2025-09-05T23:55:18.428449727Z" level=info msg="StartContainer for \"814eb190b7bc6ccbdf31873d7bf30ff54466d57af83ff357fb51286a20ec7b6c\" returns successfully" Sep 5 23:55:18.460941 containerd[2003]: time="2025-09-05T23:55:18.460854743Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:18.463413 containerd[2003]: time="2025-09-05T23:55:18.462935435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 23:55:18.469687 containerd[2003]: time="2025-09-05T23:55:18.469460111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 372.27683ms" Sep 5 23:55:18.469687 containerd[2003]: time="2025-09-05T23:55:18.469532747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 23:55:18.471791 containerd[2003]: time="2025-09-05T23:55:18.471711707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 23:55:18.481458 containerd[2003]: time="2025-09-05T23:55:18.481111079Z" level=info msg="CreateContainer within sandbox \"382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 23:55:18.512487 containerd[2003]: time="2025-09-05T23:55:18.512317187Z" level=info msg="CreateContainer within sandbox \"382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5b51783cea63584c32d9ee8e1725512a10998b2dfaf64520dcca590be39952d6\"" Sep 5 23:55:18.515468 containerd[2003]: time="2025-09-05T23:55:18.515413115Z" level=info msg="StartContainer for \"5b51783cea63584c32d9ee8e1725512a10998b2dfaf64520dcca590be39952d6\"" Sep 5 23:55:18.569732 systemd[1]: Started cri-containerd-5b51783cea63584c32d9ee8e1725512a10998b2dfaf64520dcca590be39952d6.scope - libcontainer container 5b51783cea63584c32d9ee8e1725512a10998b2dfaf64520dcca590be39952d6. 
Sep 5 23:55:18.661816 containerd[2003]: time="2025-09-05T23:55:18.661707600Z" level=info msg="StartContainer for \"5b51783cea63584c32d9ee8e1725512a10998b2dfaf64520dcca590be39952d6\" returns successfully" Sep 5 23:55:19.144044 kubelet[3518]: I0905 23:55:19.143937 3518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85cb674cb8-xmj4t" podStartSLOduration=41.270911733 podStartE2EDuration="55.143892046s" podCreationTimestamp="2025-09-05 23:54:24 +0000 UTC" firstStartedPulling="2025-09-05 23:55:04.598147642 +0000 UTC m=+62.810440165" lastFinishedPulling="2025-09-05 23:55:18.471127859 +0000 UTC m=+76.683420478" observedRunningTime="2025-09-05 23:55:19.141117934 +0000 UTC m=+77.353410493" watchObservedRunningTime="2025-09-05 23:55:19.143892046 +0000 UTC m=+77.356184605" Sep 5 23:55:20.123455 kubelet[3518]: I0905 23:55:20.123382 3518 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:55:20.338377 containerd[2003]: time="2025-09-05T23:55:20.336425892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:20.340636 containerd[2003]: time="2025-09-05T23:55:20.339526692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 5 23:55:20.341555 containerd[2003]: time="2025-09-05T23:55:20.341475708Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:20.353873 containerd[2003]: time="2025-09-05T23:55:20.353165784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 23:55:20.358365 containerd[2003]: time="2025-09-05T23:55:20.358210368Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.886422209s" Sep 5 23:55:20.358542 containerd[2003]: time="2025-09-05T23:55:20.358407996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 5 23:55:20.372593 containerd[2003]: time="2025-09-05T23:55:20.372436584Z" level=info msg="CreateContainer within sandbox \"e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 23:55:20.439594 containerd[2003]: time="2025-09-05T23:55:20.439316965Z" level=info msg="CreateContainer within sandbox \"e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fbdac8ad30d853063b911597d394980662805b5458a149ba316229c732ea320d\"" Sep 5 23:55:20.440688 containerd[2003]: time="2025-09-05T23:55:20.440634049Z" level=info msg="StartContainer for \"fbdac8ad30d853063b911597d394980662805b5458a149ba316229c732ea320d\"" Sep 5 23:55:20.551788 systemd[1]: Started cri-containerd-fbdac8ad30d853063b911597d394980662805b5458a149ba316229c732ea320d.scope - libcontainer container fbdac8ad30d853063b911597d394980662805b5458a149ba316229c732ea320d. Sep 5 23:55:20.689200 containerd[2003]: time="2025-09-05T23:55:20.689100650Z" level=info msg="StartContainer for \"fbdac8ad30d853063b911597d394980662805b5458a149ba316229c732ea320d\" returns successfully" Sep 5 23:55:21.353103 kubelet[3518]: I0905 23:55:21.352596 3518 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 23:55:21.353103 kubelet[3518]: I0905 23:55:21.352680 3518 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 23:55:22.808910 systemd[1]: Started sshd@14-172.31.23.98:22-139.178.68.195:38394.service - OpenSSH per-connection server daemon (139.178.68.195:38394). Sep 5 23:55:23.012728 sshd[6620]: Accepted publickey for core from 139.178.68.195 port 38394 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:23.016790 sshd[6620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:23.027473 systemd-logind[1993]: New session 15 of user core. Sep 5 23:55:23.038759 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 23:55:23.394309 sshd[6620]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:23.409472 systemd[1]: sshd@14-172.31.23.98:22-139.178.68.195:38394.service: Deactivated successfully. Sep 5 23:55:23.418380 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 23:55:23.422660 systemd-logind[1993]: Session 15 logged out. Waiting for processes to exit. Sep 5 23:55:23.427486 systemd-logind[1993]: Removed session 15. Sep 5 23:55:28.436977 systemd[1]: Started sshd@15-172.31.23.98:22-139.178.68.195:38406.service - OpenSSH per-connection server daemon (139.178.68.195:38406). Sep 5 23:55:28.625692 sshd[6660]: Accepted publickey for core from 139.178.68.195 port 38406 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:28.629852 sshd[6660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:28.644661 systemd-logind[1993]: New session 16 of user core. Sep 5 23:55:28.653796 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 23:55:28.983566 sshd[6660]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:28.992847 systemd[1]: sshd@15-172.31.23.98:22-139.178.68.195:38406.service: Deactivated successfully. Sep 5 23:55:29.001068 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 23:55:29.008657 systemd-logind[1993]: Session 16 logged out. Waiting for processes to exit. Sep 5 23:55:29.013205 systemd-logind[1993]: Removed session 16.
Sep 5 23:55:29.699560 kubelet[3518]: I0905 23:55:29.699421 3518 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 23:55:29.746595 kubelet[3518]: I0905 23:55:29.744723 3518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5d9jn" podStartSLOduration=39.263652617 podStartE2EDuration="55.744697931s" podCreationTimestamp="2025-09-05 23:54:34 +0000 UTC" firstStartedPulling="2025-09-05 23:55:03.878958466 +0000 UTC m=+62.091251001" lastFinishedPulling="2025-09-05 23:55:20.360003792 +0000 UTC m=+78.572296315" observedRunningTime="2025-09-05 23:55:21.170809956 +0000 UTC m=+79.383102515" watchObservedRunningTime="2025-09-05 23:55:29.744697931 +0000 UTC m=+87.956990466" Sep 5 23:55:34.035114 systemd[1]: Started sshd@16-172.31.23.98:22-139.178.68.195:44466.service - OpenSSH per-connection server daemon (139.178.68.195:44466). Sep 5 23:55:34.249468 sshd[6675]: Accepted publickey for core from 139.178.68.195 port 44466 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:34.255101 sshd[6675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:34.276000 systemd-logind[1993]: New session 17 of user core. Sep 5 23:55:34.282853 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 23:55:34.589935 sshd[6675]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:34.596741 systemd[1]: sshd@16-172.31.23.98:22-139.178.68.195:44466.service: Deactivated successfully. Sep 5 23:55:34.602825 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 23:55:34.612317 systemd-logind[1993]: Session 17 logged out. Waiting for processes to exit. Sep 5 23:55:34.618150 systemd-logind[1993]: Removed session 17. Sep 5 23:55:39.636893 systemd[1]: Started sshd@17-172.31.23.98:22-139.178.68.195:44472.service - OpenSSH per-connection server daemon (139.178.68.195:44472). Sep 5 23:55:39.834957 sshd[6721]: Accepted publickey for core from 139.178.68.195 port 44472 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:39.838547 sshd[6721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:39.849255 systemd-logind[1993]: New session 18 of user core. Sep 5 23:55:39.855678 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 23:55:40.126616 sshd[6721]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:40.134151 systemd[1]: sshd@17-172.31.23.98:22-139.178.68.195:44472.service: Deactivated successfully. Sep 5 23:55:40.139628 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 23:55:40.141416 systemd-logind[1993]: Session 18 logged out. Waiting for processes to exit. Sep 5 23:55:40.143732 systemd-logind[1993]: Removed session 18. Sep 5 23:55:40.172128 systemd[1]: Started sshd@18-172.31.23.98:22-139.178.68.195:35858.service - OpenSSH per-connection server daemon (139.178.68.195:35858). Sep 5 23:55:40.360631 sshd[6734]: Accepted publickey for core from 139.178.68.195 port 35858 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:40.363716 sshd[6734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:40.375994 systemd-logind[1993]: New session 19 of user core. Sep 5 23:55:40.383716 systemd[1]: Started session-19.scope - Session 19 of User core. 
Sep 5 23:55:41.017574 sshd[6734]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:41.026250 systemd[1]: sshd@18-172.31.23.98:22-139.178.68.195:35858.service: Deactivated successfully. Sep 5 23:55:41.032910 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 23:55:41.036219 systemd-logind[1993]: Session 19 logged out. Waiting for processes to exit. Sep 5 23:55:41.039240 systemd-logind[1993]: Removed session 19. Sep 5 23:55:41.066749 systemd[1]: Started sshd@19-172.31.23.98:22-139.178.68.195:35862.service - OpenSSH per-connection server daemon (139.178.68.195:35862). Sep 5 23:55:41.263787 sshd[6745]: Accepted publickey for core from 139.178.68.195 port 35862 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:41.267315 sshd[6745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:41.277536 systemd-logind[1993]: New session 20 of user core. Sep 5 23:55:41.285713 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 23:55:42.660433 sshd[6745]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:42.676055 systemd[1]: sshd@19-172.31.23.98:22-139.178.68.195:35862.service: Deactivated successfully. Sep 5 23:55:42.689766 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 23:55:42.693485 systemd-logind[1993]: Session 20 logged out. Waiting for processes to exit. Sep 5 23:55:42.713987 systemd[1]: Started sshd@20-172.31.23.98:22-139.178.68.195:35870.service - OpenSSH per-connection server daemon (139.178.68.195:35870). Sep 5 23:55:42.716758 systemd-logind[1993]: Removed session 20. Sep 5 23:55:42.904477 sshd[6785]: Accepted publickey for core from 139.178.68.195 port 35870 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:42.907745 sshd[6785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:42.917970 systemd-logind[1993]: New session 21 of user core. Sep 5 23:55:42.928685 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 23:55:43.518402 sshd[6785]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:43.528153 systemd[1]: sshd@20-172.31.23.98:22-139.178.68.195:35870.service: Deactivated successfully. Sep 5 23:55:43.535062 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 23:55:43.538734 systemd-logind[1993]: Session 21 logged out. Waiting for processes to exit. Sep 5 23:55:43.560922 systemd[1]: Started sshd@21-172.31.23.98:22-139.178.68.195:35874.service - OpenSSH per-connection server daemon (139.178.68.195:35874). Sep 5 23:55:43.563448 systemd-logind[1993]: Removed session 21. Sep 5 23:55:43.747704 sshd[6796]: Accepted publickey for core from 139.178.68.195 port 35874 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:43.750779 sshd[6796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:43.760181 systemd-logind[1993]: New session 22 of user core. Sep 5 23:55:43.768123 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 5 23:55:44.025787 sshd[6796]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:44.032059 systemd[1]: sshd@21-172.31.23.98:22-139.178.68.195:35874.service: Deactivated successfully. Sep 5 23:55:44.038094 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 23:55:44.042019 systemd-logind[1993]: Session 22 logged out. Waiting for processes to exit. Sep 5 23:55:44.045040 systemd-logind[1993]: Removed session 22. 
Sep 5 23:55:48.157682 systemd[1]: run-containerd-runc-k8s.io-51cc284f2e7801fb17eb35014d789b058b86842a805398e96bea4997564bcdf2-runc.A2IHSC.mount: Deactivated successfully. Sep 5 23:55:49.067951 systemd[1]: Started sshd@22-172.31.23.98:22-139.178.68.195:35888.service - OpenSSH per-connection server daemon (139.178.68.195:35888). Sep 5 23:55:49.255057 sshd[6832]: Accepted publickey for core from 139.178.68.195 port 35888 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:49.263067 sshd[6832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:49.274188 systemd-logind[1993]: New session 23 of user core. Sep 5 23:55:49.279825 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 23:55:49.571276 sshd[6832]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:49.579509 systemd[1]: sshd@22-172.31.23.98:22-139.178.68.195:35888.service: Deactivated successfully. Sep 5 23:55:49.584656 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 23:55:49.588096 systemd-logind[1993]: Session 23 logged out. Waiting for processes to exit. Sep 5 23:55:49.590839 systemd-logind[1993]: Removed session 23. Sep 5 23:55:54.622876 systemd[1]: Started sshd@23-172.31.23.98:22-139.178.68.195:43088.service - OpenSSH per-connection server daemon (139.178.68.195:43088). Sep 5 23:55:54.820649 sshd[6866]: Accepted publickey for core from 139.178.68.195 port 43088 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:55:54.822887 sshd[6866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:55:54.832704 systemd-logind[1993]: New session 24 of user core. Sep 5 23:55:54.841967 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 5 23:55:55.115919 sshd[6866]: pam_unix(sshd:session): session closed for user core Sep 5 23:55:55.124278 systemd[1]: sshd@23-172.31.23.98:22-139.178.68.195:43088.service: Deactivated successfully. Sep 5 23:55:55.132069 systemd[1]: session-24.scope: Deactivated successfully. Sep 5 23:55:55.134598 systemd-logind[1993]: Session 24 logged out. Waiting for processes to exit. Sep 5 23:55:55.136396 systemd-logind[1993]: Removed session 24. Sep 5 23:56:00.161878 systemd[1]: Started sshd@24-172.31.23.98:22-139.178.68.195:50144.service - OpenSSH per-connection server daemon (139.178.68.195:50144). Sep 5 23:56:00.360397 sshd[6901]: Accepted publickey for core from 139.178.68.195 port 50144 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:56:00.364486 sshd[6901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:00.381213 systemd-logind[1993]: New session 25 of user core. Sep 5 23:56:00.387179 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 5 23:56:00.719701 sshd[6901]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:00.729940 systemd[1]: sshd@24-172.31.23.98:22-139.178.68.195:50144.service: Deactivated successfully. Sep 5 23:56:00.737533 systemd[1]: session-25.scope: Deactivated successfully. Sep 5 23:56:00.742248 systemd-logind[1993]: Session 25 logged out. Waiting for processes to exit. Sep 5 23:56:00.747738 systemd-logind[1993]: Removed session 25. Sep 5 23:56:05.763880 systemd[1]: Started sshd@25-172.31.23.98:22-139.178.68.195:50150.service - OpenSSH per-connection server daemon (139.178.68.195:50150). 
Sep 5 23:56:05.965368 sshd[6918]: Accepted publickey for core from 139.178.68.195 port 50150 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:56:05.969794 sshd[6918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:05.982540 systemd-logind[1993]: New session 26 of user core. Sep 5 23:56:05.990670 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 5 23:56:06.303464 sshd[6918]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:06.310892 systemd[1]: sshd@25-172.31.23.98:22-139.178.68.195:50150.service: Deactivated successfully. Sep 5 23:56:06.318207 systemd[1]: session-26.scope: Deactivated successfully. Sep 5 23:56:06.323663 systemd-logind[1993]: Session 26 logged out. Waiting for processes to exit. Sep 5 23:56:06.327327 systemd-logind[1993]: Removed session 26. Sep 5 23:56:07.252437 containerd[2003]: time="2025-09-05T23:56:07.252144981Z" level=info msg="StopPodSandbox for \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\"" Sep 5 23:56:07.441201 containerd[2003]: 2025-09-05 23:56:07.347 [WARNING][6938] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a29a0520-465f-4a15-9908-cc439e2ca7ce", ResourceVersion:"1225", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314", Pod:"csi-node-driver-5d9jn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali12a56036810", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:07.441201 containerd[2003]: 2025-09-05 23:56:07.348 [INFO][6938] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Sep 5 23:56:07.441201 containerd[2003]: 2025-09-05 23:56:07.348 [INFO][6938] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" iface="eth0" netns="" Sep 5 23:56:07.441201 containerd[2003]: 2025-09-05 23:56:07.348 [INFO][6938] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Sep 5 23:56:07.441201 containerd[2003]: 2025-09-05 23:56:07.348 [INFO][6938] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Sep 5 23:56:07.441201 containerd[2003]: 2025-09-05 23:56:07.414 [INFO][6945] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" HandleID="k8s-pod-network.9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Workload="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" Sep 5 23:56:07.441201 containerd[2003]: 2025-09-05 23:56:07.414 [INFO][6945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:07.441201 containerd[2003]: 2025-09-05 23:56:07.414 [INFO][6945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:07.441201 containerd[2003]: 2025-09-05 23:56:07.430 [WARNING][6945] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" HandleID="k8s-pod-network.9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Workload="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" Sep 5 23:56:07.441201 containerd[2003]: 2025-09-05 23:56:07.430 [INFO][6945] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" HandleID="k8s-pod-network.9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Workload="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" Sep 5 23:56:07.441201 containerd[2003]: 2025-09-05 23:56:07.433 [INFO][6945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:07.441201 containerd[2003]: 2025-09-05 23:56:07.435 [INFO][6938] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Sep 5 23:56:07.441201 containerd[2003]: time="2025-09-05T23:56:07.441104134Z" level=info msg="TearDown network for sandbox \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\" successfully" Sep 5 23:56:07.441201 containerd[2003]: time="2025-09-05T23:56:07.441144874Z" level=info msg="StopPodSandbox for \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\" returns successfully" Sep 5 23:56:07.444560 containerd[2003]: time="2025-09-05T23:56:07.444480694Z" level=info msg="RemovePodSandbox for \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\"" Sep 5 23:56:07.444730 containerd[2003]: time="2025-09-05T23:56:07.444585550Z" level=info msg="Forcibly stopping sandbox \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\"" Sep 5 23:56:07.626627 containerd[2003]: 2025-09-05 23:56:07.530 [WARNING][6959] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a29a0520-465f-4a15-9908-cc439e2ca7ce", ResourceVersion:"1225", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"e4335ae994e5fc5410d363bb4231a92ea806fd8af28b32ee1f573f1df8669314", Pod:"csi-node-driver-5d9jn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.30.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali12a56036810", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:07.626627 containerd[2003]: 2025-09-05 23:56:07.530 [INFO][6959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Sep 5 23:56:07.626627 containerd[2003]: 2025-09-05 23:56:07.530 [INFO][6959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" iface="eth0" netns="" Sep 5 23:56:07.626627 containerd[2003]: 2025-09-05 23:56:07.530 [INFO][6959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Sep 5 23:56:07.626627 containerd[2003]: 2025-09-05 23:56:07.530 [INFO][6959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Sep 5 23:56:07.626627 containerd[2003]: 2025-09-05 23:56:07.584 [INFO][6966] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" HandleID="k8s-pod-network.9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Workload="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" Sep 5 23:56:07.626627 containerd[2003]: 2025-09-05 23:56:07.585 [INFO][6966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:07.626627 containerd[2003]: 2025-09-05 23:56:07.585 [INFO][6966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:07.626627 containerd[2003]: 2025-09-05 23:56:07.611 [WARNING][6966] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" HandleID="k8s-pod-network.9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Workload="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" Sep 5 23:56:07.626627 containerd[2003]: 2025-09-05 23:56:07.612 [INFO][6966] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" HandleID="k8s-pod-network.9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Workload="ip--172--31--23--98-k8s-csi--node--driver--5d9jn-eth0" Sep 5 23:56:07.626627 containerd[2003]: 2025-09-05 23:56:07.619 [INFO][6966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:07.626627 containerd[2003]: 2025-09-05 23:56:07.622 [INFO][6959] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c" Sep 5 23:56:07.628642 containerd[2003]: time="2025-09-05T23:56:07.626754827Z" level=info msg="TearDown network for sandbox \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\" successfully" Sep 5 23:56:07.672174 containerd[2003]: time="2025-09-05T23:56:07.672080327Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:56:07.672374 containerd[2003]: time="2025-09-05T23:56:07.672222983Z" level=info msg="RemovePodSandbox \"9600553c922bfcc190a8a09cef8ef88527de662265f987674cdf39d13513e64c\" returns successfully" Sep 5 23:56:07.673183 containerd[2003]: time="2025-09-05T23:56:07.673036067Z" level=info msg="StopPodSandbox for \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\"" Sep 5 23:56:07.840996 containerd[2003]: 2025-09-05 23:56:07.753 [WARNING][6981] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0", GenerateName:"calico-apiserver-85cb674cb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"b45a6056-0154-4b46-9f54-64314ddc0dd5", ResourceVersion:"1266", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85cb674cb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54", Pod:"calico-apiserver-85cb674cb8-xmj4t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02c16042e63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:07.840996 containerd[2003]: 2025-09-05 23:56:07.754 [INFO][6981] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Sep 5 23:56:07.840996 containerd[2003]: 2025-09-05 23:56:07.754 [INFO][6981] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" iface="eth0" netns="" Sep 5 23:56:07.840996 containerd[2003]: 2025-09-05 23:56:07.754 [INFO][6981] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Sep 5 23:56:07.840996 containerd[2003]: 2025-09-05 23:56:07.754 [INFO][6981] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Sep 5 23:56:07.840996 containerd[2003]: 2025-09-05 23:56:07.806 [INFO][6988] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" HandleID="k8s-pod-network.c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" Sep 5 23:56:07.840996 containerd[2003]: 2025-09-05 23:56:07.806 [INFO][6988] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:07.840996 containerd[2003]: 2025-09-05 23:56:07.806 [INFO][6988] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:07.840996 containerd[2003]: 2025-09-05 23:56:07.826 [WARNING][6988] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" HandleID="k8s-pod-network.c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" Sep 5 23:56:07.840996 containerd[2003]: 2025-09-05 23:56:07.826 [INFO][6988] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" HandleID="k8s-pod-network.c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" Sep 5 23:56:07.840996 containerd[2003]: 2025-09-05 23:56:07.831 [INFO][6988] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:07.840996 containerd[2003]: 2025-09-05 23:56:07.835 [INFO][6981] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Sep 5 23:56:07.840996 containerd[2003]: time="2025-09-05T23:56:07.840801012Z" level=info msg="TearDown network for sandbox \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\" successfully" Sep 5 23:56:07.840996 containerd[2003]: time="2025-09-05T23:56:07.840844284Z" level=info msg="StopPodSandbox for \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\" returns successfully" Sep 5 23:56:07.842075 containerd[2003]: time="2025-09-05T23:56:07.841580676Z" level=info msg="RemovePodSandbox for \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\"" Sep 5 23:56:07.842075 containerd[2003]: time="2025-09-05T23:56:07.841633848Z" level=info msg="Forcibly stopping sandbox \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\"" Sep 5 23:56:08.105418 containerd[2003]: 2025-09-05 23:56:07.987 [WARNING][7002] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0", GenerateName:"calico-apiserver-85cb674cb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"b45a6056-0154-4b46-9f54-64314ddc0dd5", ResourceVersion:"1266", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 23, 54, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85cb674cb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-98", ContainerID:"382bb6492616fa244aabdadc08928c442132291031f7776c0089df9bf2c24c54", Pod:"calico-apiserver-85cb674cb8-xmj4t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.30.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali02c16042e63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 23:56:08.105418 containerd[2003]: 2025-09-05 23:56:07.988 [INFO][7002] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Sep 5 23:56:08.105418 containerd[2003]: 2025-09-05 23:56:07.988 [INFO][7002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" iface="eth0" netns="" Sep 5 23:56:08.105418 containerd[2003]: 2025-09-05 23:56:07.988 [INFO][7002] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Sep 5 23:56:08.105418 containerd[2003]: 2025-09-05 23:56:07.988 [INFO][7002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Sep 5 23:56:08.105418 containerd[2003]: 2025-09-05 23:56:08.069 [INFO][7011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" HandleID="k8s-pod-network.c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" Sep 5 23:56:08.105418 containerd[2003]: 2025-09-05 23:56:08.069 [INFO][7011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 23:56:08.105418 containerd[2003]: 2025-09-05 23:56:08.070 [INFO][7011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 23:56:08.105418 containerd[2003]: 2025-09-05 23:56:08.093 [WARNING][7011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" HandleID="k8s-pod-network.c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" Sep 5 23:56:08.105418 containerd[2003]: 2025-09-05 23:56:08.093 [INFO][7011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" HandleID="k8s-pod-network.c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Workload="ip--172--31--23--98-k8s-calico--apiserver--85cb674cb8--xmj4t-eth0" Sep 5 23:56:08.105418 containerd[2003]: 2025-09-05 23:56:08.097 [INFO][7011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 23:56:08.105418 containerd[2003]: 2025-09-05 23:56:08.101 [INFO][7002] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e" Sep 5 23:56:08.108644 containerd[2003]: time="2025-09-05T23:56:08.106556481Z" level=info msg="TearDown network for sandbox \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\" successfully" Sep 5 23:56:08.116368 containerd[2003]: time="2025-09-05T23:56:08.116228973Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 23:56:08.116741 containerd[2003]: time="2025-09-05T23:56:08.116624853Z" level=info msg="RemovePodSandbox \"c40da9855abc049689ade8a35ee0afba8e983d5a3db3bd891a978d3bc9512d7e\" returns successfully" Sep 5 23:56:11.348569 systemd[1]: Started sshd@26-172.31.23.98:22-139.178.68.195:36616.service - OpenSSH per-connection server daemon (139.178.68.195:36616). Sep 5 23:56:11.538096 sshd[7018]: Accepted publickey for core from 139.178.68.195 port 36616 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:56:11.542593 sshd[7018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:11.553897 systemd-logind[1993]: New session 27 of user core. Sep 5 23:56:11.558675 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 5 23:56:11.859955 sshd[7018]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:11.869061 systemd[1]: sshd@26-172.31.23.98:22-139.178.68.195:36616.service: Deactivated successfully. Sep 5 23:56:11.869556 systemd-logind[1993]: Session 27 logged out. Waiting for processes to exit. Sep 5 23:56:11.878365 systemd[1]: session-27.scope: Deactivated successfully. Sep 5 23:56:11.882937 systemd-logind[1993]: Removed session 27. Sep 5 23:56:16.906425 systemd[1]: Started sshd@27-172.31.23.98:22-139.178.68.195:36624.service - OpenSSH per-connection server daemon (139.178.68.195:36624). Sep 5 23:56:17.102227 sshd[7052]: Accepted publickey for core from 139.178.68.195 port 36624 ssh2: RSA SHA256:vADW7QTWQ4wuHdKF8jUL6KxfiBYQUAY2qUkO4wqdhJM Sep 5 23:56:17.107066 sshd[7052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 23:56:17.128514 systemd-logind[1993]: New session 28 of user core. Sep 5 23:56:17.134005 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 5 23:56:17.424193 sshd[7052]: pam_unix(sshd:session): session closed for user core Sep 5 23:56:17.431945 systemd[1]: sshd@27-172.31.23.98:22-139.178.68.195:36624.service: Deactivated successfully.
Sep 5 23:56:17.437841 systemd[1]: session-28.scope: Deactivated successfully. Sep 5 23:56:17.439836 systemd-logind[1993]: Session 28 logged out. Waiting for processes to exit. Sep 5 23:56:17.443801 systemd-logind[1993]: Removed session 28. Sep 5 23:56:31.959404 systemd[1]: cri-containerd-136ccf76cf4b721baf4b8429925cc6d21e694515e3dcd5ba65b3ac490984d74b.scope: Deactivated successfully. Sep 5 23:56:31.961661 systemd[1]: cri-containerd-136ccf76cf4b721baf4b8429925cc6d21e694515e3dcd5ba65b3ac490984d74b.scope: Consumed 5.996s CPU time, 18.2M memory peak, 0B memory swap peak. Sep 5 23:56:32.018849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-136ccf76cf4b721baf4b8429925cc6d21e694515e3dcd5ba65b3ac490984d74b-rootfs.mount: Deactivated successfully. Sep 5 23:56:32.043035 containerd[2003]: time="2025-09-05T23:56:32.013631912Z" level=info msg="shim disconnected" id=136ccf76cf4b721baf4b8429925cc6d21e694515e3dcd5ba65b3ac490984d74b namespace=k8s.io Sep 5 23:56:32.043035 containerd[2003]: time="2025-09-05T23:56:32.043017092Z" level=warning msg="cleaning up after shim disconnected" id=136ccf76cf4b721baf4b8429925cc6d21e694515e3dcd5ba65b3ac490984d74b namespace=k8s.io Sep 5 23:56:32.043689 containerd[2003]: time="2025-09-05T23:56:32.043049144Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:56:32.125601 systemd[1]: cri-containerd-5e095b30f2fd85cd377cf5f415593514aeb3937ce5b70b4f216383a686857de3.scope: Deactivated successfully. Sep 5 23:56:32.126070 systemd[1]: cri-containerd-5e095b30f2fd85cd377cf5f415593514aeb3937ce5b70b4f216383a686857de3.scope: Consumed 27.271s CPU time. Sep 5 23:56:32.171834 containerd[2003]: time="2025-09-05T23:56:32.171744621Z" level=info msg="shim disconnected" id=5e095b30f2fd85cd377cf5f415593514aeb3937ce5b70b4f216383a686857de3 namespace=k8s.io Sep 5 23:56:32.171834 containerd[2003]: time="2025-09-05T23:56:32.171821241Z" level=warning msg="cleaning up after shim disconnected" id=5e095b30f2fd85cd377cf5f415593514aeb3937ce5b70b4f216383a686857de3 namespace=k8s.io Sep 5 23:56:32.172105 containerd[2003]: time="2025-09-05T23:56:32.171843537Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:56:32.178979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e095b30f2fd85cd377cf5f415593514aeb3937ce5b70b4f216383a686857de3-rootfs.mount: Deactivated successfully. 
Sep 5 23:56:32.463297 kubelet[3518]: I0905 23:56:32.463178 3518 scope.go:117] "RemoveContainer" containerID="5e095b30f2fd85cd377cf5f415593514aeb3937ce5b70b4f216383a686857de3" Sep 5 23:56:32.470172 containerd[2003]: time="2025-09-05T23:56:32.469748194Z" level=info msg="CreateContainer within sandbox \"566f588f5f0954bc4c7d193b345684491a4b5b926ce40fd2f7b9220f7693fa31\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 5 23:56:32.473960 kubelet[3518]: I0905 23:56:32.472060 3518 scope.go:117] "RemoveContainer" containerID="136ccf76cf4b721baf4b8429925cc6d21e694515e3dcd5ba65b3ac490984d74b" Sep 5 23:56:32.491696 containerd[2003]: time="2025-09-05T23:56:32.491637982Z" level=info msg="CreateContainer within sandbox \"18a3c05440c0b11816f7164f0afa21879319e20da9290264fe5b159a487bbf80\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 5 23:56:32.502782 containerd[2003]: time="2025-09-05T23:56:32.502724351Z" level=info msg="CreateContainer within sandbox \"566f588f5f0954bc4c7d193b345684491a4b5b926ce40fd2f7b9220f7693fa31\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"f38d8f04de68f31ad855d7a6aa8e6dafb3ee92babde887fd587eca0a8fa860d3\"" Sep 5 23:56:32.504266 containerd[2003]: time="2025-09-05T23:56:32.504219311Z" level=info msg="StartContainer for \"f38d8f04de68f31ad855d7a6aa8e6dafb3ee92babde887fd587eca0a8fa860d3\"" Sep 5 23:56:32.542063 containerd[2003]: time="2025-09-05T23:56:32.541727303Z" level=info msg="CreateContainer within sandbox \"18a3c05440c0b11816f7164f0afa21879319e20da9290264fe5b159a487bbf80\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"40f87b7caa05092f82c717162d2f5668b251e8c7be7fa00988fb0b71f9fc1e52\"" Sep 5 23:56:32.542902 containerd[2003]: time="2025-09-05T23:56:32.542846087Z" level=info msg="StartContainer for \"40f87b7caa05092f82c717162d2f5668b251e8c7be7fa00988fb0b71f9fc1e52\"" Sep 5 23:56:32.566703 systemd[1]: Started cri-containerd-f38d8f04de68f31ad855d7a6aa8e6dafb3ee92babde887fd587eca0a8fa860d3.scope - libcontainer container f38d8f04de68f31ad855d7a6aa8e6dafb3ee92babde887fd587eca0a8fa860d3. Sep 5 23:56:32.615874 systemd[1]: Started cri-containerd-40f87b7caa05092f82c717162d2f5668b251e8c7be7fa00988fb0b71f9fc1e52.scope - libcontainer container 40f87b7caa05092f82c717162d2f5668b251e8c7be7fa00988fb0b71f9fc1e52. Sep 5 23:56:32.651863 containerd[2003]: time="2025-09-05T23:56:32.651797627Z" level=info msg="StartContainer for \"f38d8f04de68f31ad855d7a6aa8e6dafb3ee92babde887fd587eca0a8fa860d3\" returns successfully" Sep 5 23:56:32.714406 containerd[2003]: time="2025-09-05T23:56:32.714046392Z" level=info msg="StartContainer for \"40f87b7caa05092f82c717162d2f5668b251e8c7be7fa00988fb0b71f9fc1e52\" returns successfully" Sep 5 23:56:33.028463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount143158032.mount: Deactivated successfully. Sep 5 23:56:34.953926 kubelet[3518]: E0905 23:56:34.953831 3518 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-98?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 5 23:56:35.940264 systemd[1]: cri-containerd-1acacc9217baff95e7c2818ae8a205f5d13dc159c9e3ecf3272afacb0269ce1f.scope: Deactivated successfully. Sep 5 23:56:35.942707 systemd[1]: cri-containerd-1acacc9217baff95e7c2818ae8a205f5d13dc159c9e3ecf3272afacb0269ce1f.scope: Consumed 5.047s CPU time, 15.6M memory peak, 0B memory swap peak. 
Sep 5 23:56:35.985123 containerd[2003]: time="2025-09-05T23:56:35.985019200Z" level=info msg="shim disconnected" id=1acacc9217baff95e7c2818ae8a205f5d13dc159c9e3ecf3272afacb0269ce1f namespace=k8s.io Sep 5 23:56:35.985123 containerd[2003]: time="2025-09-05T23:56:35.985119496Z" level=warning msg="cleaning up after shim disconnected" id=1acacc9217baff95e7c2818ae8a205f5d13dc159c9e3ecf3272afacb0269ce1f namespace=k8s.io Sep 5 23:56:35.987133 containerd[2003]: time="2025-09-05T23:56:35.985141756Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:56:35.989396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1acacc9217baff95e7c2818ae8a205f5d13dc159c9e3ecf3272afacb0269ce1f-rootfs.mount: Deactivated successfully. Sep 5 23:56:36.498770 kubelet[3518]: I0905 23:56:36.498716 3518 scope.go:117] "RemoveContainer" containerID="1acacc9217baff95e7c2818ae8a205f5d13dc159c9e3ecf3272afacb0269ce1f" Sep 5 23:56:36.504374 containerd[2003]: time="2025-09-05T23:56:36.504130106Z" level=info msg="CreateContainer within sandbox \"9e55655471a95584d347cadf16a9d22f47e4008164637cfbec38f2160ce9fb5e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 5 23:56:36.545665 containerd[2003]: time="2025-09-05T23:56:36.545179935Z" level=info msg="CreateContainer within sandbox \"9e55655471a95584d347cadf16a9d22f47e4008164637cfbec38f2160ce9fb5e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b3727bd6cd6399d6965fad5b2755c4310f0d3a1221dcfca7037ab92d12291974\"" Sep 5 23:56:36.546985 containerd[2003]: time="2025-09-05T23:56:36.546887271Z" level=info msg="StartContainer for \"b3727bd6cd6399d6965fad5b2755c4310f0d3a1221dcfca7037ab92d12291974\"" Sep 5 23:56:36.621959 systemd[1]: Started cri-containerd-b3727bd6cd6399d6965fad5b2755c4310f0d3a1221dcfca7037ab92d12291974.scope - libcontainer container b3727bd6cd6399d6965fad5b2755c4310f0d3a1221dcfca7037ab92d12291974. Sep 5 23:56:36.702632 containerd[2003]: time="2025-09-05T23:56:36.702556095Z" level=info msg="StartContainer for \"b3727bd6cd6399d6965fad5b2755c4310f0d3a1221dcfca7037ab92d12291974\" returns successfully" Sep 5 23:56:44.169477 systemd[1]: cri-containerd-f38d8f04de68f31ad855d7a6aa8e6dafb3ee92babde887fd587eca0a8fa860d3.scope: Deactivated successfully. Sep 5 23:56:44.216566 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f38d8f04de68f31ad855d7a6aa8e6dafb3ee92babde887fd587eca0a8fa860d3-rootfs.mount: Deactivated successfully. 
Sep 5 23:56:44.237462 containerd[2003]: time="2025-09-05T23:56:44.237036189Z" level=info msg="shim disconnected" id=f38d8f04de68f31ad855d7a6aa8e6dafb3ee92babde887fd587eca0a8fa860d3 namespace=k8s.io Sep 5 23:56:44.238972 containerd[2003]: time="2025-09-05T23:56:44.238330461Z" level=warning msg="cleaning up after shim disconnected" id=f38d8f04de68f31ad855d7a6aa8e6dafb3ee92babde887fd587eca0a8fa860d3 namespace=k8s.io Sep 5 23:56:44.238972 containerd[2003]: time="2025-09-05T23:56:44.238501749Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 23:56:44.526323 kubelet[3518]: I0905 23:56:44.525783 3518 scope.go:117] "RemoveContainer" containerID="5e095b30f2fd85cd377cf5f415593514aeb3937ce5b70b4f216383a686857de3" Sep 5 23:56:44.526964 kubelet[3518]: I0905 23:56:44.526364 3518 scope.go:117] "RemoveContainer" containerID="f38d8f04de68f31ad855d7a6aa8e6dafb3ee92babde887fd587eca0a8fa860d3" Sep 5 23:56:44.526964 kubelet[3518]: E0905 23:56:44.526628 3518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-755d956888-xkskx_tigera-operator(1006d99f-beb6-4635-9c2b-26c746882cfd)\"" pod="tigera-operator/tigera-operator-755d956888-xkskx" podUID="1006d99f-beb6-4635-9c2b-26c746882cfd" Sep 5 23:56:44.529119 containerd[2003]: time="2025-09-05T23:56:44.529057846Z" level=info msg="RemoveContainer for \"5e095b30f2fd85cd377cf5f415593514aeb3937ce5b70b4f216383a686857de3\"" Sep 5 23:56:44.536141 containerd[2003]: time="2025-09-05T23:56:44.536064730Z" level=info msg="RemoveContainer for \"5e095b30f2fd85cd377cf5f415593514aeb3937ce5b70b4f216383a686857de3\" returns successfully" Sep 5 23:56:44.957505 kubelet[3518]: E0905 23:56:44.954776 3518 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-98?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"