Aug 13 00:18:41.270514 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Aug 13 00:18:41.270563 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025
Aug 13 00:18:41.270589 kernel: KASLR disabled due to lack of seed
Aug 13 00:18:41.270606 kernel: efi: EFI v2.7 by EDK II
Aug 13 00:18:41.270622 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18
Aug 13 00:18:41.270637 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:18:41.270655 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Aug 13 00:18:41.270671 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Aug 13 00:18:41.270687 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Aug 13 00:18:41.270703 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Aug 13 00:18:41.270724 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Aug 13 00:18:41.270764 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Aug 13 00:18:41.270784 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Aug 13 00:18:41.270801 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Aug 13 00:18:41.270821 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Aug 13 00:18:41.270844 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Aug 13 00:18:41.270861 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Aug 13 00:18:41.270879 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Aug 13 00:18:41.270896 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Aug 13 00:18:41.270914 kernel: printk: bootconsole [uart0] enabled
Aug 13 00:18:41.270931 kernel: NUMA: Failed to initialise from firmware
Aug 13 00:18:41.270948 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Aug 13 00:18:41.270965 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Aug 13 00:18:41.270982 kernel: Zone ranges:
Aug 13 00:18:41.270999 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Aug 13 00:18:41.271015 kernel: DMA32 empty
Aug 13 00:18:41.271037 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Aug 13 00:18:41.271055 kernel: Movable zone start for each node
Aug 13 00:18:41.271071 kernel: Early memory node ranges
Aug 13 00:18:41.271087 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Aug 13 00:18:41.271104 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Aug 13 00:18:41.271120 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Aug 13 00:18:41.271137 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Aug 13 00:18:41.271153 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Aug 13 00:18:41.271170 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Aug 13 00:18:41.271187 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Aug 13 00:18:41.271203 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Aug 13 00:18:41.271220 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Aug 13 00:18:41.271240 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Aug 13 00:18:41.271258 kernel: psci: probing for conduit method from ACPI.
Aug 13 00:18:41.271282 kernel: psci: PSCIv1.0 detected in firmware.
Aug 13 00:18:41.271300 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 13 00:18:41.271318 kernel: psci: Trusted OS migration not required
Aug 13 00:18:41.271340 kernel: psci: SMC Calling Convention v1.1
Aug 13 00:18:41.271358 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Aug 13 00:18:41.271376 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Aug 13 00:18:41.271393 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Aug 13 00:18:41.271412 kernel: pcpu-alloc: [0] 0 [0] 1
Aug 13 00:18:41.271429 kernel: Detected PIPT I-cache on CPU0
Aug 13 00:18:41.271447 kernel: CPU features: detected: GIC system register CPU interface
Aug 13 00:18:41.271465 kernel: CPU features: detected: Spectre-v2
Aug 13 00:18:41.271483 kernel: CPU features: detected: Spectre-v3a
Aug 13 00:18:41.271501 kernel: CPU features: detected: Spectre-BHB
Aug 13 00:18:41.271519 kernel: CPU features: detected: ARM erratum 1742098
Aug 13 00:18:41.271541 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Aug 13 00:18:41.271559 kernel: alternatives: applying boot alternatives
Aug 13 00:18:41.271579 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:18:41.271598 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:18:41.271617 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:18:41.271635 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:18:41.271652 kernel: Fallback order for Node 0: 0
Aug 13 00:18:41.271670 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Aug 13 00:18:41.271688 kernel: Policy zone: Normal
Aug 13 00:18:41.271706 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:18:41.271724 kernel: software IO TLB: area num 2.
Aug 13 00:18:41.273800 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Aug 13 00:18:41.273831 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved)
Aug 13 00:18:41.273850 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:18:41.273868 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:18:41.273887 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:18:41.273906 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:18:41.273924 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:18:41.273942 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:18:41.273960 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:18:41.273978 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:18:41.273996 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 13 00:18:41.274025 kernel: GICv3: 96 SPIs implemented
Aug 13 00:18:41.274043 kernel: GICv3: 0 Extended SPIs implemented
Aug 13 00:18:41.274061 kernel: Root IRQ handler: gic_handle_irq
Aug 13 00:18:41.274079 kernel: GICv3: GICv3 features: 16 PPIs
Aug 13 00:18:41.274097 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Aug 13 00:18:41.274115 kernel: ITS [mem 0x10080000-0x1009ffff]
Aug 13 00:18:41.274133 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Aug 13 00:18:41.274151 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Aug 13 00:18:41.274169 kernel: GICv3: using LPI property table @0x00000004000d0000
Aug 13 00:18:41.274187 kernel: ITS: Using hypervisor restricted LPI range [128]
Aug 13 00:18:41.274205 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Aug 13 00:18:41.274223 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:18:41.274246 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Aug 13 00:18:41.274265 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Aug 13 00:18:41.274283 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Aug 13 00:18:41.274302 kernel: Console: colour dummy device 80x25
Aug 13 00:18:41.274322 kernel: printk: console [tty1] enabled
Aug 13 00:18:41.274353 kernel: ACPI: Core revision 20230628
Aug 13 00:18:41.274382 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Aug 13 00:18:41.274402 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:18:41.274421 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 00:18:41.274447 kernel: landlock: Up and running.
Aug 13 00:18:41.274466 kernel: SELinux: Initializing.
Aug 13 00:18:41.274485 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:18:41.274503 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:18:41.274522 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:18:41.274540 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:18:41.274558 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:18:41.274577 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:18:41.274595 kernel: Platform MSI: ITS@0x10080000 domain created
Aug 13 00:18:41.274618 kernel: PCI/MSI: ITS@0x10080000 domain created
Aug 13 00:18:41.274637 kernel: Remapping and enabling EFI services.
Aug 13 00:18:41.274654 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:18:41.274672 kernel: Detected PIPT I-cache on CPU1
Aug 13 00:18:41.274690 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Aug 13 00:18:41.274708 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Aug 13 00:18:41.274727 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Aug 13 00:18:41.274769 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:18:41.274790 kernel: SMP: Total of 2 processors activated.
Aug 13 00:18:41.274815 kernel: CPU features: detected: 32-bit EL0 Support
Aug 13 00:18:41.274833 kernel: CPU features: detected: 32-bit EL1 Support
Aug 13 00:18:41.274852 kernel: CPU features: detected: CRC32 instructions
Aug 13 00:18:41.274883 kernel: CPU: All CPU(s) started at EL1
Aug 13 00:18:41.274907 kernel: alternatives: applying system-wide alternatives
Aug 13 00:18:41.274926 kernel: devtmpfs: initialized
Aug 13 00:18:41.274946 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:18:41.274965 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:18:41.274984 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:18:41.275004 kernel: SMBIOS 3.0.0 present.
Aug 13 00:18:41.275028 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Aug 13 00:18:41.275047 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:18:41.275066 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 13 00:18:41.275086 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 13 00:18:41.275105 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 13 00:18:41.275124 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:18:41.275143 kernel: audit: type=2000 audit(0.306:1): state=initialized audit_enabled=0 res=1
Aug 13 00:18:41.275166 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:18:41.275186 kernel: cpuidle: using governor menu
Aug 13 00:18:41.275205 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 13 00:18:41.275224 kernel: ASID allocator initialised with 65536 entries
Aug 13 00:18:41.275243 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:18:41.275262 kernel: Serial: AMBA PL011 UART driver
Aug 13 00:18:41.275281 kernel: Modules: 17488 pages in range for non-PLT usage
Aug 13 00:18:41.275300 kernel: Modules: 509008 pages in range for PLT usage
Aug 13 00:18:41.275319 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:18:41.275343 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:18:41.275362 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 13 00:18:41.275381 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 13 00:18:41.275400 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:18:41.275419 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:18:41.275438 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 13 00:18:41.275457 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 13 00:18:41.275476 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:18:41.275495 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:18:41.275519 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:18:41.275538 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:18:41.275557 kernel: ACPI: Interpreter enabled
Aug 13 00:18:41.275576 kernel: ACPI: Using GIC for interrupt routing
Aug 13 00:18:41.275595 kernel: ACPI: MCFG table detected, 1 entries
Aug 13 00:18:41.275614 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Aug 13 00:18:41.278047 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:18:41.278299 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 13 00:18:41.278610 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 13 00:18:41.278903 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Aug 13 00:18:41.279129 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Aug 13 00:18:41.279157 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Aug 13 00:18:41.279178 kernel: acpiphp: Slot [1] registered
Aug 13 00:18:41.279198 kernel: acpiphp: Slot [2] registered
Aug 13 00:18:41.279217 kernel: acpiphp: Slot [3] registered
Aug 13 00:18:41.279235 kernel: acpiphp: Slot [4] registered
Aug 13 00:18:41.279263 kernel: acpiphp: Slot [5] registered
Aug 13 00:18:41.279283 kernel: acpiphp: Slot [6] registered
Aug 13 00:18:41.279301 kernel: acpiphp: Slot [7] registered
Aug 13 00:18:41.279320 kernel: acpiphp: Slot [8] registered
Aug 13 00:18:41.279339 kernel: acpiphp: Slot [9] registered
Aug 13 00:18:41.279358 kernel: acpiphp: Slot [10] registered
Aug 13 00:18:41.279377 kernel: acpiphp: Slot [11] registered
Aug 13 00:18:41.279395 kernel: acpiphp: Slot [12] registered
Aug 13 00:18:41.279416 kernel: acpiphp: Slot [13] registered
Aug 13 00:18:41.279436 kernel: acpiphp: Slot [14] registered
Aug 13 00:18:41.279460 kernel: acpiphp: Slot [15] registered
Aug 13 00:18:41.279479 kernel: acpiphp: Slot [16] registered
Aug 13 00:18:41.279497 kernel: acpiphp: Slot [17] registered
Aug 13 00:18:41.279516 kernel: acpiphp: Slot [18] registered
Aug 13 00:18:41.279535 kernel: acpiphp: Slot [19] registered
Aug 13 00:18:41.279554 kernel: acpiphp: Slot [20] registered
Aug 13 00:18:41.279572 kernel: acpiphp: Slot [21] registered
Aug 13 00:18:41.279591 kernel: acpiphp: Slot [22] registered
Aug 13 00:18:41.279609 kernel: acpiphp: Slot [23] registered
Aug 13 00:18:41.279632 kernel: acpiphp: Slot [24] registered
Aug 13 00:18:41.279652 kernel: acpiphp: Slot [25] registered
Aug 13 00:18:41.279670 kernel: acpiphp: Slot [26] registered
Aug 13 00:18:41.279689 kernel: acpiphp: Slot [27] registered
Aug 13 00:18:41.279707 kernel: acpiphp: Slot [28] registered
Aug 13 00:18:41.279726 kernel: acpiphp: Slot [29] registered
Aug 13 00:18:41.279799 kernel: acpiphp: Slot [30] registered
Aug 13 00:18:41.279824 kernel: acpiphp: Slot [31] registered
Aug 13 00:18:41.279843 kernel: PCI host bridge to bus 0000:00
Aug 13 00:18:41.280095 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Aug 13 00:18:41.280305 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 13 00:18:41.280503 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Aug 13 00:18:41.280701 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Aug 13 00:18:41.281040 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Aug 13 00:18:41.281299 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Aug 13 00:18:41.281536 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Aug 13 00:18:41.282323 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Aug 13 00:18:41.282596 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Aug 13 00:18:41.282847 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Aug 13 00:18:41.283492 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Aug 13 00:18:41.284214 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Aug 13 00:18:41.284596 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Aug 13 00:18:41.284940 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Aug 13 00:18:41.285184 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Aug 13 00:18:41.285413 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Aug 13 00:18:41.285640 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Aug 13 00:18:41.285999 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Aug 13 00:18:41.286232 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Aug 13 00:18:41.286514 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Aug 13 00:18:41.286795 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Aug 13 00:18:41.287015 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 13 00:18:41.287220 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Aug 13 00:18:41.287248 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 13 00:18:41.287268 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 13 00:18:41.287290 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 13 00:18:41.287310 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 13 00:18:41.287329 kernel: iommu: Default domain type: Translated
Aug 13 00:18:41.287359 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 13 00:18:41.287379 kernel: efivars: Registered efivars operations
Aug 13 00:18:41.287399 kernel: vgaarb: loaded
Aug 13 00:18:41.287418 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 13 00:18:41.287438 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:18:41.287457 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:18:41.287476 kernel: pnp: PnP ACPI init
Aug 13 00:18:41.287733 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Aug 13 00:18:41.287818 kernel: pnp: PnP ACPI: found 1 devices
Aug 13 00:18:41.287848 kernel: NET: Registered PF_INET protocol family
Aug 13 00:18:41.287868 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:18:41.287888 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:18:41.287907 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:18:41.287927 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:18:41.287946 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:18:41.287967 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:18:41.287986 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:18:41.288006 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:18:41.288031 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:18:41.288050 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:18:41.288069 kernel: kvm [1]: HYP mode not available
Aug 13 00:18:41.288088 kernel: Initialise system trusted keyrings
Aug 13 00:18:41.288108 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:18:41.288127 kernel: Key type asymmetric registered
Aug 13 00:18:41.288146 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:18:41.288165 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:18:41.288184 kernel: io scheduler mq-deadline registered
Aug 13 00:18:41.288208 kernel: io scheduler kyber registered
Aug 13 00:18:41.288228 kernel: io scheduler bfq registered
Aug 13 00:18:41.288540 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Aug 13 00:18:41.288581 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 13 00:18:41.288602 kernel: ACPI: button: Power Button [PWRB]
Aug 13 00:18:41.288622 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Aug 13 00:18:41.288642 kernel: ACPI: button: Sleep Button [SLPB]
Aug 13 00:18:41.288661 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:18:41.288693 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Aug 13 00:18:41.289025 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Aug 13 00:18:41.289061 kernel: printk: console [ttyS0] disabled
Aug 13 00:18:41.289082 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Aug 13 00:18:41.289101 kernel: printk: console [ttyS0] enabled
Aug 13 00:18:41.289121 kernel: printk: bootconsole [uart0] disabled
Aug 13 00:18:41.289141 kernel: thunder_xcv, ver 1.0
Aug 13 00:18:41.289161 kernel: thunder_bgx, ver 1.0
Aug 13 00:18:41.289181 kernel: nicpf, ver 1.0
Aug 13 00:18:41.289213 kernel: nicvf, ver 1.0
Aug 13 00:18:41.289476 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 13 00:18:41.289714 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:18:40 UTC (1755044320)
Aug 13 00:18:41.289836 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 00:18:41.289861 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Aug 13 00:18:41.289882 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 13 00:18:41.289902 kernel: watchdog: Hard watchdog permanently disabled
Aug 13 00:18:41.289921 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:18:41.289950 kernel: Segment Routing with IPv6
Aug 13 00:18:41.289970 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:18:41.289989 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:18:41.290008 kernel: Key type dns_resolver registered
Aug 13 00:18:41.290027 kernel: registered taskstats version 1
Aug 13 00:18:41.290047 kernel: Loading compiled-in X.509 certificates
Aug 13 00:18:41.290066 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6'
Aug 13 00:18:41.290085 kernel: Key type .fscrypt registered
Aug 13 00:18:41.290104 kernel: Key type fscrypt-provisioning registered
Aug 13 00:18:41.290128 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:18:41.290148 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:18:41.290167 kernel: ima: No architecture policies found
Aug 13 00:18:41.290186 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 13 00:18:41.290804 kernel: clk: Disabling unused clocks
Aug 13 00:18:41.290845 kernel: Freeing unused kernel memory: 39424K
Aug 13 00:18:41.290865 kernel: Run /init as init process
Aug 13 00:18:41.290885 kernel: with arguments:
Aug 13 00:18:41.290904 kernel: /init
Aug 13 00:18:41.290922 kernel: with environment:
Aug 13 00:18:41.290951 kernel: HOME=/
Aug 13 00:18:41.290970 kernel: TERM=linux
Aug 13 00:18:41.290989 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:18:41.291012 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 00:18:41.291037 systemd[1]: Detected virtualization amazon.
Aug 13 00:18:41.291058 systemd[1]: Detected architecture arm64.
Aug 13 00:18:41.291079 systemd[1]: Running in initrd.
Aug 13 00:18:41.291104 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:18:41.291125 systemd[1]: Hostname set to .
Aug 13 00:18:41.291146 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:18:41.291167 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:18:41.291187 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:18:41.291208 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:18:41.291231 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:18:41.291253 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:18:41.291279 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:18:41.291302 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:18:41.291327 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:18:41.291349 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:18:41.291370 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:18:41.291390 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:18:41.291412 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:18:41.291439 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:18:41.291460 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:18:41.291482 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:18:41.291503 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:18:41.291525 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:18:41.291547 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:18:41.291569 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 00:18:41.291591 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:18:41.291619 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:18:41.291640 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:18:41.291663 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:18:41.291684 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:18:41.291705 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:18:41.291726 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:18:41.292837 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:18:41.294845 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:18:41.294873 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:18:41.294906 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:18:41.294977 systemd-journald[251]: Collecting audit messages is disabled.
Aug 13 00:18:41.295025 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:18:41.295047 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:18:41.295074 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:18:41.295097 systemd-journald[251]: Journal started
Aug 13 00:18:41.295142 systemd-journald[251]: Runtime Journal (/run/log/journal/ec22c96b312372502cfc37bde435a33e) is 8.0M, max 75.3M, 67.3M free.
Aug 13 00:18:41.293344 systemd-modules-load[252]: Inserted module 'overlay'
Aug 13 00:18:41.318976 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:18:41.327783 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:18:41.327850 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:18:41.331673 kernel: Bridge firewalling registered
Aug 13 00:18:41.330731 systemd-modules-load[252]: Inserted module 'br_netfilter'
Aug 13 00:18:41.342698 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:18:41.349203 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:18:41.354869 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:18:41.370191 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:18:41.378112 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:18:41.386125 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:18:41.410128 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:18:41.443833 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:18:41.452402 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:18:41.462516 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:18:41.480429 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:18:41.490275 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:18:41.502964 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:18:41.548654 dracut-cmdline[289]: dracut-dracut-053
Aug 13 00:18:41.556165 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:18:41.562196 systemd-resolved[287]: Positive Trust Anchors:
Aug 13 00:18:41.562216 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:18:41.562280 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:18:41.724771 kernel: SCSI subsystem initialized
Aug 13 00:18:41.730779 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:18:41.743785 kernel: iscsi: registered transport (tcp)
Aug 13 00:18:41.765926 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:18:41.765999 kernel: QLogic iSCSI HBA Driver
Aug 13 00:18:41.806812 kernel: random: crng init done
Aug 13 00:18:41.807033 systemd-resolved[287]: Defaulting to hostname 'linux'.
Aug 13 00:18:41.810843 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:18:41.816102 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:18:41.850245 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:18:41.865041 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:18:41.901018 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:18:41.901093 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:18:41.903000 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 00:18:41.983795 kernel: raid6: neonx8 gen() 6719 MB/s
Aug 13 00:18:41.985773 kernel: raid6: neonx4 gen() 6527 MB/s
Aug 13 00:18:42.002774 kernel: raid6: neonx2 gen() 5447 MB/s
Aug 13 00:18:42.019773 kernel: raid6: neonx1 gen() 3946 MB/s
Aug 13 00:18:42.036774 kernel: raid6: int64x8 gen() 3830 MB/s
Aug 13 00:18:42.053775 kernel: raid6: int64x4 gen() 3711 MB/s
Aug 13 00:18:42.070774 kernel: raid6: int64x2 gen() 3616 MB/s
Aug 13 00:18:42.088779 kernel: raid6: int64x1 gen() 2771 MB/s
Aug 13 00:18:42.088817 kernel: raid6: using algorithm neonx8 gen() 6719 MB/s
Aug 13 00:18:42.107772 kernel: raid6: .... xor() 4885 MB/s, rmw enabled
Aug 13 00:18:42.107810 kernel: raid6: using neon recovery algorithm
Aug 13 00:18:42.116989 kernel: xor: measuring software checksum speed
Aug 13 00:18:42.117061 kernel: 8regs : 10972 MB/sec
Aug 13 00:18:42.118189 kernel: 32regs : 11948 MB/sec
Aug 13 00:18:42.119525 kernel: arm64_neon : 9064 MB/sec
Aug 13 00:18:42.119570 kernel: xor: using function: 32regs (11948 MB/sec)
Aug 13 00:18:42.205788 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:18:42.228308 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:18:42.241076 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:18:42.287588 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Aug 13 00:18:42.296339 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:18:42.316103 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:18:42.356351 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
Aug 13 00:18:42.419598 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:18:42.441117 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:18:42.561808 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:18:42.576110 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:18:42.638507 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:18:42.643545 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:18:42.646867 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:18:42.650910 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:18:42.677135 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:18:42.707136 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:18:42.793829 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 13 00:18:42.793897 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Aug 13 00:18:42.804976 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:18:42.808176 kernel: ena 0000:00:05.0: ENA device version: 0.10
Aug 13 00:18:42.808543 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Aug 13 00:18:42.805535 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:18:42.813332 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:18:42.819244 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:18:42.819915 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:18:42.827840 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:18:42.837793 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:4c:7d:3a:c8:97
Aug 13 00:18:42.841225 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:18:42.848845 (udev-worker)[519]: Network interface NamePolicy= disabled on kernel command line.
Aug 13 00:18:42.875182 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Aug 13 00:18:42.875268 kernel: nvme nvme0: pci function 0000:00:04.0
Aug 13 00:18:42.884774 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Aug 13 00:18:42.892604 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:18:42.892708 kernel: GPT:9289727 != 16777215
Aug 13 00:18:42.892756 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:18:42.892789 kernel: GPT:9289727 != 16777215
Aug 13 00:18:42.892815 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:18:42.892842 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:18:42.900904 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:18:42.915205 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:18:42.948146 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:18:42.997636 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (529)
Aug 13 00:18:43.032878 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (518)
Aug 13 00:18:43.079186 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Aug 13 00:18:43.127136 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Aug 13 00:18:43.150011 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 13 00:18:43.189678 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Aug 13 00:18:43.192574 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Aug 13 00:18:43.212073 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:18:43.228830 disk-uuid[660]: Primary Header is updated.
Aug 13 00:18:43.228830 disk-uuid[660]: Secondary Entries is updated.
Aug 13 00:18:43.228830 disk-uuid[660]: Secondary Header is updated.
Aug 13 00:18:43.246518 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:18:43.265844 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:18:44.280801 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:18:44.284250 disk-uuid[662]: The operation has completed successfully.
Aug 13 00:18:44.484867 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:18:44.488133 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:18:44.561082 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:18:44.574959 sh[921]: Success
Aug 13 00:18:44.602814 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 13 00:18:44.740159 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:18:44.751083 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:18:44.769979 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:18:44.808429 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982
Aug 13 00:18:44.808513 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:18:44.808557 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 00:18:44.811927 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 00:18:44.812010 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 00:18:44.945797 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 00:18:44.987303 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:18:44.992439 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:18:45.003059 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:18:45.016063 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:18:45.040282 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:18:45.040364 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:18:45.041803 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 00:18:45.048772 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 00:18:45.069669 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 00:18:45.075398 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:18:45.085491 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:18:45.098137 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:18:45.233804 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:18:45.254585 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:18:45.323564 systemd-networkd[1113]: lo: Link UP
Aug 13 00:18:45.323589 systemd-networkd[1113]: lo: Gained carrier
Aug 13 00:18:45.328695 systemd-networkd[1113]: Enumeration completed
Aug 13 00:18:45.329951 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:18:45.329958 systemd-networkd[1113]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:18:45.339579 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:18:45.345189 systemd-networkd[1113]: eth0: Link UP
Aug 13 00:18:45.345210 systemd-networkd[1113]: eth0: Gained carrier
Aug 13 00:18:45.345229 systemd-networkd[1113]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:18:45.358709 systemd[1]: Reached target network.target - Network.
Aug 13 00:18:45.371891 systemd-networkd[1113]: eth0: DHCPv4 address 172.31.18.251/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 13 00:18:45.644373 ignition[1020]: Ignition 2.19.0
Aug 13 00:18:45.646347 ignition[1020]: Stage: fetch-offline
Aug 13 00:18:45.649669 ignition[1020]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:45.651869 ignition[1020]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:45.654966 ignition[1020]: Ignition finished successfully
Aug 13 00:18:45.658525 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:18:45.670251 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 00:18:45.706235 ignition[1122]: Ignition 2.19.0
Aug 13 00:18:45.706264 ignition[1122]: Stage: fetch
Aug 13 00:18:45.707030 ignition[1122]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:45.707160 ignition[1122]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:45.708442 ignition[1122]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:45.734420 ignition[1122]: PUT result: OK
Aug 13 00:18:45.740903 ignition[1122]: parsed url from cmdline: ""
Aug 13 00:18:45.740929 ignition[1122]: no config URL provided
Aug 13 00:18:45.740945 ignition[1122]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:18:45.740978 ignition[1122]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:18:45.741027 ignition[1122]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:45.747891 ignition[1122]: PUT result: OK
Aug 13 00:18:45.748561 ignition[1122]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Aug 13 00:18:45.754946 ignition[1122]: GET result: OK
Aug 13 00:18:45.755166 ignition[1122]: parsing config with SHA512: a95103a0159919cfe97f31b529bc614beb56d00c19f42ae33083435eb6aff044f3c1f800b879e5847a949d55f84b996b2073c23f7117a3558bd09b8b9ceab838
Aug 13 00:18:45.766032 unknown[1122]: fetched base config from "system"
Aug 13 00:18:45.766064 unknown[1122]: fetched base config from "system"
Aug 13 00:18:45.767045 ignition[1122]: fetch: fetch complete
Aug 13 00:18:45.766080 unknown[1122]: fetched user config from "aws"
Aug 13 00:18:45.767057 ignition[1122]: fetch: fetch passed
Aug 13 00:18:45.776791 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 00:18:45.767171 ignition[1122]: Ignition finished successfully
Aug 13 00:18:45.792108 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:18:45.835979 ignition[1129]: Ignition 2.19.0
Aug 13 00:18:45.836010 ignition[1129]: Stage: kargs
Aug 13 00:18:45.836784 ignition[1129]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:45.836815 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:45.836988 ignition[1129]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:45.840433 ignition[1129]: PUT result: OK
Aug 13 00:18:45.852162 ignition[1129]: kargs: kargs passed
Aug 13 00:18:45.852638 ignition[1129]: Ignition finished successfully
Aug 13 00:18:45.858427 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:18:45.869073 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:18:45.903474 ignition[1136]: Ignition 2.19.0
Aug 13 00:18:45.903497 ignition[1136]: Stage: disks
Aug 13 00:18:45.904202 ignition[1136]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:45.904229 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:45.904403 ignition[1136]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:45.918639 ignition[1136]: PUT result: OK
Aug 13 00:18:45.925365 ignition[1136]: disks: disks passed
Aug 13 00:18:45.925507 ignition[1136]: Ignition finished successfully
Aug 13 00:18:45.934070 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:18:45.941522 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:18:45.947200 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:18:45.956046 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:18:45.960537 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:18:45.963558 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:18:45.976140 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:18:46.031163 systemd-fsck[1144]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 00:18:46.038304 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:18:46.051066 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:18:46.161807 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none.
Aug 13 00:18:46.163043 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:18:46.167688 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:18:46.184091 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:18:46.191989 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:18:46.200355 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 00:18:46.208959 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:18:46.224585 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1163)
Aug 13 00:18:46.209049 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:18:46.234819 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:18:46.234869 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:18:46.234897 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 00:18:46.246930 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 00:18:46.252390 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:18:46.256847 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:18:46.269203 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:18:46.719890 initrd-setup-root[1187]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:18:46.731189 initrd-setup-root[1194]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:18:46.742621 initrd-setup-root[1201]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:18:46.753895 initrd-setup-root[1208]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:18:46.756644 systemd-networkd[1113]: eth0: Gained IPv6LL
Aug 13 00:18:47.122128 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:18:47.137002 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:18:47.142262 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:18:47.172333 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:18:47.175327 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:18:47.211935 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:18:47.225345 ignition[1276]: INFO : Ignition 2.19.0
Aug 13 00:18:47.227660 ignition[1276]: INFO : Stage: mount
Aug 13 00:18:47.227660 ignition[1276]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:47.227660 ignition[1276]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:47.238886 ignition[1276]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:47.242216 ignition[1276]: INFO : PUT result: OK
Aug 13 00:18:47.247473 ignition[1276]: INFO : mount: mount passed
Aug 13 00:18:47.249622 ignition[1276]: INFO : Ignition finished successfully
Aug 13 00:18:47.255875 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:18:47.270104 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:18:47.289676 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:18:47.326779 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1287)
Aug 13 00:18:47.331515 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:18:47.331590 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:18:47.332991 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 00:18:47.339796 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 00:18:47.342526 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:18:47.387855 ignition[1304]: INFO : Ignition 2.19.0
Aug 13 00:18:47.387855 ignition[1304]: INFO : Stage: files
Aug 13 00:18:47.391690 ignition[1304]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:47.391690 ignition[1304]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:47.391690 ignition[1304]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:47.400171 ignition[1304]: INFO : PUT result: OK
Aug 13 00:18:47.410619 ignition[1304]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:18:47.414484 ignition[1304]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:18:47.414484 ignition[1304]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:18:47.453501 ignition[1304]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:18:47.456975 ignition[1304]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:18:47.462990 unknown[1304]: wrote ssh authorized keys file for user: core
Aug 13 00:18:47.468172 ignition[1304]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:18:47.468172 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 13 00:18:47.468172 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Aug 13 00:18:47.546290 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:18:47.771274 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 13 00:18:47.775795 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:18:47.775795 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:18:47.775795 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:18:47.775795 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:18:47.775795 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:18:47.775795 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:18:47.775795 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:18:47.775795 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:18:47.807717 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:18:47.807717 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:18:47.807717 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 13 00:18:47.807717 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 13 00:18:47.807717 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 13 00:18:47.807717 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Aug 13 00:18:48.128716 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 00:18:48.554670 ignition[1304]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 13 00:18:48.560934 ignition[1304]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 00:18:48.560934 ignition[1304]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:18:48.560934 ignition[1304]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:18:48.560934 ignition[1304]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 00:18:48.560934 ignition[1304]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:18:48.560934 ignition[1304]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:18:48.560934 ignition[1304]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:18:48.560934 ignition[1304]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:18:48.560934 ignition[1304]: INFO : files: files passed
Aug 13 00:18:48.591864 ignition[1304]: INFO : Ignition finished successfully
Aug 13 00:18:48.594864 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:18:48.605268 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:18:48.614141 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:18:48.631290 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:18:48.636931 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:18:48.657934 initrd-setup-root-after-ignition[1332]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:18:48.666031 initrd-setup-root-after-ignition[1336]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:18:48.673566 initrd-setup-root-after-ignition[1332]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:18:48.677446 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:18:48.682169 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:18:48.704237 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:18:48.766190 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:18:48.766898 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:18:48.774859 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:18:48.775594 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:18:48.776332 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:18:48.786061 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:18:48.836850 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:18:48.851186 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:18:48.879830 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:18:48.885327 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:18:48.888727 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 00:18:48.895208 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:18:48.895488 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:18:48.899110 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:18:48.904033 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:18:48.912480 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 00:18:48.915274 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:18:48.918476 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:18:48.925843 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:18:48.928837 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:18:48.934861 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:18:48.940074 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:18:48.949252 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:18:48.951443 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:18:48.951778 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:18:48.961371 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:18:48.964832 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:18:48.972277 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 00:18:48.976260 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:18:48.982606 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:18:48.982926 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:18:48.990266 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:18:48.990728 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:18:48.993931 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:18:48.994470 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:18:49.010185 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:18:49.018222 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
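The Ignition files stage above is driven by a user-provided config, fetched from the EC2 metadata service after the IMDSv2 token PUT logged at the top of the stage. A minimal sketch of a config that would produce ops like those, written as a Python dict in Ignition's JSON shape; the spec version, key material, and unit body are assumptions, since the log only shows the Ignition program version (2.19.0):

    import json

    # Hypothetical Ignition config sketch (spec 3.4 assumed). Field names
    # follow the Ignition config spec; the key material and unit body are
    # placeholders, not values recovered from this log.
    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {"users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]},
        ]},
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw",
            }],
        },
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n..."},
        ]},
    }
    print(json.dumps(config, indent=2))

Each storage.files and storage.links entry maps one-for-one onto a createFiles op logged above, and the systemd.units entry onto the "setting preset to enabled" op for prepare-helm.service.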
Aug 13 00:18:49.024717 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:18:49.026406 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:18:49.027353 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:18:49.027625 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:18:49.054695 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:18:49.058321 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:18:49.093852 ignition[1356]: INFO : Ignition 2.19.0 Aug 13 00:18:49.093852 ignition[1356]: INFO : Stage: umount Aug 13 00:18:49.093620 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:18:49.100716 ignition[1356]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:18:49.100716 ignition[1356]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:18:49.100716 ignition[1356]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:18:49.108858 ignition[1356]: INFO : PUT result: OK Aug 13 00:18:49.115715 ignition[1356]: INFO : umount: umount passed Aug 13 00:18:49.118934 ignition[1356]: INFO : Ignition finished successfully Aug 13 00:18:49.123103 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:18:49.125000 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:18:49.132341 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:18:49.132488 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:18:49.134809 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:18:49.134939 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:18:49.135862 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:18:49.135969 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 00:18:49.136157 systemd[1]: Stopped target network.target - Network. Aug 13 00:18:49.136453 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:18:49.136534 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:18:49.137271 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:18:49.140198 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:18:49.146857 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:18:49.149826 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:18:49.154051 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:18:49.158381 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:18:49.158735 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:18:49.171404 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:18:49.171500 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:18:49.173196 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:18:49.173310 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:18:49.186356 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:18:49.186479 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:18:49.190285 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Aug 13 00:18:49.196162 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:18:49.212835 systemd-networkd[1113]: eth0: DHCPv6 lease lost Aug 13 00:18:49.215674 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:18:49.218503 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:18:49.238706 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:18:49.239088 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:18:49.249111 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:18:49.249252 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:18:49.306617 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:18:49.311191 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:18:49.313016 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:18:49.316374 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:18:49.316495 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:18:49.325381 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:18:49.325508 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:18:49.328111 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:18:49.328221 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:18:49.335025 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:18:49.355477 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:18:49.360232 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:18:49.367086 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:18:49.367300 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:18:49.386433 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:18:49.387068 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:18:49.401141 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:18:49.401484 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:18:49.411235 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:18:49.411351 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:18:49.416614 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:18:49.416777 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:18:49.429032 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:18:49.429149 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:18:49.439957 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:18:49.440083 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:18:49.477049 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:18:49.479993 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:18:49.480140 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Aug 13 00:18:49.492725 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 00:18:49.492895 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:18:49.496123 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:18:49.496244 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:18:49.499541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:18:49.499651 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:18:49.503727 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:18:49.504501 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:18:49.550356 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:18:49.551644 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:18:49.559484 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:18:49.577711 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:18:49.595164 systemd[1]: Switching root. Aug 13 00:18:49.648904 systemd-journald[251]: Journal stopped Aug 13 00:18:52.108512 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Aug 13 00:18:52.108655 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:18:52.108707 kernel: SELinux: policy capability open_perms=1 Aug 13 00:18:52.108760 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:18:52.110855 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:18:52.110896 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:18:52.110927 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:18:52.110961 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:18:52.110999 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:18:52.111032 kernel: audit: type=1403 audit(1755044330.101:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:18:52.111078 systemd[1]: Successfully loaded SELinux policy in 94.613ms. Aug 13 00:18:52.111125 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.115ms. Aug 13 00:18:52.111162 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:18:52.111194 systemd[1]: Detected virtualization amazon. Aug 13 00:18:52.111226 systemd[1]: Detected architecture arm64. Aug 13 00:18:52.111256 systemd[1]: Detected first boot. Aug 13 00:18:52.111288 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:18:52.111339 zram_generator::config[1399]: No configuration found. Aug 13 00:18:52.111376 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:18:52.111406 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:18:52.111437 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 00:18:52.111471 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:18:52.111504 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
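"Detected first boot" followed by "Initializing machine ID from VM UUID" means systemd seeded /etc/machine-id from the hypervisor-provided UUID rather than generating a random one, keeping the ID stable if the instance is re-imaged. A sketch of where that UUID usually surfaces on EC2; the DMI path is an assumption, the log does not name its source:

    # Read the hypervisor-provided VM UUID that systemd can derive the
    # machine ID from on first boot. Needs root; the path is the usual
    # DMI/SMBIOS location on EC2 (an assumption here).
    with open("/sys/class/dmi/id/product_uuid") as f:
        print(f.read().strip())  # EC2 UUIDs conventionally begin with "ec2"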
Aug 13 00:18:52.111536 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:18:52.111568 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:18:52.111603 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:18:52.111637 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:18:52.111670 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:18:52.111700 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:18:52.111731 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:18:52.111859 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:18:52.111894 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:18:52.111926 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:18:52.111960 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:18:52.112000 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:18:52.112033 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:18:52.112064 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 00:18:52.112097 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:18:52.112131 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 00:18:52.112161 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 00:18:52.112194 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 00:18:52.112229 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:18:52.112260 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:18:52.112292 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:18:52.112322 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:18:52.112356 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:18:52.112385 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:18:52.112415 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:18:52.112449 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:18:52.112478 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:18:52.112510 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:18:52.112544 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:18:52.112576 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:18:52.112606 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:18:52.112637 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:18:52.112668 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:18:52.112701 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 00:18:52.112732 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Aug 13 00:18:52.114884 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:18:52.114934 systemd[1]: Reached target machines.target - Containers. Aug 13 00:18:52.114967 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:18:52.115002 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:18:52.115034 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:18:52.115068 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:18:52.115100 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:18:52.115134 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:18:52.115164 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:18:52.115194 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:18:52.115230 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:18:52.115262 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:18:52.115306 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:18:52.115338 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 00:18:52.115368 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:18:52.115398 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:18:52.115431 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:18:52.115463 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:18:52.115494 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:18:52.115533 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:18:52.115563 kernel: ACPI: bus type drm_connector registered Aug 13 00:18:52.115595 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:18:52.115628 kernel: loop: module loaded Aug 13 00:18:52.115660 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:18:52.115690 systemd[1]: Stopped verity-setup.service. Aug 13 00:18:52.115723 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:18:52.115824 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:18:52.115858 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:18:52.115894 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:18:52.115925 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:18:52.115958 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:18:52.115988 kernel: fuse: init (API version 7.39) Aug 13 00:18:52.116022 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:18:52.116055 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:18:52.116084 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Aug 13 00:18:52.116114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:18:52.116144 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:18:52.116174 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:18:52.116204 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:18:52.116238 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:18:52.116268 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:18:52.116303 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:18:52.116336 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:18:52.116372 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:18:52.116402 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:18:52.116435 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:18:52.116469 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:18:52.116550 systemd-journald[1481]: Collecting audit messages is disabled. Aug 13 00:18:52.116618 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:18:52.116650 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:18:52.116680 systemd-journald[1481]: Journal started Aug 13 00:18:52.116728 systemd-journald[1481]: Runtime Journal (/run/log/journal/ec22c96b312372502cfc37bde435a33e) is 8.0M, max 75.3M, 67.3M free. Aug 13 00:18:52.122834 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:18:51.421824 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:18:51.477850 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Aug 13 00:18:51.478650 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:18:52.149790 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:18:52.149893 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:18:52.152302 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:18:52.167241 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 00:18:52.178613 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:18:52.197323 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:18:52.200800 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:18:52.214296 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:18:52.220607 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:18:52.225830 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:18:52.230651 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:18:52.250783 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Aug 13 00:18:52.266297 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:18:52.275690 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:18:52.283794 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:18:52.286860 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:18:52.290370 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:18:52.294336 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:18:52.299571 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:18:52.347152 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:18:52.357387 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:18:52.367272 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:18:52.372114 kernel: loop0: detected capacity change from 0 to 52536 Aug 13 00:18:52.374067 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 00:18:52.403808 systemd-journald[1481]: Time spent on flushing to /var/log/journal/ec22c96b312372502cfc37bde435a33e is 67.423ms for 911 entries. Aug 13 00:18:52.403808 systemd-journald[1481]: System Journal (/var/log/journal/ec22c96b312372502cfc37bde435a33e) is 8.0M, max 195.6M, 187.6M free. Aug 13 00:18:52.504635 systemd-journald[1481]: Received client request to flush runtime journal. Aug 13 00:18:52.504733 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:18:52.505190 kernel: loop1: detected capacity change from 0 to 203944 Aug 13 00:18:52.440878 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:18:52.451006 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:18:52.460620 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 00:18:52.510588 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:18:52.525792 systemd-tmpfiles[1511]: ACLs are not supported, ignoring. Aug 13 00:18:52.525834 systemd-tmpfiles[1511]: ACLs are not supported, ignoring. Aug 13 00:18:52.546586 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:18:52.562074 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:18:52.582831 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:18:52.597237 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 00:18:52.658150 udevadm[1548]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 00:18:52.660793 kernel: loop2: detected capacity change from 0 to 114432 Aug 13 00:18:52.685211 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:18:52.696167 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:18:52.737992 systemd-tmpfiles[1553]: ACLs are not supported, ignoring. Aug 13 00:18:52.738032 systemd-tmpfiles[1553]: ACLs are not supported, ignoring. 
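The journal flush above moves the runtime journal out of /run into /var/log/journal once the root filesystem is writable, and the timing line allows a quick back-of-envelope per-entry cost, using only the figures logged:

    # Figures taken from the journald flush line above.
    ms, entries = 67.423, 911
    print(f"{ms / entries * 1000:.0f} us per entry")  # ~74 microseconds each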
Aug 13 00:18:52.750332 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:18:52.789794 kernel: loop3: detected capacity change from 0 to 114328 Aug 13 00:18:52.905826 kernel: loop4: detected capacity change from 0 to 52536 Aug 13 00:18:52.929820 kernel: loop5: detected capacity change from 0 to 203944 Aug 13 00:18:52.963807 kernel: loop6: detected capacity change from 0 to 114432 Aug 13 00:18:52.978814 kernel: loop7: detected capacity change from 0 to 114328 Aug 13 00:18:52.989004 (sd-merge)[1558]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Aug 13 00:18:52.990053 (sd-merge)[1558]: Merged extensions into '/usr'. Aug 13 00:18:52.998607 systemd[1]: Reloading requested from client PID 1510 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:18:52.998938 systemd[1]: Reloading... Aug 13 00:18:53.183848 zram_generator::config[1581]: No configuration found. Aug 13 00:18:53.530395 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:18:53.655938 systemd[1]: Reloading finished in 651 ms. Aug 13 00:18:53.696961 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:18:53.700733 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:18:53.713183 systemd[1]: Starting ensure-sysext.service... Aug 13 00:18:53.722082 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:18:53.727322 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:18:53.750886 systemd[1]: Reloading requested from client PID 1636 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:18:53.750912 systemd[1]: Reloading... Aug 13 00:18:53.816354 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:18:53.819643 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:18:53.826130 systemd-tmpfiles[1637]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:18:53.826900 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Aug 13 00:18:53.827043 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Aug 13 00:18:53.848839 systemd-tmpfiles[1637]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:18:53.848873 systemd-tmpfiles[1637]: Skipping /boot Aug 13 00:18:53.872498 systemd-udevd[1638]: Using default interface naming scheme 'v255'. Aug 13 00:18:53.910180 systemd-tmpfiles[1637]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:18:53.910410 systemd-tmpfiles[1637]: Skipping /boot Aug 13 00:18:53.983999 ldconfig[1506]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:18:53.987846 zram_generator::config[1668]: No configuration found. Aug 13 00:18:54.168661 (udev-worker)[1718]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:18:54.347694 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
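The loop4-loop7 capacity changes and the (sd-merge) lines above are systemd-sysext at work: each extension image (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) is attached read-only via a loop device and overlaid onto /usr, which is why a full daemon reload follows. The same mechanism can be driven by hand; a sketch using the real systemd-sysext verbs, with ordering shown only for illustration:

    import subprocess

    # Extension images placed under /etc/extensions or /var/lib/extensions
    # (the kubernetes.raw symlink written by Ignition earlier is one) are
    # merged into /usr as overlay layers.
    subprocess.run(["systemd-sysext", "list"], check=True)     # show known images
    subprocess.run(["systemd-sysext", "merge"], check=True)    # overlay onto /usr
    subprocess.run(["systemd-sysext", "unmerge"], check=True)  # detach again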
Aug 13 00:18:54.368783 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1700) Aug 13 00:18:54.529546 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 00:18:54.530360 systemd[1]: Reloading finished in 778 ms. Aug 13 00:18:54.572830 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:18:54.576522 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:18:54.599342 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:18:54.682553 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:18:54.706356 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:18:54.710832 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:18:54.719265 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:18:54.739299 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:18:54.744864 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:18:54.750617 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:18:54.760255 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:18:54.770302 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:18:54.781228 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:18:54.791274 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:18:54.799328 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:18:54.802869 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:18:54.845055 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:18:54.847191 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:18:54.910218 systemd[1]: Finished ensure-sysext.service. Aug 13 00:18:54.916773 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:18:54.917639 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:18:54.946538 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 00:18:54.966007 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 13 00:18:54.971086 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:18:54.993068 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:18:55.000429 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:18:55.013078 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:18:55.013796 augenrules[1862]: No rules Aug 13 00:18:55.019190 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 00:18:55.027719 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Aug 13 00:18:55.035085 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:18:55.044885 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:18:55.047958 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:18:55.057253 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:18:55.060165 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:18:55.075302 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:18:55.084189 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:18:55.101780 lvm[1869]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:18:55.104170 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:18:55.106913 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:18:55.108442 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:18:55.111556 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:18:55.113874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:18:55.117250 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:18:55.118837 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:18:55.123483 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:18:55.123984 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:18:55.135421 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:18:55.135595 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:18:55.170084 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:18:55.179917 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:18:55.189348 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 00:18:55.193860 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:18:55.204053 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 00:18:55.224089 lvm[1887]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:18:55.238899 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:18:55.283773 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 00:18:55.354667 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:18:55.378159 systemd-networkd[1845]: lo: Link UP Aug 13 00:18:55.378194 systemd-networkd[1845]: lo: Gained carrier Aug 13 00:18:55.381495 systemd-networkd[1845]: Enumeration completed Aug 13 00:18:55.381883 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Aug 13 00:18:55.383507 systemd-networkd[1845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:18:55.383636 systemd-networkd[1845]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:18:55.393613 systemd-networkd[1845]: eth0: Link UP Aug 13 00:18:55.394267 systemd-networkd[1845]: eth0: Gained carrier Aug 13 00:18:55.394424 systemd-networkd[1845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:18:55.394947 systemd-resolved[1846]: Positive Trust Anchors: Aug 13 00:18:55.394983 systemd-resolved[1846]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:18:55.395047 systemd-resolved[1846]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:18:55.399019 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:18:55.409862 systemd-networkd[1845]: eth0: DHCPv4 address 172.31.18.251/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 13 00:18:55.416330 systemd-resolved[1846]: Defaulting to hostname 'linux'. Aug 13 00:18:55.419716 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:18:55.422671 systemd[1]: Reached target network.target - Network. Aug 13 00:18:55.426433 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:18:55.429234 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:18:55.431731 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:18:55.434792 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:18:55.437885 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:18:55.440437 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:18:55.443284 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:18:55.445972 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:18:55.446025 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:18:55.447942 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:18:55.451159 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:18:55.456317 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:18:55.469054 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:18:55.472381 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:18:55.474941 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:18:55.477006 systemd[1]: Reached target basic.target - Basic System. 
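The DHCPv4 lease above (172.31.18.251/20 from 172.31.16.1) pins down the surrounding VPC subnet; checking what that /20 spans with nothing but the standard library:

    import ipaddress

    # Address and prefix come straight from the DHCPv4 line above.
    iface = ipaddress.ip_interface("172.31.18.251/20")
    print(iface.network)                 # 172.31.16.0/20
    print(iface.network.num_addresses)   # 4096 (172.31.16.0 - 172.31.31.255)
    print(ipaddress.ip_address("172.31.16.1") in iface.network)  # True: the gateway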
Aug 13 00:18:55.479348 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:18:55.479398 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:18:55.481713 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:18:55.489112 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:18:55.506063 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:18:55.519934 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:18:55.528101 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:18:55.531057 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:18:55.537090 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:18:55.544198 systemd[1]: Started ntpd.service - Network Time Service. Aug 13 00:18:55.549981 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:18:55.559979 systemd[1]: Starting setup-oem.service - Setup OEM... Aug 13 00:18:55.567167 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:18:55.575125 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:18:55.579907 jq[1905]: false Aug 13 00:18:55.595153 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:18:55.599457 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:18:55.602249 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:18:55.607080 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:18:55.616797 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:18:55.627377 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:18:55.627783 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:18:55.648485 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:18:55.648991 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:18:55.677984 jq[1916]: true Aug 13 00:18:55.710101 jq[1927]: true Aug 13 00:18:55.749328 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:18:55.760948 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 00:18:55.760574 dbus-daemon[1904]: [system] SELinux support is enabled Aug 13 00:18:55.768538 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:18:55.768606 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:18:55.771731 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Aug 13 00:18:55.771787 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:18:55.794545 dbus-daemon[1904]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1845 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 00:18:55.803928 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 00:18:55.806344 (ntainerd)[1941]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:18:55.813833 extend-filesystems[1906]: Found loop4 Aug 13 00:18:55.813833 extend-filesystems[1906]: Found loop5 Aug 13 00:18:55.813833 extend-filesystems[1906]: Found loop6 Aug 13 00:18:55.825503 extend-filesystems[1906]: Found loop7 Aug 13 00:18:55.825503 extend-filesystems[1906]: Found nvme0n1 Aug 13 00:18:55.825503 extend-filesystems[1906]: Found nvme0n1p1 Aug 13 00:18:55.825503 extend-filesystems[1906]: Found nvme0n1p2 Aug 13 00:18:55.825503 extend-filesystems[1906]: Found nvme0n1p3 Aug 13 00:18:55.825503 extend-filesystems[1906]: Found usr Aug 13 00:18:55.825503 extend-filesystems[1906]: Found nvme0n1p4 Aug 13 00:18:55.825503 extend-filesystems[1906]: Found nvme0n1p6 Aug 13 00:18:55.825503 extend-filesystems[1906]: Found nvme0n1p7 Aug 13 00:18:55.825503 extend-filesystems[1906]: Found nvme0n1p9 Aug 13 00:18:55.825503 extend-filesystems[1906]: Checking size of /dev/nvme0n1p9 Aug 13 00:18:55.826952 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 00:18:55.884719 ntpd[1908]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:33 UTC 2025 (1): Starting Aug 13 00:18:55.884806 ntpd[1908]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 13 00:18:55.885277 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:33 UTC 2025 (1): Starting Aug 13 00:18:55.885277 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 13 00:18:55.885277 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: ---------------------------------------------------- Aug 13 00:18:55.885277 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: ntp-4 is maintained by Network Time Foundation, Aug 13 00:18:55.885277 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 13 00:18:55.885277 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: corporation. Support and training for ntp-4 are Aug 13 00:18:55.885277 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: available at https://www.nwtime.org/support Aug 13 00:18:55.885277 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: ---------------------------------------------------- Aug 13 00:18:55.884827 ntpd[1908]: ---------------------------------------------------- Aug 13 00:18:55.884846 ntpd[1908]: ntp-4 is maintained by Network Time Foundation, Aug 13 00:18:55.884864 ntpd[1908]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 13 00:18:55.884883 ntpd[1908]: corporation. 
Support and training for ntp-4 are Aug 13 00:18:55.884901 ntpd[1908]: available at https://www.nwtime.org/support Aug 13 00:18:55.884919 ntpd[1908]: ---------------------------------------------------- Aug 13 00:18:55.896721 ntpd[1908]: proto: precision = 0.108 usec (-23) Aug 13 00:18:55.902936 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: proto: precision = 0.108 usec (-23) Aug 13 00:18:55.902936 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: basedate set to 2025-07-31 Aug 13 00:18:55.902936 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: gps base set to 2025-08-03 (week 2378) Aug 13 00:18:55.902097 ntpd[1908]: basedate set to 2025-07-31 Aug 13 00:18:55.902131 ntpd[1908]: gps base set to 2025-08-03 (week 2378) Aug 13 00:18:55.913029 ntpd[1908]: Listen and drop on 0 v6wildcard [::]:123 Aug 13 00:18:55.917948 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: Listen and drop on 0 v6wildcard [::]:123 Aug 13 00:18:55.917948 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 13 00:18:55.917948 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: Listen normally on 2 lo 127.0.0.1:123 Aug 13 00:18:55.913119 ntpd[1908]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 13 00:18:55.916331 ntpd[1908]: Listen normally on 2 lo 127.0.0.1:123 Aug 13 00:18:55.921609 ntpd[1908]: Listen normally on 3 eth0 172.31.18.251:123 Aug 13 00:18:55.922429 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: Listen normally on 3 eth0 172.31.18.251:123 Aug 13 00:18:55.922429 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: Listen normally on 4 lo [::1]:123 Aug 13 00:18:55.922429 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: bind(21) AF_INET6 fe80::44c:7dff:fe3a:c897%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:18:55.922429 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: unable to create socket on eth0 (5) for fe80::44c:7dff:fe3a:c897%2#123 Aug 13 00:18:55.922429 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: failed to init interface for address fe80::44c:7dff:fe3a:c897%2 Aug 13 00:18:55.922429 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: Listening on routing socket on fd #21 for interface updates Aug 13 00:18:55.921960 ntpd[1908]: Listen normally on 4 lo [::1]:123 Aug 13 00:18:55.922073 ntpd[1908]: bind(21) AF_INET6 fe80::44c:7dff:fe3a:c897%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:18:55.922113 ntpd[1908]: unable to create socket on eth0 (5) for fe80::44c:7dff:fe3a:c897%2#123 Aug 13 00:18:55.922146 ntpd[1908]: failed to init interface for address fe80::44c:7dff:fe3a:c897%2 Aug 13 00:18:55.922244 ntpd[1908]: Listening on routing socket on fd #21 for interface updates Aug 13 00:18:55.924514 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:18:55.924886 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
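The bind(21) failure above is an ordering artifact, not a fault: ntpd asks for eth0's link-local address fe80::44c:7dff:fe3a:c897 before IPv6 has finished configuring (eth0 only gains IPv6LL further down this log), so the kernel answers EADDRNOTAVAIL; ntpd keeps the routing socket open and retries when the interface updates. A sketch reproducing the same failure mode; port 123 needs root, and the scope suffix is illustrative:

    import socket

    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        # Link-local addresses need an interface scope ("%eth0"). Until the
        # kernel has actually assigned the address, bind() fails with
        # EADDRNOTAVAIL, ntpd's "Cannot assign requested address" above.
        s.bind(("fe80::44c:7dff:fe3a:c897%eth0", 123))
    except OSError as e:
        print(e)  # [Errno 99] Cannot assign requested address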
Aug 13 00:18:55.945771 tar[1921]: linux-arm64/helm Aug 13 00:18:55.951103 extend-filesystems[1906]: Resized partition /dev/nvme0n1p9 Aug 13 00:18:55.966782 extend-filesystems[1968]: resize2fs 1.47.1 (20-May-2024) Aug 13 00:18:55.975771 update_engine[1915]: I20250813 00:18:55.971643 1915 main.cc:92] Flatcar Update Engine starting Aug 13 00:18:55.978768 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Aug 13 00:18:55.984664 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:18:55.984854 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:18:55.988188 coreos-metadata[1903]: Aug 13 00:18:55.988 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 13 00:18:55.990994 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:18:55.994416 coreos-metadata[1903]: Aug 13 00:18:55.993 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Aug 13 00:18:55.999246 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:18:55.999621 ntpd[1908]: 13 Aug 00:18:55 ntpd[1908]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 13 00:18:56.007579 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:18:56.014330 coreos-metadata[1903]: Aug 13 00:18:56.013 INFO Fetch successful Aug 13 00:18:56.014330 coreos-metadata[1903]: Aug 13 00:18:56.014 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Aug 13 00:18:56.016997 systemd[1]: Finished setup-oem.service - Setup OEM. Aug 13 00:18:56.023379 update_engine[1915]: I20250813 00:18:56.019505 1915 update_check_scheduler.cc:74] Next update check in 8m13s Aug 13 00:18:56.024356 coreos-metadata[1903]: Aug 13 00:18:56.022 INFO Fetch successful Aug 13 00:18:56.024356 coreos-metadata[1903]: Aug 13 00:18:56.022 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Aug 13 00:18:56.024356 coreos-metadata[1903]: Aug 13 00:18:56.024 INFO Fetch successful Aug 13 00:18:56.024356 coreos-metadata[1903]: Aug 13 00:18:56.024 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Aug 13 00:18:56.027203 coreos-metadata[1903]: Aug 13 00:18:56.026 INFO Fetch successful Aug 13 00:18:56.027203 coreos-metadata[1903]: Aug 13 00:18:56.026 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Aug 13 00:18:56.030003 coreos-metadata[1903]: Aug 13 00:18:56.029 INFO Fetch failed with 404: resource not found Aug 13 00:18:56.030003 coreos-metadata[1903]: Aug 13 00:18:56.029 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Aug 13 00:18:56.032384 coreos-metadata[1903]: Aug 13 00:18:56.031 INFO Fetch successful Aug 13 00:18:56.032384 coreos-metadata[1903]: Aug 13 00:18:56.031 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Aug 13 00:18:56.033656 coreos-metadata[1903]: Aug 13 00:18:56.033 INFO Fetch successful Aug 13 00:18:56.033656 coreos-metadata[1903]: Aug 13 00:18:56.033 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Aug 13 00:18:56.041368 coreos-metadata[1903]: Aug 13 00:18:56.040 INFO Fetch successful Aug 13 00:18:56.044093 coreos-metadata[1903]: Aug 13 00:18:56.042 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Aug 13 00:18:56.044886 coreos-metadata[1903]: Aug 13 00:18:56.044 INFO Fetch successful Aug 13 00:18:56.044886 
coreos-metadata[1903]: Aug 13 00:18:56.044 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Aug 13 00:18:56.045803 coreos-metadata[1903]: Aug 13 00:18:56.045 INFO Fetch successful Aug 13 00:18:56.078780 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Aug 13 00:18:56.098114 extend-filesystems[1968]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Aug 13 00:18:56.098114 extend-filesystems[1968]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:18:56.098114 extend-filesystems[1968]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Aug 13 00:18:56.110397 extend-filesystems[1906]: Resized filesystem in /dev/nvme0n1p9 Aug 13 00:18:56.113901 bash[1975]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:18:56.114625 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:18:56.115079 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:18:56.122036 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:18:56.150109 systemd[1]: Starting sshkeys.service... Aug 13 00:18:56.171190 systemd-logind[1913]: Watching system buttons on /dev/input/event0 (Power Button) Aug 13 00:18:56.171242 systemd-logind[1913]: Watching system buttons on /dev/input/event1 (Sleep Button) Aug 13 00:18:56.171578 systemd-logind[1913]: New seat seat0. Aug 13 00:18:56.176568 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:18:56.237879 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:18:56.241412 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:18:56.278871 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1722) Aug 13 00:18:56.284333 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 00:18:56.306821 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 00:18:56.352637 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 00:18:56.355095 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 00:18:56.366998 dbus-daemon[1904]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1947 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 00:18:56.380456 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 00:18:56.465627 polkitd[2007]: Started polkitd version 121 Aug 13 00:18:56.507563 polkitd[2007]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 00:18:56.509902 polkitd[2007]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 00:18:56.518305 polkitd[2007]: Finished loading, compiling and executing 2 rules Aug 13 00:18:56.524934 dbus-daemon[1904]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 00:18:56.526118 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 00:18:56.532641 polkitd[2007]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 00:18:56.547911 systemd-networkd[1845]: eth0: Gained IPv6LL Aug 13 00:18:56.580833 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
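The extend-filesystems pass above grows the root ext4 filesystem in place, while mounted on /, from 553472 to 1489915 4-KiB blocks; in round numbers, straight from the logged figures:

    BLOCK = 4096  # "(4k) blocks" per the resize2fs output above
    old, new = 553_472, 1_489_915
    print(f"{old * BLOCK / 2**30:.2f} GiB -> {new * BLOCK / 2**30:.2f} GiB")  # 2.11 -> 5.68
    # The block-group descriptor table did not even need to grow:
    # old_desc_blocks = 1, new_desc_blocks = 1 in the output above.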
Aug 13 00:18:56.586103 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:18:56.615556 locksmithd[1972]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:18:56.619650 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Aug 13 00:18:56.629406 coreos-metadata[1996]: Aug 13 00:18:56.622 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 13 00:18:56.626305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:18:56.631653 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:18:56.645432 coreos-metadata[1996]: Aug 13 00:18:56.643 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Aug 13 00:18:56.645432 coreos-metadata[1996]: Aug 13 00:18:56.645 INFO Fetch successful Aug 13 00:18:56.645432 coreos-metadata[1996]: Aug 13 00:18:56.645 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 13 00:18:56.646985 coreos-metadata[1996]: Aug 13 00:18:56.646 INFO Fetch successful Aug 13 00:18:56.659827 systemd-hostnamed[1947]: Hostname set to (transient) Aug 13 00:18:56.660602 systemd-resolved[1846]: System hostname changed to 'ip-172-31-18-251'. Aug 13 00:18:56.686219 unknown[1996]: wrote ssh authorized keys file for user: core Aug 13 00:18:56.803182 update-ssh-keys[2079]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:18:56.805838 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:18:56.816851 systemd[1]: Finished sshkeys.service. Aug 13 00:18:56.901063 containerd[1941]: time="2025-08-13T00:18:56.897207913Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 00:18:56.919566 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:18:57.007521 amazon-ssm-agent[2065]: Initializing new seelog logger Aug 13 00:18:57.010639 amazon-ssm-agent[2065]: New Seelog Logger Creation Complete Aug 13 00:18:57.013912 amazon-ssm-agent[2065]: 2025/08/13 00:18:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:57.013912 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:57.013912 amazon-ssm-agent[2065]: 2025/08/13 00:18:57 processing appconfig overrides Aug 13 00:18:57.019089 amazon-ssm-agent[2065]: 2025/08/13 00:18:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:57.019191 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO Proxy environment variables: Aug 13 00:18:57.022175 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:57.022175 amazon-ssm-agent[2065]: 2025/08/13 00:18:57 processing appconfig overrides Aug 13 00:18:57.027972 amazon-ssm-agent[2065]: 2025/08/13 00:18:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:57.027972 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:57.027972 amazon-ssm-agent[2065]: 2025/08/13 00:18:57 processing appconfig overrides Aug 13 00:18:57.045893 amazon-ssm-agent[2065]: 2025/08/13 00:18:57 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:57.045893 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
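[annotation] coreos-metadata-sshkeys@core repeats that token dance for meta-data/public-keys/0/openssh-key and lands the result in core's authorized_keys, which is what the "wrote ssh authorized keys file for user: core" and update-ssh-keys lines record. A rough sketch of the write step, reusing the hypothetical imds_token/imds_get helpers from the previous snippet:

    import os
    import pwd

    def install_ssh_key(user="core"):
        token = imds_token()
        key = imds_get("public-keys/0/openssh-key", token).strip()
        home = pwd.getpwnam(user).pw_dir
        ssh_dir = os.path.join(home, ".ssh")
        os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
        path = os.path.join(ssh_dir, "authorized_keys")
        with open(path, "a") as f:   # append, don't clobber keys Ignition already wrote
            f.write(key + "\n")
        os.chmod(path, 0o600)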
Aug 13 00:18:57.045893 amazon-ssm-agent[2065]: 2025/08/13 00:18:57 processing appconfig overrides Aug 13 00:18:57.125660 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO https_proxy: Aug 13 00:18:57.135598 containerd[1941]: time="2025-08-13T00:18:57.135521458Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:57.140901 containerd[1941]: time="2025-08-13T00:18:57.140462818Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:18:57.140901 containerd[1941]: time="2025-08-13T00:18:57.140567302Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:18:57.140901 containerd[1941]: time="2025-08-13T00:18:57.140629966Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:18:57.143207 containerd[1941]: time="2025-08-13T00:18:57.143106790Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 00:18:57.143323 containerd[1941]: time="2025-08-13T00:18:57.143226358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:57.143509 containerd[1941]: time="2025-08-13T00:18:57.143419510Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:18:57.143509 containerd[1941]: time="2025-08-13T00:18:57.143462086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:57.144787 containerd[1941]: time="2025-08-13T00:18:57.143944858Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:18:57.144787 containerd[1941]: time="2025-08-13T00:18:57.144020290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:57.144787 containerd[1941]: time="2025-08-13T00:18:57.144055498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:18:57.144787 containerd[1941]: time="2025-08-13T00:18:57.144103618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:57.144787 containerd[1941]: time="2025-08-13T00:18:57.144373534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:57.147463 containerd[1941]: time="2025-08-13T00:18:57.147334162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:57.147897 containerd[1941]: time="2025-08-13T00:18:57.147691006Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:18:57.147897 containerd[1941]: time="2025-08-13T00:18:57.147795838Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:18:57.150964 containerd[1941]: time="2025-08-13T00:18:57.148104286Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:18:57.150964 containerd[1941]: time="2025-08-13T00:18:57.148292950Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:18:57.156174 containerd[1941]: time="2025-08-13T00:18:57.155863126Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:18:57.156174 containerd[1941]: time="2025-08-13T00:18:57.155972758Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:18:57.157953 containerd[1941]: time="2025-08-13T00:18:57.156333550Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 00:18:57.157953 containerd[1941]: time="2025-08-13T00:18:57.156385138Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 00:18:57.157953 containerd[1941]: time="2025-08-13T00:18:57.156422050Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:18:57.157953 containerd[1941]: time="2025-08-13T00:18:57.156673654Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:18:57.159321 containerd[1941]: time="2025-08-13T00:18:57.159264862Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:18:57.165560 containerd[1941]: time="2025-08-13T00:18:57.164055034Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 00:18:57.165560 containerd[1941]: time="2025-08-13T00:18:57.164120770Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 00:18:57.165560 containerd[1941]: time="2025-08-13T00:18:57.164163418Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 00:18:57.165560 containerd[1941]: time="2025-08-13T00:18:57.164200594Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:18:57.165560 containerd[1941]: time="2025-08-13T00:18:57.164244298Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:18:57.165560 containerd[1941]: time="2025-08-13T00:18:57.164288386Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:18:57.165560 containerd[1941]: time="2025-08-13T00:18:57.164329750Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:18:57.165560 containerd[1941]: time="2025-08-13T00:18:57.164373310Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Aug 13 00:18:57.165560 containerd[1941]: time="2025-08-13T00:18:57.164421718Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:18:57.165560 containerd[1941]: time="2025-08-13T00:18:57.164461054Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:18:57.165560 containerd[1941]: time="2025-08-13T00:18:57.164491306Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:18:57.165560 containerd[1941]: time="2025-08-13T00:18:57.164550070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:18:57.165560 containerd[1941]: time="2025-08-13T00:18:57.164584858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:18:57.165560 containerd[1941]: time="2025-08-13T00:18:57.164626774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:18:57.166383 containerd[1941]: time="2025-08-13T00:18:57.164670262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:18:57.166383 containerd[1941]: time="2025-08-13T00:18:57.164710786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:18:57.166773 containerd[1941]: time="2025-08-13T00:18:57.166703638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:18:57.170772 containerd[1941]: time="2025-08-13T00:18:57.168829486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:18:57.170772 containerd[1941]: time="2025-08-13T00:18:57.168894178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:18:57.170772 containerd[1941]: time="2025-08-13T00:18:57.168940714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 00:18:57.170772 containerd[1941]: time="2025-08-13T00:18:57.168989074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 00:18:57.170772 containerd[1941]: time="2025-08-13T00:18:57.169031350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:18:57.170772 containerd[1941]: time="2025-08-13T00:18:57.169071562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 00:18:57.170772 containerd[1941]: time="2025-08-13T00:18:57.169124686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:18:57.170772 containerd[1941]: time="2025-08-13T00:18:57.169176118Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 00:18:57.170772 containerd[1941]: time="2025-08-13T00:18:57.169236298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 00:18:57.170772 containerd[1941]: time="2025-08-13T00:18:57.169275778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Aug 13 00:18:57.170772 containerd[1941]: time="2025-08-13T00:18:57.169312450Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:18:57.170772 containerd[1941]: time="2025-08-13T00:18:57.169562890Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:18:57.170772 containerd[1941]: time="2025-08-13T00:18:57.169616290Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 00:18:57.170772 containerd[1941]: time="2025-08-13T00:18:57.169653190Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:18:57.171450 containerd[1941]: time="2025-08-13T00:18:57.169695310Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 00:18:57.171450 containerd[1941]: time="2025-08-13T00:18:57.169729594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:18:57.171450 containerd[1941]: time="2025-08-13T00:18:57.169803310Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 00:18:57.171450 containerd[1941]: time="2025-08-13T00:18:57.169842238Z" level=info msg="NRI interface is disabled by configuration." Aug 13 00:18:57.171450 containerd[1941]: time="2025-08-13T00:18:57.169880926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 00:18:57.171668 containerd[1941]: time="2025-08-13T00:18:57.170599198Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:18:57.175725 containerd[1941]: time="2025-08-13T00:18:57.170730346Z" level=info msg="Connect containerd service" Aug 13 00:18:57.175725 containerd[1941]: time="2025-08-13T00:18:57.174880378Z" level=info msg="using legacy CRI server" Aug 13 00:18:57.175725 containerd[1941]: time="2025-08-13T00:18:57.174908254Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:18:57.175725 containerd[1941]: time="2025-08-13T00:18:57.175062634Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:18:57.179653 containerd[1941]: time="2025-08-13T00:18:57.178387726Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:18:57.179653 containerd[1941]: time="2025-08-13T00:18:57.178801798Z" level=info msg="Start subscribing containerd event" Aug 13 00:18:57.179653 containerd[1941]: time="2025-08-13T00:18:57.178877710Z" level=info msg="Start recovering state" Aug 13 00:18:57.179653 containerd[1941]: time="2025-08-13T00:18:57.179002270Z" level=info msg="Start event monitor" Aug 13 00:18:57.179653 containerd[1941]: time="2025-08-13T00:18:57.179026666Z" level=info msg="Start snapshots syncer" Aug 13 00:18:57.179653 containerd[1941]: time="2025-08-13T00:18:57.179047234Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:18:57.179653 containerd[1941]: time="2025-08-13T00:18:57.179076238Z" level=info msg="Start streaming server" Aug 13 00:18:57.182556 containerd[1941]: time="2025-08-13T00:18:57.182511010Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:18:57.182822 containerd[1941]: time="2025-08-13T00:18:57.182794450Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:18:57.187979 systemd[1]: Started containerd.service - containerd container runtime. 
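[annotation] The CRI config dump above is worth unpacking: runc runs through io.containerd.runc.v2 with SystemdCgroup:true (cgroups driven by systemd), and pod networking is resolved from NetworkPluginConfDir:/etc/cni/net.d, which is empty at this point — hence the init-time "no network config found" error, expected to clear once some CNI plugin installs a conflist. Purely as a sketch, a minimal host-local bridge conflist of the shape containerd's CNI conf syncer watches for, written from Python; the network name, bridge device, and subnet are all made up:

    import json

    conflist = {
        "cniVersion": "0.4.0",
        "name": "examplenet",              # hypothetical network name
        "plugins": [{
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",  # illustrative pod subnet
            },
        }],
    }

    # The cri plugin loads at most NetworkPluginMaxConfNum (1, per the dump
    # above) config files from NetworkPluginConfDir, lexically sorted.
    with open("/etc/cni/net.d/10-examplenet.conflist", "w") as f:
        json.dump(conflist, f, indent=2)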
Aug 13 00:18:57.191073 containerd[1941]: time="2025-08-13T00:18:57.189352618Z" level=info msg="containerd successfully booted in 0.298918s" Aug 13 00:18:57.234381 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO http_proxy: Aug 13 00:18:57.337944 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO no_proxy: Aug 13 00:18:57.436917 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO Checking if agent identity type OnPrem can be assumed Aug 13 00:18:57.535391 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO Checking if agent identity type EC2 can be assumed Aug 13 00:18:57.634674 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO Agent will take identity from EC2 Aug 13 00:18:57.733991 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 00:18:57.833210 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 00:18:57.927160 tar[1921]: linux-arm64/LICENSE Aug 13 00:18:57.929689 tar[1921]: linux-arm64/README.md Aug 13 00:18:57.934793 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 00:18:57.965135 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:18:57.993513 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Aug 13 00:18:57.993513 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Aug 13 00:18:57.993513 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO [amazon-ssm-agent] Starting Core Agent Aug 13 00:18:57.993513 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO [amazon-ssm-agent] registrar detected. Attempting registration Aug 13 00:18:57.993513 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO [Registrar] Starting registrar module Aug 13 00:18:57.993513 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Aug 13 00:18:57.993513 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO [EC2Identity] EC2 registration was successful. Aug 13 00:18:57.993513 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO [CredentialRefresher] credentialRefresher has started Aug 13 00:18:57.993513 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO [CredentialRefresher] Starting credentials refresher loop Aug 13 00:18:57.994187 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 13 00:18:58.033385 amazon-ssm-agent[2065]: 2025-08-13 00:18:57 INFO [CredentialRefresher] Next credential rotation will be in 31.6499695552 minutes Aug 13 00:18:58.885890 ntpd[1908]: Listen normally on 6 eth0 [fe80::44c:7dff:fe3a:c897%2]:123 Aug 13 00:18:58.890999 ntpd[1908]: 13 Aug 00:18:58 ntpd[1908]: Listen normally on 6 eth0 [fe80::44c:7dff:fe3a:c897%2]:123 Aug 13 00:18:58.916175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:18:58.929645 (kubelet)[2136]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:18:59.040116 amazon-ssm-agent[2065]: 2025-08-13 00:18:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Aug 13 00:18:59.142506 amazon-ssm-agent[2065]: 2025-08-13 00:18:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2139) started Aug 13 00:18:59.243031 amazon-ssm-agent[2065]: 2025-08-13 00:18:59 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Aug 13 00:18:59.647418 sshd_keygen[1934]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:18:59.693837 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:18:59.706707 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:18:59.714027 systemd[1]: Started sshd@0-172.31.18.251:22-139.178.89.65:39944.service - OpenSSH per-connection server daemon (139.178.89.65:39944). Aug 13 00:18:59.738948 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:18:59.739359 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:18:59.759266 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:18:59.802970 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:18:59.814976 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:18:59.825416 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:18:59.828585 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:18:59.831259 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:18:59.834272 systemd[1]: Startup finished in 1.279s (kernel) + 9.235s (initrd) + 9.826s (userspace) = 20.341s. Aug 13 00:18:59.978965 sshd[2162]: Accepted publickey for core from 139.178.89.65 port 39944 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:59.986264 sshd[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:19:00.005814 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:19:00.013401 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:19:00.024048 systemd-logind[1913]: New session 1 of user core. Aug 13 00:19:00.058822 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:19:00.073861 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:19:00.081256 (systemd)[2177]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:19:00.127856 kubelet[2136]: E0813 00:19:00.127652 2136 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:19:00.131116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:19:00.131427 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:19:00.131960 systemd[1]: kubelet.service: Consumed 1.458s CPU time. 
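[annotation] This kubelet exit is the stock first-boot failure: /var/lib/kubelet/config.yaml is only written during cluster bootstrap (e.g. by kubeadm), so until then the unit crash-loops exactly as the restart counters below show. For illustration only, a bare-bones KubeletConfiguration that would satisfy the file check, emitted as JSON (which kubelet accepts, JSON being a YAML subset); cgroupDriver matches the SystemdCgroup:true runc options logged earlier, and the rest is an assumption, not this machine's eventual bootstrap config:

    import json
    import os

    config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "cgroupDriver": "systemd",          # matches runc's SystemdCgroup:true
        "authentication": {"anonymous": {"enabled": False}},
    }

    os.makedirs("/var/lib/kubelet", exist_ok=True)
    with open("/var/lib/kubelet/config.yaml", "w") as f:
        json.dump(config, f, indent=2)      # valid YAML, since YAML is a superset of JSON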
Aug 13 00:19:00.311698 systemd[2177]: Queued start job for default target default.target. Aug 13 00:19:00.321835 systemd[2177]: Created slice app.slice - User Application Slice. Aug 13 00:19:00.321903 systemd[2177]: Reached target paths.target - Paths. Aug 13 00:19:00.321937 systemd[2177]: Reached target timers.target - Timers. Aug 13 00:19:00.324436 systemd[2177]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:19:00.345420 systemd[2177]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:19:00.345878 systemd[2177]: Reached target sockets.target - Sockets. Aug 13 00:19:00.345918 systemd[2177]: Reached target basic.target - Basic System. Aug 13 00:19:00.346018 systemd[2177]: Reached target default.target - Main User Target. Aug 13 00:19:00.346085 systemd[2177]: Startup finished in 251ms. Aug 13 00:19:00.346671 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:19:00.357993 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:19:00.513300 systemd[1]: Started sshd@1-172.31.18.251:22-139.178.89.65:47952.service - OpenSSH per-connection server daemon (139.178.89.65:47952). Aug 13 00:19:00.691479 sshd[2189]: Accepted publickey for core from 139.178.89.65 port 47952 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:19:00.693960 sshd[2189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:19:00.701554 systemd-logind[1913]: New session 2 of user core. Aug 13 00:19:00.714007 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:19:00.840151 sshd[2189]: pam_unix(sshd:session): session closed for user core Aug 13 00:19:00.846421 systemd[1]: sshd@1-172.31.18.251:22-139.178.89.65:47952.service: Deactivated successfully. Aug 13 00:19:00.849218 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:19:00.850681 systemd-logind[1913]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:19:00.852572 systemd-logind[1913]: Removed session 2. Aug 13 00:19:00.879262 systemd[1]: Started sshd@2-172.31.18.251:22-139.178.89.65:47954.service - OpenSSH per-connection server daemon (139.178.89.65:47954). Aug 13 00:19:01.063591 sshd[2196]: Accepted publickey for core from 139.178.89.65 port 47954 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:19:01.065591 sshd[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:19:01.075075 systemd-logind[1913]: New session 3 of user core. Aug 13 00:19:01.083029 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:19:01.202021 sshd[2196]: pam_unix(sshd:session): session closed for user core Aug 13 00:19:01.207214 systemd-logind[1913]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:19:01.208141 systemd[1]: sshd@2-172.31.18.251:22-139.178.89.65:47954.service: Deactivated successfully. Aug 13 00:19:01.211051 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:19:01.215495 systemd-logind[1913]: Removed session 3. Aug 13 00:19:01.242272 systemd[1]: Started sshd@3-172.31.18.251:22-139.178.89.65:47958.service - OpenSSH per-connection server daemon (139.178.89.65:47958). Aug 13 00:19:01.414384 sshd[2203]: Accepted publickey for core from 139.178.89.65 port 47958 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:19:01.417566 sshd[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:19:01.426873 systemd-logind[1913]: New session 4 of user core. 
Aug 13 00:19:01.437007 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:19:01.561059 sshd[2203]: pam_unix(sshd:session): session closed for user core Aug 13 00:19:01.567734 systemd[1]: sshd@3-172.31.18.251:22-139.178.89.65:47958.service: Deactivated successfully. Aug 13 00:19:01.571426 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:19:01.572806 systemd-logind[1913]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:19:01.574545 systemd-logind[1913]: Removed session 4. Aug 13 00:19:01.606213 systemd[1]: Started sshd@4-172.31.18.251:22-139.178.89.65:47974.service - OpenSSH per-connection server daemon (139.178.89.65:47974). Aug 13 00:19:01.772388 sshd[2211]: Accepted publickey for core from 139.178.89.65 port 47974 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:19:01.774413 sshd[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:19:01.782988 systemd-logind[1913]: New session 5 of user core. Aug 13 00:19:01.793019 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:19:01.937106 sudo[2214]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:19:01.937793 sudo[2214]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:19:01.950499 sudo[2214]: pam_unix(sudo:session): session closed for user root Aug 13 00:19:01.974342 sshd[2211]: pam_unix(sshd:session): session closed for user core Aug 13 00:19:01.980349 systemd[1]: sshd@4-172.31.18.251:22-139.178.89.65:47974.service: Deactivated successfully. Aug 13 00:19:01.983544 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:19:01.987438 systemd-logind[1913]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:19:01.990890 systemd-logind[1913]: Removed session 5. Aug 13 00:19:02.015295 systemd[1]: Started sshd@5-172.31.18.251:22-139.178.89.65:47976.service - OpenSSH per-connection server daemon (139.178.89.65:47976). Aug 13 00:19:02.195888 sshd[2219]: Accepted publickey for core from 139.178.89.65 port 47976 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:19:02.198553 sshd[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:19:02.206650 systemd-logind[1913]: New session 6 of user core. Aug 13 00:19:02.217052 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:19:02.321148 sudo[2223]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:19:02.321824 sudo[2223]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:19:02.328020 sudo[2223]: pam_unix(sudo:session): session closed for user root Aug 13 00:19:02.338177 sudo[2222]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 00:19:02.338833 sudo[2222]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:19:02.365218 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 00:19:02.367883 auditctl[2226]: No rules Aug 13 00:19:02.368580 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:19:02.369097 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 00:19:02.377482 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Aug 13 00:19:02.439705 augenrules[2244]: No rules Aug 13 00:19:02.442555 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:19:02.445211 sudo[2222]: pam_unix(sudo:session): session closed for user root Aug 13 00:19:02.469071 sshd[2219]: pam_unix(sshd:session): session closed for user core Aug 13 00:19:02.474669 systemd[1]: sshd@5-172.31.18.251:22-139.178.89.65:47976.service: Deactivated successfully. Aug 13 00:19:02.477510 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:19:02.480814 systemd-logind[1913]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:19:02.482577 systemd-logind[1913]: Removed session 6. Aug 13 00:19:02.511478 systemd[1]: Started sshd@6-172.31.18.251:22-139.178.89.65:47982.service - OpenSSH per-connection server daemon (139.178.89.65:47982). Aug 13 00:19:02.676057 sshd[2252]: Accepted publickey for core from 139.178.89.65 port 47982 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:19:02.678603 sshd[2252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:19:02.685946 systemd-logind[1913]: New session 7 of user core. Aug 13 00:19:02.698995 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:19:02.802232 sudo[2255]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:19:02.803912 sudo[2255]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:19:03.149057 systemd-resolved[1846]: Clock change detected. Flushing caches. Aug 13 00:19:03.778259 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:19:03.791277 (dockerd)[2271]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:19:04.350320 dockerd[2271]: time="2025-08-13T00:19:04.350245625Z" level=info msg="Starting up" Aug 13 00:19:04.642119 dockerd[2271]: time="2025-08-13T00:19:04.641984599Z" level=info msg="Loading containers: start." Aug 13 00:19:04.827995 kernel: Initializing XFRM netlink socket Aug 13 00:19:04.902707 (udev-worker)[2293]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:19:04.983907 systemd-networkd[1845]: docker0: Link UP Aug 13 00:19:05.009152 dockerd[2271]: time="2025-08-13T00:19:05.009095393Z" level=info msg="Loading containers: done." Aug 13 00:19:05.031437 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1357817995-merged.mount: Deactivated successfully. Aug 13 00:19:05.036789 dockerd[2271]: time="2025-08-13T00:19:05.036233105Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:19:05.036789 dockerd[2271]: time="2025-08-13T00:19:05.036369521Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 00:19:05.036789 dockerd[2271]: time="2025-08-13T00:19:05.036546269Z" level=info msg="Daemon has completed initialization" Aug 13 00:19:05.093801 dockerd[2271]: time="2025-08-13T00:19:05.093534785Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:19:05.095704 systemd[1]: Started docker.service - Docker Application Container Engine. 
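[annotation] Once dockerd reports "API listen on /run/docker.sock", the daemon is serving its HTTP API over a unix socket, so liveness can be checked without any Docker client at all. A small stdlib-Python probe speaking HTTP/1.0 by hand; /_ping is the engine's documented health endpoint and returns a 200 with body "OK" when the daemon is healthy:

    import socket

    def docker_ping(sock_path="/run/docker.sock"):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sock_path)
        # HTTP/1.0 so the daemon closes the connection after one response.
        s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        reply = b""
        while chunk := s.recv(4096):
            reply += chunk
        s.close()
        return reply.decode(errors="replace")

    print(docker_ping())    # expect a 200 status line and body "OK"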
Aug 13 00:19:06.255775 containerd[1941]: time="2025-08-13T00:19:06.255702667Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:19:06.902324 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3343618124.mount: Deactivated successfully. Aug 13 00:19:08.135700 containerd[1941]: time="2025-08-13T00:19:08.135613172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:08.137691 containerd[1941]: time="2025-08-13T00:19:08.137631860Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=25651813" Aug 13 00:19:08.139057 containerd[1941]: time="2025-08-13T00:19:08.138966956Z" level=info msg="ImageCreate event name:\"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:08.145969 containerd[1941]: time="2025-08-13T00:19:08.145889708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:08.148308 containerd[1941]: time="2025-08-13T00:19:08.148161680Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"25648613\" in 1.892392629s" Aug 13 00:19:08.148308 containerd[1941]: time="2025-08-13T00:19:08.148227176Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\"" Aug 13 00:19:08.153478 containerd[1941]: time="2025-08-13T00:19:08.153313664Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:19:09.618661 containerd[1941]: time="2025-08-13T00:19:09.618595692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:09.620731 containerd[1941]: time="2025-08-13T00:19:09.620651016Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=22460283" Aug 13 00:19:09.621800 containerd[1941]: time="2025-08-13T00:19:09.621540648Z" level=info msg="ImageCreate event name:\"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:09.627181 containerd[1941]: time="2025-08-13T00:19:09.627082008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:09.629820 containerd[1941]: time="2025-08-13T00:19:09.629567904Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"23996073\" in 1.4761706s" 
Aug 13 00:19:09.629820 containerd[1941]: time="2025-08-13T00:19:09.629626404Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\"" Aug 13 00:19:09.630742 containerd[1941]: time="2025-08-13T00:19:09.630467940Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:19:10.645943 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:19:10.655164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:19:11.059973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:11.066907 containerd[1941]: time="2025-08-13T00:19:11.066830003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:11.069040 containerd[1941]: time="2025-08-13T00:19:11.068938487Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=17125089" Aug 13 00:19:11.072392 containerd[1941]: time="2025-08-13T00:19:11.072305363Z" level=info msg="ImageCreate event name:\"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:11.075851 (kubelet)[2479]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:19:11.082400 containerd[1941]: time="2025-08-13T00:19:11.082313171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:11.087466 containerd[1941]: time="2025-08-13T00:19:11.087256535Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"18660897\" in 1.456698691s" Aug 13 00:19:11.087466 containerd[1941]: time="2025-08-13T00:19:11.087324251Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\"" Aug 13 00:19:11.088472 containerd[1941]: time="2025-08-13T00:19:11.088189067Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:19:11.160174 kubelet[2479]: E0813 00:19:11.160077 2479 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:19:11.166985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:19:11.167345 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:19:12.546860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1654675068.mount: Deactivated successfully. 
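[annotation] Each pull above ends in one summary line carrying the repo tag, the content-addressed repo digest, the unpacked size, and the wall-clock duration; when scraping these out of a journal, note that quotes inside msg are backslash-escaped. A throwaway parser for exactly this line shape — the pattern tracks the format shown here, not any stable containerd interface:

    import re

    SAMPLE = (
        'msg="Pulled image \\"registry.k8s.io/pause:3.10\\" ... '
        'size \\"267933\\" in 497.348235ms"'
    )

    PAT = re.compile(
        r'Pulled image "(?P<image>[^"]+)".*size "(?P<size>\d+)" '
        r'in (?P<dur>[\d.]+(?:ms|s))'
    )

    m = PAT.search(SAMPLE.replace('\\"', '"'))   # undo the journal's quote escaping first
    if m:
        print(m.group("image"), int(m.group("size")), m.group("dur"))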
Aug 13 00:19:13.082467 containerd[1941]: time="2025-08-13T00:19:13.082411669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:13.084671 containerd[1941]: time="2025-08-13T00:19:13.084608269Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=26915993" Aug 13 00:19:13.085824 containerd[1941]: time="2025-08-13T00:19:13.085745665Z" level=info msg="ImageCreate event name:\"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:13.089634 containerd[1941]: time="2025-08-13T00:19:13.089573185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:13.091650 containerd[1941]: time="2025-08-13T00:19:13.091578025Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"26915012\" in 2.00333149s" Aug 13 00:19:13.091789 containerd[1941]: time="2025-08-13T00:19:13.091647889Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\"" Aug 13 00:19:13.092730 containerd[1941]: time="2025-08-13T00:19:13.092647669Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:19:13.690017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3849825469.mount: Deactivated successfully. 
Aug 13 00:19:14.800469 containerd[1941]: time="2025-08-13T00:19:14.798718697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:14.801737 containerd[1941]: time="2025-08-13T00:19:14.801665381Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Aug 13 00:19:14.803052 containerd[1941]: time="2025-08-13T00:19:14.802977677Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:14.808556 containerd[1941]: time="2025-08-13T00:19:14.808449821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:14.811119 containerd[1941]: time="2025-08-13T00:19:14.811060481Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.718325272s" Aug 13 00:19:14.811400 containerd[1941]: time="2025-08-13T00:19:14.811259537Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 13 00:19:14.813186 containerd[1941]: time="2025-08-13T00:19:14.813121937Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:19:15.288236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3263683214.mount: Deactivated successfully. 
Aug 13 00:19:15.295948 containerd[1941]: time="2025-08-13T00:19:15.295876456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:15.297550 containerd[1941]: time="2025-08-13T00:19:15.297499360Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Aug 13 00:19:15.298360 containerd[1941]: time="2025-08-13T00:19:15.298082872Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:15.303809 containerd[1941]: time="2025-08-13T00:19:15.303461992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:15.310613 containerd[1941]: time="2025-08-13T00:19:15.310536868Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 497.348235ms" Aug 13 00:19:15.310613 containerd[1941]: time="2025-08-13T00:19:15.310606324Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 00:19:15.312164 containerd[1941]: time="2025-08-13T00:19:15.311893984Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:19:15.829722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2186756980.mount: Deactivated successfully. Aug 13 00:19:17.804468 containerd[1941]: time="2025-08-13T00:19:17.804394976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:17.806833 containerd[1941]: time="2025-08-13T00:19:17.806740352Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406465" Aug 13 00:19:17.807226 containerd[1941]: time="2025-08-13T00:19:17.807152996Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:17.813416 containerd[1941]: time="2025-08-13T00:19:17.813360968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:17.816259 containerd[1941]: time="2025-08-13T00:19:17.816070616Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.504126844s" Aug 13 00:19:17.816259 containerd[1941]: time="2025-08-13T00:19:17.816126440Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Aug 13 00:19:21.417708 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Aug 13 00:19:21.427233 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:19:21.803118 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:21.816289 (kubelet)[2631]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:19:21.951685 kubelet[2631]: E0813 00:19:21.951580 2631 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:19:21.956024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:19:21.956322 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:19:24.419712 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:24.431284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:19:24.484533 systemd[1]: Reloading requested from client PID 2645 ('systemctl') (unit session-7.scope)... Aug 13 00:19:24.484570 systemd[1]: Reloading... Aug 13 00:19:24.712799 zram_generator::config[2689]: No configuration found. Aug 13 00:19:24.960467 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:19:25.133522 systemd[1]: Reloading finished in 648 ms. Aug 13 00:19:25.223794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:25.234587 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:19:25.237507 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:19:25.238022 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:25.244363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:19:25.565929 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:25.579575 (kubelet)[2751]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:19:25.658570 kubelet[2751]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:19:25.659085 kubelet[2751]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:19:25.659180 kubelet[2751]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:19:25.660810 kubelet[2751]: I0813 00:19:25.659372 2751 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:19:26.952233 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
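[annotation] The three deprecation warnings on this restart all point the same way: those flags now belong in the file handed to --config. Against the v1beta1 schema (and kubelet v1.31.8 as logged below), the two that have config-file equivalents map roughly as follows; this extends the earlier hypothetical config sketch, and the endpoint value is just the conventional containerd socket:

    # Continuation of the KubeletConfiguration dict from the earlier sketch.
    config.update({
        # replaces --container-runtime-endpoint=...
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
        # replaces --volume-plugin-dir=... (the Flexvolume path probed below)
        "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    })

--pod-infra-container-image is the odd one out: per its own warning it has no config replacement and simply goes away once the image garbage collector reads the sandbox image from CRI.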
Aug 13 00:19:27.135792 kubelet[2751]: I0813 00:19:27.134779 2751 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:19:27.135792 kubelet[2751]: I0813 00:19:27.134830 2751 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:19:27.135792 kubelet[2751]: I0813 00:19:27.135239 2751 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:19:27.182502 kubelet[2751]: E0813 00:19:27.182434 2751 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.251:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.251:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:19:27.184475 kubelet[2751]: I0813 00:19:27.184436 2751 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:19:27.195630 kubelet[2751]: E0813 00:19:27.195573 2751 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:19:27.195885 kubelet[2751]: I0813 00:19:27.195864 2751 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:19:27.203450 kubelet[2751]: I0813 00:19:27.202939 2751 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:19:27.205379 kubelet[2751]: I0813 00:19:27.205347 2751 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:19:27.205923 kubelet[2751]: I0813 00:19:27.205872 2751 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:19:27.206787 kubelet[2751]: I0813 00:19:27.206044 2751 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-18-251","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:19:27.206787 kubelet[2751]: I0813 00:19:27.206470 2751 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:19:27.206787 kubelet[2751]: I0813 00:19:27.206491 2751 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:19:27.207254 kubelet[2751]: I0813 00:19:27.207232 2751 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:19:27.212261 kubelet[2751]: I0813 00:19:27.212227 2751 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:19:27.212432 kubelet[2751]: I0813 00:19:27.212412 2751 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:19:27.212548 kubelet[2751]: I0813 00:19:27.212530 2751 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:19:27.212822 kubelet[2751]: I0813 00:19:27.212791 2751 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:19:27.222357 kubelet[2751]: W0813 00:19:27.222260 2751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-251&limit=500&resourceVersion=0": dial tcp 172.31.18.251:6443: connect: connection refused Aug 13 00:19:27.222509 kubelet[2751]: E0813 00:19:27.222368 2751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-251&limit=500&resourceVersion=0\": dial tcp 172.31.18.251:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:19:27.223810 kubelet[2751]: I0813 00:19:27.223684 2751 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:19:27.225043 kubelet[2751]: I0813 00:19:27.224898 2751 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:19:27.225368 kubelet[2751]: W0813 
00:19:27.225203 2751 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:19:27.227252 kubelet[2751]: I0813 00:19:27.227202 2751 server.go:1274] "Started kubelet" Aug 13 00:19:27.230604 kubelet[2751]: W0813 00:19:27.230407 2751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.251:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.251:6443: connect: connection refused Aug 13 00:19:27.230604 kubelet[2751]: E0813 00:19:27.230542 2751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.251:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.251:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:19:27.232852 kubelet[2751]: I0813 00:19:27.231736 2751 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:19:27.232852 kubelet[2751]: I0813 00:19:27.232333 2751 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:19:27.232852 kubelet[2751]: I0813 00:19:27.232566 2751 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:19:27.234555 kubelet[2751]: I0813 00:19:27.234496 2751 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:19:27.238278 kubelet[2751]: I0813 00:19:27.238235 2751 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:19:27.253290 kubelet[2751]: I0813 00:19:27.253245 2751 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:19:27.253577 kubelet[2751]: I0813 00:19:27.253284 2751 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:19:27.255239 kubelet[2751]: E0813 00:19:27.253738 2751 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-251\" not found" Aug 13 00:19:27.255239 kubelet[2751]: I0813 00:19:27.253336 2751 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:19:27.255239 kubelet[2751]: I0813 00:19:27.254254 2751 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:19:27.259870 kubelet[2751]: I0813 00:19:27.259834 2751 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:19:27.260239 kubelet[2751]: I0813 00:19:27.260204 2751 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:19:27.263278 kubelet[2751]: E0813 00:19:27.261076 2751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.251:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.251:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-251.185b2b8da9a025f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-251,UID:ip-172-31-18-251,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-251,},FirstTimestamp:2025-08-13 00:19:27.227168247 
+0000 UTC m=+1.641002853,LastTimestamp:2025-08-13 00:19:27.227168247 +0000 UTC m=+1.641002853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-251,}" Aug 13 00:19:27.263689 kubelet[2751]: E0813 00:19:27.263644 2751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-251?timeout=10s\": dial tcp 172.31.18.251:6443: connect: connection refused" interval="200ms" Aug 13 00:19:27.266382 kubelet[2751]: W0813 00:19:27.266276 2751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.251:6443: connect: connection refused Aug 13 00:19:27.266845 kubelet[2751]: E0813 00:19:27.266608 2751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.251:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:19:27.267117 kubelet[2751]: E0813 00:19:27.267086 2751 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:19:27.271801 kubelet[2751]: I0813 00:19:27.270402 2751 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:19:27.285685 kubelet[2751]: I0813 00:19:27.285634 2751 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:19:27.292684 kubelet[2751]: I0813 00:19:27.292642 2751 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:19:27.292955 kubelet[2751]: I0813 00:19:27.292936 2751 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:19:27.293105 kubelet[2751]: I0813 00:19:27.293086 2751 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:19:27.293484 kubelet[2751]: E0813 00:19:27.293451 2751 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:19:27.295556 kubelet[2751]: W0813 00:19:27.295514 2751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.251:6443: connect: connection refused Aug 13 00:19:27.296190 kubelet[2751]: E0813 00:19:27.296149 2751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.251:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:19:27.303574 kubelet[2751]: I0813 00:19:27.303531 2751 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:19:27.303574 kubelet[2751]: I0813 00:19:27.303567 2751 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:19:27.303851 kubelet[2751]: I0813 00:19:27.303599 2751 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:19:27.309405 kubelet[2751]: I0813 00:19:27.309239 2751 policy_none.go:49] "None policy: Start" Aug 13 00:19:27.310811 kubelet[2751]: I0813 00:19:27.310695 2751 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:19:27.310811 kubelet[2751]: I0813 00:19:27.310749 2751 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:19:27.325267 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:19:27.344572 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 00:19:27.351808 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 00:19:27.353867 kubelet[2751]: E0813 00:19:27.353819 2751 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-251\" not found" Aug 13 00:19:27.362394 kubelet[2751]: I0813 00:19:27.362353 2751 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:19:27.362674 kubelet[2751]: I0813 00:19:27.362647 2751 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:19:27.362783 kubelet[2751]: I0813 00:19:27.362681 2751 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:19:27.365265 kubelet[2751]: I0813 00:19:27.365083 2751 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:19:27.366308 kubelet[2751]: E0813 00:19:27.366073 2751 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-251\" not found" Aug 13 00:19:27.414659 systemd[1]: Created slice kubepods-burstable-pod2f4fc546174c6751c1bc7525474a4ed0.slice - libcontainer container kubepods-burstable-pod2f4fc546174c6751c1bc7525474a4ed0.slice. 
Aug 13 00:19:27.435246 systemd[1]: Created slice kubepods-burstable-pod03058d5e93e06300c19a152925600033.slice - libcontainer container kubepods-burstable-pod03058d5e93e06300c19a152925600033.slice. Aug 13 00:19:27.448410 systemd[1]: Created slice kubepods-burstable-poda5cc28d884bb847b50b8fb4d758425c0.slice - libcontainer container kubepods-burstable-poda5cc28d884bb847b50b8fb4d758425c0.slice. Aug 13 00:19:27.456120 kubelet[2751]: I0813 00:19:27.455896 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f4fc546174c6751c1bc7525474a4ed0-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-251\" (UID: \"2f4fc546174c6751c1bc7525474a4ed0\") " pod="kube-system/kube-scheduler-ip-172-31-18-251" Aug 13 00:19:27.456120 kubelet[2751]: I0813 00:19:27.455968 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03058d5e93e06300c19a152925600033-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-251\" (UID: \"03058d5e93e06300c19a152925600033\") " pod="kube-system/kube-apiserver-ip-172-31-18-251" Aug 13 00:19:27.456120 kubelet[2751]: I0813 00:19:27.456007 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a5cc28d884bb847b50b8fb4d758425c0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-251\" (UID: \"a5cc28d884bb847b50b8fb4d758425c0\") " pod="kube-system/kube-controller-manager-ip-172-31-18-251" Aug 13 00:19:27.456120 kubelet[2751]: I0813 00:19:27.456050 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5cc28d884bb847b50b8fb4d758425c0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-251\" (UID: \"a5cc28d884bb847b50b8fb4d758425c0\") " pod="kube-system/kube-controller-manager-ip-172-31-18-251" Aug 13 00:19:27.457263 kubelet[2751]: I0813 00:19:27.456917 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a5cc28d884bb847b50b8fb4d758425c0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-251\" (UID: \"a5cc28d884bb847b50b8fb4d758425c0\") " pod="kube-system/kube-controller-manager-ip-172-31-18-251" Aug 13 00:19:27.457263 kubelet[2751]: I0813 00:19:27.457017 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5cc28d884bb847b50b8fb4d758425c0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-251\" (UID: \"a5cc28d884bb847b50b8fb4d758425c0\") " pod="kube-system/kube-controller-manager-ip-172-31-18-251" Aug 13 00:19:27.457263 kubelet[2751]: I0813 00:19:27.457058 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03058d5e93e06300c19a152925600033-ca-certs\") pod \"kube-apiserver-ip-172-31-18-251\" (UID: \"03058d5e93e06300c19a152925600033\") " pod="kube-system/kube-apiserver-ip-172-31-18-251" Aug 13 00:19:27.457263 kubelet[2751]: I0813 00:19:27.457098 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/03058d5e93e06300c19a152925600033-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-251\" (UID: \"03058d5e93e06300c19a152925600033\") " pod="kube-system/kube-apiserver-ip-172-31-18-251" Aug 13 00:19:27.457263 kubelet[2751]: I0813 00:19:27.457133 2751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5cc28d884bb847b50b8fb4d758425c0-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-251\" (UID: \"a5cc28d884bb847b50b8fb4d758425c0\") " pod="kube-system/kube-controller-manager-ip-172-31-18-251" Aug 13 00:19:27.464849 kubelet[2751]: E0813 00:19:27.464596 2751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-251?timeout=10s\": dial tcp 172.31.18.251:6443: connect: connection refused" interval="400ms" Aug 13 00:19:27.465732 kubelet[2751]: I0813 00:19:27.465678 2751 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-251" Aug 13 00:19:27.466507 kubelet[2751]: E0813 00:19:27.466457 2751 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.251:6443/api/v1/nodes\": dial tcp 172.31.18.251:6443: connect: connection refused" node="ip-172-31-18-251" Aug 13 00:19:27.562598 kubelet[2751]: E0813 00:19:27.562419 2751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.251:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.251:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-251.185b2b8da9a025f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-251,UID:ip-172-31-18-251,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-251,},FirstTimestamp:2025-08-13 00:19:27.227168247 +0000 UTC m=+1.641002853,LastTimestamp:2025-08-13 00:19:27.227168247 +0000 UTC m=+1.641002853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-251,}" Aug 13 00:19:27.669018 kubelet[2751]: I0813 00:19:27.668915 2751 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-251" Aug 13 00:19:27.669487 kubelet[2751]: E0813 00:19:27.669424 2751 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.251:6443/api/v1/nodes\": dial tcp 172.31.18.251:6443: connect: connection refused" node="ip-172-31-18-251" Aug 13 00:19:27.730011 containerd[1941]: time="2025-08-13T00:19:27.729854406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-251,Uid:2f4fc546174c6751c1bc7525474a4ed0,Namespace:kube-system,Attempt:0,}" Aug 13 00:19:27.741829 containerd[1941]: time="2025-08-13T00:19:27.741717102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-251,Uid:03058d5e93e06300c19a152925600033,Namespace:kube-system,Attempt:0,}" Aug 13 00:19:27.753536 containerd[1941]: time="2025-08-13T00:19:27.753132498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-251,Uid:a5cc28d884bb847b50b8fb4d758425c0,Namespace:kube-system,Attempt:0,}" Aug 13 00:19:27.865630 kubelet[2751]: E0813 00:19:27.865504 2751 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://172.31.18.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-251?timeout=10s\": dial tcp 172.31.18.251:6443: connect: connection refused" interval="800ms" Aug 13 00:19:28.072966 kubelet[2751]: I0813 00:19:28.072656 2751 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-251" Aug 13 00:19:28.073299 kubelet[2751]: E0813 00:19:28.073216 2751 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.251:6443/api/v1/nodes\": dial tcp 172.31.18.251:6443: connect: connection refused" node="ip-172-31-18-251" Aug 13 00:19:28.218588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3972594484.mount: Deactivated successfully. Aug 13 00:19:28.227216 containerd[1941]: time="2025-08-13T00:19:28.227139412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:19:28.228993 containerd[1941]: time="2025-08-13T00:19:28.228936664Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:19:28.230647 containerd[1941]: time="2025-08-13T00:19:28.230339992Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Aug 13 00:19:28.230831 containerd[1941]: time="2025-08-13T00:19:28.230693056Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:19:28.232579 containerd[1941]: time="2025-08-13T00:19:28.232429048Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:19:28.233884 containerd[1941]: time="2025-08-13T00:19:28.233706472Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:19:28.239302 containerd[1941]: time="2025-08-13T00:19:28.239238340Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:19:28.243344 containerd[1941]: time="2025-08-13T00:19:28.242687548Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 500.83871ms" Aug 13 00:19:28.246031 containerd[1941]: time="2025-08-13T00:19:28.245975356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:19:28.250583 containerd[1941]: time="2025-08-13T00:19:28.250072132Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", 
size \"268403\" in 520.096802ms" Aug 13 00:19:28.253664 containerd[1941]: time="2025-08-13T00:19:28.253530760Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 500.28803ms" Aug 13 00:19:28.325994 kubelet[2751]: W0813 00:19:28.325573 2751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.251:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.251:6443: connect: connection refused Aug 13 00:19:28.325994 kubelet[2751]: E0813 00:19:28.325716 2751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.251:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.251:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:19:28.451694 kubelet[2751]: W0813 00:19:28.451584 2751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.251:6443: connect: connection refused Aug 13 00:19:28.451694 kubelet[2751]: E0813 00:19:28.451666 2751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.251:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.251:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:19:28.467162 containerd[1941]: time="2025-08-13T00:19:28.466734749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:28.467845 containerd[1941]: time="2025-08-13T00:19:28.466994201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:28.467845 containerd[1941]: time="2025-08-13T00:19:28.467052965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:28.469834 containerd[1941]: time="2025-08-13T00:19:28.469657385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:28.480520 containerd[1941]: time="2025-08-13T00:19:28.480356837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:28.480520 containerd[1941]: time="2025-08-13T00:19:28.480459341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:28.480880 containerd[1941]: time="2025-08-13T00:19:28.480726185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:28.481482 containerd[1941]: time="2025-08-13T00:19:28.481350821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:28.482312 containerd[1941]: time="2025-08-13T00:19:28.482147093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:28.482312 containerd[1941]: time="2025-08-13T00:19:28.482250941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:28.485247 containerd[1941]: time="2025-08-13T00:19:28.482288417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:28.486247 containerd[1941]: time="2025-08-13T00:19:28.486106733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:28.516893 systemd[1]: Started cri-containerd-c74bdd66dc64a3f802d451c4a5037d53b3625ce2b418390a25f54aaf9d61f0eb.scope - libcontainer container c74bdd66dc64a3f802d451c4a5037d53b3625ce2b418390a25f54aaf9d61f0eb. Aug 13 00:19:28.540104 systemd[1]: Started cri-containerd-354fed5e6eca83a4168d60f496e8493d4e29ad4721dad1e2d832a4cfa214b02a.scope - libcontainer container 354fed5e6eca83a4168d60f496e8493d4e29ad4721dad1e2d832a4cfa214b02a. Aug 13 00:19:28.566214 systemd[1]: Started cri-containerd-9baf0beff4028447d2109af2eb9e2ef182cd41084d38eee9f1553c16cb386c00.scope - libcontainer container 9baf0beff4028447d2109af2eb9e2ef182cd41084d38eee9f1553c16cb386c00. Aug 13 00:19:28.598934 kubelet[2751]: W0813 00:19:28.597840 2751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.251:6443: connect: connection refused Aug 13 00:19:28.598934 kubelet[2751]: E0813 00:19:28.597921 2751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.251:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.251:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:19:28.635575 containerd[1941]: time="2025-08-13T00:19:28.635373954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-251,Uid:03058d5e93e06300c19a152925600033,Namespace:kube-system,Attempt:0,} returns sandbox id \"c74bdd66dc64a3f802d451c4a5037d53b3625ce2b418390a25f54aaf9d61f0eb\"" Aug 13 00:19:28.645799 containerd[1941]: time="2025-08-13T00:19:28.644111082Z" level=info msg="CreateContainer within sandbox \"c74bdd66dc64a3f802d451c4a5037d53b3625ce2b418390a25f54aaf9d61f0eb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:19:28.666548 kubelet[2751]: E0813 00:19:28.666470 2751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-251?timeout=10s\": dial tcp 172.31.18.251:6443: connect: connection refused" interval="1.6s" Aug 13 00:19:28.687138 containerd[1941]: time="2025-08-13T00:19:28.687073458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-251,Uid:a5cc28d884bb847b50b8fb4d758425c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"9baf0beff4028447d2109af2eb9e2ef182cd41084d38eee9f1553c16cb386c00\"" Aug 13 00:19:28.687408 containerd[1941]: 
time="2025-08-13T00:19:28.687355986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-251,Uid:2f4fc546174c6751c1bc7525474a4ed0,Namespace:kube-system,Attempt:0,} returns sandbox id \"354fed5e6eca83a4168d60f496e8493d4e29ad4721dad1e2d832a4cfa214b02a\"" Aug 13 00:19:28.690796 containerd[1941]: time="2025-08-13T00:19:28.690724026Z" level=info msg="CreateContainer within sandbox \"c74bdd66dc64a3f802d451c4a5037d53b3625ce2b418390a25f54aaf9d61f0eb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e76917d84e56aafa0dbfbca155a55d80e7af7fcc9c54e796cf4055766ebbd505\"" Aug 13 00:19:28.692925 containerd[1941]: time="2025-08-13T00:19:28.692877594Z" level=info msg="CreateContainer within sandbox \"354fed5e6eca83a4168d60f496e8493d4e29ad4721dad1e2d832a4cfa214b02a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:19:28.693631 containerd[1941]: time="2025-08-13T00:19:28.693343698Z" level=info msg="CreateContainer within sandbox \"9baf0beff4028447d2109af2eb9e2ef182cd41084d38eee9f1553c16cb386c00\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:19:28.693746 containerd[1941]: time="2025-08-13T00:19:28.693458334Z" level=info msg="StartContainer for \"e76917d84e56aafa0dbfbca155a55d80e7af7fcc9c54e796cf4055766ebbd505\"" Aug 13 00:19:28.729376 kubelet[2751]: W0813 00:19:28.729256 2751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-251&limit=500&resourceVersion=0": dial tcp 172.31.18.251:6443: connect: connection refused Aug 13 00:19:28.729376 kubelet[2751]: E0813 00:19:28.729373 2751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.251:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-251&limit=500&resourceVersion=0\": dial tcp 172.31.18.251:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:19:28.733287 containerd[1941]: time="2025-08-13T00:19:28.733216639Z" level=info msg="CreateContainer within sandbox \"354fed5e6eca83a4168d60f496e8493d4e29ad4721dad1e2d832a4cfa214b02a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"87f05eb8047ff116df02dc4d7a702476844f9bf9f5ea7b310c8f7e1c4c444389\"" Aug 13 00:19:28.735815 containerd[1941]: time="2025-08-13T00:19:28.735420739Z" level=info msg="StartContainer for \"87f05eb8047ff116df02dc4d7a702476844f9bf9f5ea7b310c8f7e1c4c444389\"" Aug 13 00:19:28.744090 systemd[1]: Started cri-containerd-e76917d84e56aafa0dbfbca155a55d80e7af7fcc9c54e796cf4055766ebbd505.scope - libcontainer container e76917d84e56aafa0dbfbca155a55d80e7af7fcc9c54e796cf4055766ebbd505. 
Aug 13 00:19:28.757310 containerd[1941]: time="2025-08-13T00:19:28.757236487Z" level=info msg="CreateContainer within sandbox \"9baf0beff4028447d2109af2eb9e2ef182cd41084d38eee9f1553c16cb386c00\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8c068fb99f2dced9a2c61fe87dd7b2867c792b75802ed6b988ce6b646b128fc5\"" Aug 13 00:19:28.758718 containerd[1941]: time="2025-08-13T00:19:28.758639695Z" level=info msg="StartContainer for \"8c068fb99f2dced9a2c61fe87dd7b2867c792b75802ed6b988ce6b646b128fc5\"" Aug 13 00:19:28.830626 systemd[1]: Started cri-containerd-87f05eb8047ff116df02dc4d7a702476844f9bf9f5ea7b310c8f7e1c4c444389.scope - libcontainer container 87f05eb8047ff116df02dc4d7a702476844f9bf9f5ea7b310c8f7e1c4c444389. Aug 13 00:19:28.853392 containerd[1941]: time="2025-08-13T00:19:28.853223791Z" level=info msg="StartContainer for \"e76917d84e56aafa0dbfbca155a55d80e7af7fcc9c54e796cf4055766ebbd505\" returns successfully" Aug 13 00:19:28.857182 systemd[1]: Started cri-containerd-8c068fb99f2dced9a2c61fe87dd7b2867c792b75802ed6b988ce6b646b128fc5.scope - libcontainer container 8c068fb99f2dced9a2c61fe87dd7b2867c792b75802ed6b988ce6b646b128fc5. Aug 13 00:19:28.876799 kubelet[2751]: I0813 00:19:28.876614 2751 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-251" Aug 13 00:19:28.877874 kubelet[2751]: E0813 00:19:28.877659 2751 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.18.251:6443/api/v1/nodes\": dial tcp 172.31.18.251:6443: connect: connection refused" node="ip-172-31-18-251" Aug 13 00:19:28.947858 containerd[1941]: time="2025-08-13T00:19:28.947746520Z" level=info msg="StartContainer for \"87f05eb8047ff116df02dc4d7a702476844f9bf9f5ea7b310c8f7e1c4c444389\" returns successfully" Aug 13 00:19:28.990089 containerd[1941]: time="2025-08-13T00:19:28.988998860Z" level=info msg="StartContainer for \"8c068fb99f2dced9a2c61fe87dd7b2867c792b75802ed6b988ce6b646b128fc5\" returns successfully" Aug 13 00:19:30.482615 kubelet[2751]: I0813 00:19:30.481386 2751 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-251" Aug 13 00:19:32.535477 kubelet[2751]: E0813 00:19:32.535384 2751 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-251\" not found" node="ip-172-31-18-251" Aug 13 00:19:32.590283 kubelet[2751]: I0813 00:19:32.589895 2751 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-251" Aug 13 00:19:32.590283 kubelet[2751]: E0813 00:19:32.589955 2751 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-18-251\": node \"ip-172-31-18-251\" not found" Aug 13 00:19:33.226189 kubelet[2751]: I0813 00:19:33.226129 2751 apiserver.go:52] "Watching apiserver" Aug 13 00:19:33.254641 kubelet[2751]: I0813 00:19:33.254584 2751 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:19:34.570078 systemd[1]: Reloading requested from client PID 3035 ('systemctl') (unit session-7.scope)... Aug 13 00:19:34.570115 systemd[1]: Reloading... Aug 13 00:19:34.788823 zram_generator::config[3075]: No configuration found. Aug 13 00:19:35.019856 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:19:35.225620 systemd[1]: Reloading finished in 654 ms. 
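Everything up to this point is the normal chicken-and-egg of a self-hosted control plane: the kubelet's reflectors, event posts, lease updates, and node registration all fail with connection refused until the static kube-apiserver pod it just launched answers on 6443, and the lease controller's retry interval doubles each time (200ms, 400ms, 800ms, 1.6s). A minimal sketch of that wait-with-doubling-backoff shape follows; the address and delays are illustrative assumptions, not the kubelet's actual tuning:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer dials until the endpoint accepts a TCP connection,
// doubling the delay after each failure up to a cap -- the same shape as
// the "will retry ... interval=..." messages above. It loops until it
// succeeds, so against an unreachable address it never returns.
func waitForAPIServer(addr string, initial, maxDelay time.Duration) {
	backoff := initial
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver is up at", addr)
			return
		}
		fmt.Printf("dial %s failed (%v); retrying in %s\n", addr, err, backoff)
		time.Sleep(backoff)
		if backoff *= 2; backoff > maxDelay {
			backoff = maxDelay
		}
	}
}

func main() {
	waitForAPIServer("172.31.18.251:6443", 200*time.Millisecond, 10*time.Second)
}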
Aug 13 00:19:35.299269 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:19:35.323532 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:19:35.324039 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:35.324139 systemd[1]: kubelet.service: Consumed 2.349s CPU time, 129.8M memory peak, 0B memory swap peak. Aug 13 00:19:35.331437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:19:35.665509 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:35.682358 (kubelet)[3135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:19:35.769185 kubelet[3135]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:19:35.769185 kubelet[3135]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:19:35.769185 kubelet[3135]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:19:35.769701 kubelet[3135]: I0813 00:19:35.769332 3135 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:19:35.789517 kubelet[3135]: I0813 00:19:35.789022 3135 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:19:35.789517 kubelet[3135]: I0813 00:19:35.789072 3135 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:19:35.790800 kubelet[3135]: I0813 00:19:35.789740 3135 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:19:35.793088 kubelet[3135]: I0813 00:19:35.793023 3135 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 00:19:35.797232 kubelet[3135]: I0813 00:19:35.797163 3135 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:19:35.807857 kubelet[3135]: E0813 00:19:35.807311 3135 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:19:35.807857 kubelet[3135]: I0813 00:19:35.807366 3135 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:19:35.813061 kubelet[3135]: I0813 00:19:35.813006 3135 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:19:35.813284 kubelet[3135]: I0813 00:19:35.813224 3135 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:19:35.813556 kubelet[3135]: I0813 00:19:35.813498 3135 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:19:35.813871 kubelet[3135]: I0813 00:19:35.813545 3135 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-251","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:19:35.814061 kubelet[3135]: I0813 00:19:35.813883 3135 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:19:35.814061 kubelet[3135]: I0813 00:19:35.813905 3135 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:19:35.814061 kubelet[3135]: I0813 00:19:35.813969 3135 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:19:35.815134 kubelet[3135]: I0813 00:19:35.814145 3135 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:19:35.815134 kubelet[3135]: I0813 00:19:35.814172 3135 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:19:35.815134 kubelet[3135]: I0813 00:19:35.814207 3135 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:19:35.815134 kubelet[3135]: I0813 00:19:35.814227 3135 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:19:35.825134 kubelet[3135]: I0813 00:19:35.824915 3135 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:19:35.827660 kubelet[3135]: I0813 00:19:35.827356 3135 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:19:35.834721 kubelet[3135]: I0813 00:19:35.834076 3135 server.go:1274] "Started kubelet" Aug 13 00:19:35.840549 kubelet[3135]: I0813 00:19:35.840309 3135 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:19:35.851808 kubelet[3135]: I0813 
00:19:35.849964 3135 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:19:35.854535 kubelet[3135]: I0813 00:19:35.854458 3135 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:19:35.855905 kubelet[3135]: I0813 00:19:35.855878 3135 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:19:35.858141 kubelet[3135]: I0813 00:19:35.856829 3135 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:19:35.864957 kubelet[3135]: I0813 00:19:35.864921 3135 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:19:35.865891 kubelet[3135]: E0813 00:19:35.865460 3135 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-18-251\" not found" Aug 13 00:19:35.888946 kubelet[3135]: I0813 00:19:35.888305 3135 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:19:35.893720 kubelet[3135]: I0813 00:19:35.893680 3135 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:19:35.894121 kubelet[3135]: I0813 00:19:35.894101 3135 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:19:35.898077 kubelet[3135]: I0813 00:19:35.898039 3135 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:19:35.899243 kubelet[3135]: I0813 00:19:35.898373 3135 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:19:35.922656 kubelet[3135]: I0813 00:19:35.920960 3135 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:19:35.928333 kubelet[3135]: I0813 00:19:35.928068 3135 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 00:19:35.928333 kubelet[3135]: I0813 00:19:35.928146 3135 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:19:35.928333 kubelet[3135]: I0813 00:19:35.928209 3135 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:19:35.928583 kubelet[3135]: E0813 00:19:35.928557 3135 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:19:35.932327 kubelet[3135]: I0813 00:19:35.931844 3135 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:19:35.949648 kubelet[3135]: E0813 00:19:35.947833 3135 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:19:36.031152 kubelet[3135]: E0813 00:19:36.029702 3135 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 00:19:36.068070 kubelet[3135]: I0813 00:19:36.068019 3135 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:19:36.068070 kubelet[3135]: I0813 00:19:36.068057 3135 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:19:36.068070 kubelet[3135]: I0813 00:19:36.068095 3135 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:19:36.068402 kubelet[3135]: I0813 00:19:36.068349 3135 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:19:36.068480 kubelet[3135]: I0813 00:19:36.068397 3135 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:19:36.068480 kubelet[3135]: I0813 00:19:36.068435 3135 policy_none.go:49] "None policy: Start" Aug 13 00:19:36.072787 kubelet[3135]: I0813 00:19:36.072654 3135 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:19:36.072787 kubelet[3135]: I0813 00:19:36.072752 3135 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:19:36.075419 kubelet[3135]: I0813 00:19:36.073870 3135 state_mem.go:75] "Updated machine memory state" Aug 13 00:19:36.096801 kubelet[3135]: I0813 00:19:36.096592 3135 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:19:36.097632 kubelet[3135]: I0813 00:19:36.097349 3135 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:19:36.098091 kubelet[3135]: I0813 00:19:36.097371 3135 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:19:36.099215 kubelet[3135]: I0813 00:19:36.099170 3135 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:19:36.232386 kubelet[3135]: I0813 00:19:36.228025 3135 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-18-251" Aug 13 00:19:36.265548 kubelet[3135]: I0813 00:19:36.265491 3135 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-18-251" Aug 13 00:19:36.265696 kubelet[3135]: I0813 00:19:36.265623 3135 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-18-251" Aug 13 00:19:36.298644 kubelet[3135]: I0813 00:19:36.298574 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a5cc28d884bb847b50b8fb4d758425c0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-251\" (UID: \"a5cc28d884bb847b50b8fb4d758425c0\") " pod="kube-system/kube-controller-manager-ip-172-31-18-251" Aug 13 00:19:36.300804 kubelet[3135]: I0813 00:19:36.298709 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/03058d5e93e06300c19a152925600033-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-251\" (UID: \"03058d5e93e06300c19a152925600033\") " pod="kube-system/kube-apiserver-ip-172-31-18-251" Aug 13 00:19:36.300804 kubelet[3135]: I0813 00:19:36.299061 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/03058d5e93e06300c19a152925600033-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-251\" (UID: 
\"03058d5e93e06300c19a152925600033\") " pod="kube-system/kube-apiserver-ip-172-31-18-251" Aug 13 00:19:36.300804 kubelet[3135]: I0813 00:19:36.299237 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5cc28d884bb847b50b8fb4d758425c0-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-251\" (UID: \"a5cc28d884bb847b50b8fb4d758425c0\") " pod="kube-system/kube-controller-manager-ip-172-31-18-251" Aug 13 00:19:36.300804 kubelet[3135]: I0813 00:19:36.299416 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a5cc28d884bb847b50b8fb4d758425c0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-251\" (UID: \"a5cc28d884bb847b50b8fb4d758425c0\") " pod="kube-system/kube-controller-manager-ip-172-31-18-251" Aug 13 00:19:36.300804 kubelet[3135]: I0813 00:19:36.299604 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/03058d5e93e06300c19a152925600033-ca-certs\") pod \"kube-apiserver-ip-172-31-18-251\" (UID: \"03058d5e93e06300c19a152925600033\") " pod="kube-system/kube-apiserver-ip-172-31-18-251" Aug 13 00:19:36.301166 kubelet[3135]: I0813 00:19:36.299663 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5cc28d884bb847b50b8fb4d758425c0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-251\" (UID: \"a5cc28d884bb847b50b8fb4d758425c0\") " pod="kube-system/kube-controller-manager-ip-172-31-18-251" Aug 13 00:19:36.301166 kubelet[3135]: I0813 00:19:36.299703 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5cc28d884bb847b50b8fb4d758425c0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-251\" (UID: \"a5cc28d884bb847b50b8fb4d758425c0\") " pod="kube-system/kube-controller-manager-ip-172-31-18-251" Aug 13 00:19:36.301166 kubelet[3135]: I0813 00:19:36.299742 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f4fc546174c6751c1bc7525474a4ed0-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-251\" (UID: \"2f4fc546174c6751c1bc7525474a4ed0\") " pod="kube-system/kube-scheduler-ip-172-31-18-251" Aug 13 00:19:36.815280 kubelet[3135]: I0813 00:19:36.815177 3135 apiserver.go:52] "Watching apiserver" Aug 13 00:19:36.894173 kubelet[3135]: I0813 00:19:36.894070 3135 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:19:37.008116 kubelet[3135]: E0813 00:19:37.008055 3135 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-18-251\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-251" Aug 13 00:19:37.143092 kubelet[3135]: I0813 00:19:37.142569 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-251" podStartSLOduration=1.142547508 podStartE2EDuration="1.142547508s" podCreationTimestamp="2025-08-13 00:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:19:37.110274792 +0000 UTC m=+1.420622696" 
watchObservedRunningTime="2025-08-13 00:19:37.142547508 +0000 UTC m=+1.452895376" Aug 13 00:19:37.185409 kubelet[3135]: I0813 00:19:37.185108 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-251" podStartSLOduration=1.185086465 podStartE2EDuration="1.185086465s" podCreationTimestamp="2025-08-13 00:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:19:37.143990748 +0000 UTC m=+1.454338640" watchObservedRunningTime="2025-08-13 00:19:37.185086465 +0000 UTC m=+1.495434357" Aug 13 00:19:37.242800 kubelet[3135]: I0813 00:19:37.241550 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-251" podStartSLOduration=1.241530421 podStartE2EDuration="1.241530421s" podCreationTimestamp="2025-08-13 00:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:19:37.187248637 +0000 UTC m=+1.497596541" watchObservedRunningTime="2025-08-13 00:19:37.241530421 +0000 UTC m=+1.551878301" Aug 13 00:19:40.682811 kubelet[3135]: I0813 00:19:40.682443 3135 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:19:40.683476 containerd[1941]: time="2025-08-13T00:19:40.683083518Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 00:19:40.686517 kubelet[3135]: I0813 00:19:40.683554 3135 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:19:41.530636 systemd[1]: Created slice kubepods-besteffort-pod35beb6b3_9632_4f1c_85a8_73b1a4be6902.slice - libcontainer container kubepods-besteffort-pod35beb6b3_9632_4f1c_85a8_73b1a4be6902.slice. 
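The m=+1.45... suffixes on those timestamps are a Go runtime detail, not part of the wall-clock value: a time.Time carries a monotonic clock reading alongside the wall clock, and Time.String() appends it as m=±seconds from an arbitrary epoch (here effectively kubelet process start, which is why the values track kubelet uptime). A quick demonstration:

package main

import (
	"fmt"
	"time"
)

func main() {
	start := time.Now()
	time.Sleep(150 * time.Millisecond)
	now := time.Now()

	// Printing a time.Time that still carries its monotonic reading appends
	// "m=+<seconds>" -- the same suffix seen on the kubelet timestamps.
	fmt.Println(now) // e.g. "... m=+0.150xxxxxx"

	// Subtraction uses the monotonic readings when both times carry one,
	// so it is immune to wall-clock jumps.
	fmt.Println(now.Sub(start)) // ~150ms

	// Round(0) strips the monotonic reading; the m= suffix disappears.
	fmt.Println(now.Round(0))
}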
Aug 13 00:19:41.536990 kubelet[3135]: I0813 00:19:41.536691 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35beb6b3-9632-4f1c-85a8-73b1a4be6902-xtables-lock\") pod \"kube-proxy-kt9tg\" (UID: \"35beb6b3-9632-4f1c-85a8-73b1a4be6902\") " pod="kube-system/kube-proxy-kt9tg" Aug 13 00:19:41.536990 kubelet[3135]: I0813 00:19:41.536752 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjmp9\" (UniqueName: \"kubernetes.io/projected/35beb6b3-9632-4f1c-85a8-73b1a4be6902-kube-api-access-kjmp9\") pod \"kube-proxy-kt9tg\" (UID: \"35beb6b3-9632-4f1c-85a8-73b1a4be6902\") " pod="kube-system/kube-proxy-kt9tg" Aug 13 00:19:41.536990 kubelet[3135]: I0813 00:19:41.536830 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/35beb6b3-9632-4f1c-85a8-73b1a4be6902-kube-proxy\") pod \"kube-proxy-kt9tg\" (UID: \"35beb6b3-9632-4f1c-85a8-73b1a4be6902\") " pod="kube-system/kube-proxy-kt9tg" Aug 13 00:19:41.536990 kubelet[3135]: I0813 00:19:41.536869 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35beb6b3-9632-4f1c-85a8-73b1a4be6902-lib-modules\") pod \"kube-proxy-kt9tg\" (UID: \"35beb6b3-9632-4f1c-85a8-73b1a4be6902\") " pod="kube-system/kube-proxy-kt9tg" Aug 13 00:19:41.761678 update_engine[1915]: I20250813 00:19:41.761587 1915 update_attempter.cc:509] Updating boot flags... Aug 13 00:19:41.851660 containerd[1941]: time="2025-08-13T00:19:41.851159828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kt9tg,Uid:35beb6b3-9632-4f1c-85a8-73b1a4be6902,Namespace:kube-system,Attempt:0,}" Aug 13 00:19:41.956576 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3194) Aug 13 00:19:41.991847 systemd[1]: Created slice kubepods-besteffort-pod7b5fb2ea_e517_41ef_ba7c_98f08d65dd7b.slice - libcontainer container kubepods-besteffort-pod7b5fb2ea_e517_41ef_ba7c_98f08d65dd7b.slice. Aug 13 00:19:42.003811 containerd[1941]: time="2025-08-13T00:19:41.998434208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:42.003811 containerd[1941]: time="2025-08-13T00:19:41.998531720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:42.003811 containerd[1941]: time="2025-08-13T00:19:41.998568824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:42.003811 containerd[1941]: time="2025-08-13T00:19:41.998728568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:42.044221 kubelet[3135]: I0813 00:19:42.044171 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7b5fb2ea-e517-41ef-ba7c-98f08d65dd7b-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-v5vss\" (UID: \"7b5fb2ea-e517-41ef-ba7c-98f08d65dd7b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-v5vss" Aug 13 00:19:42.046045 kubelet[3135]: I0813 00:19:42.045320 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmm6k\" (UniqueName: \"kubernetes.io/projected/7b5fb2ea-e517-41ef-ba7c-98f08d65dd7b-kube-api-access-tmm6k\") pod \"tigera-operator-5bf8dfcb4-v5vss\" (UID: \"7b5fb2ea-e517-41ef-ba7c-98f08d65dd7b\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-v5vss" Aug 13 00:19:42.059328 systemd[1]: Started cri-containerd-582972216cbe708af1e6a2f3721b860cfddf178c60b96ff0d7803eb13f20f259.scope - libcontainer container 582972216cbe708af1e6a2f3721b860cfddf178c60b96ff0d7803eb13f20f259. Aug 13 00:19:42.160656 containerd[1941]: time="2025-08-13T00:19:42.157463681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kt9tg,Uid:35beb6b3-9632-4f1c-85a8-73b1a4be6902,Namespace:kube-system,Attempt:0,} returns sandbox id \"582972216cbe708af1e6a2f3721b860cfddf178c60b96ff0d7803eb13f20f259\"" Aug 13 00:19:42.195105 containerd[1941]: time="2025-08-13T00:19:42.195053033Z" level=info msg="CreateContainer within sandbox \"582972216cbe708af1e6a2f3721b860cfddf178c60b96ff0d7803eb13f20f259\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:19:42.251618 containerd[1941]: time="2025-08-13T00:19:42.251557650Z" level=info msg="CreateContainer within sandbox \"582972216cbe708af1e6a2f3721b860cfddf178c60b96ff0d7803eb13f20f259\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c79fa73eb75ac4d1aaa72a97543a17d73b4b03876bcc7807a6e66abc4c0445af\"" Aug 13 00:19:42.257740 containerd[1941]: time="2025-08-13T00:19:42.255936942Z" level=info msg="StartContainer for \"c79fa73eb75ac4d1aaa72a97543a17d73b4b03876bcc7807a6e66abc4c0445af\"" Aug 13 00:19:42.303389 containerd[1941]: time="2025-08-13T00:19:42.303199002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-v5vss,Uid:7b5fb2ea-e517-41ef-ba7c-98f08d65dd7b,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:19:42.400933 systemd[1]: Started cri-containerd-c79fa73eb75ac4d1aaa72a97543a17d73b4b03876bcc7807a6e66abc4c0445af.scope - libcontainer container c79fa73eb75ac4d1aaa72a97543a17d73b4b03876bcc7807a6e66abc4c0445af. Aug 13 00:19:42.474911 containerd[1941]: time="2025-08-13T00:19:42.473038003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:42.474911 containerd[1941]: time="2025-08-13T00:19:42.473158651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:42.474911 containerd[1941]: time="2025-08-13T00:19:42.473202223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:42.474911 containerd[1941]: time="2025-08-13T00:19:42.473386615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:42.504858 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (3193) Aug 13 00:19:42.523844 systemd[1]: Started cri-containerd-b1ea712b4be93aa5da8e157f11c4675e52601605bd46aaf7b9e92e2c76aa52be.scope - libcontainer container b1ea712b4be93aa5da8e157f11c4675e52601605bd46aaf7b9e92e2c76aa52be. Aug 13 00:19:42.586709 containerd[1941]: time="2025-08-13T00:19:42.586300087Z" level=info msg="StartContainer for \"c79fa73eb75ac4d1aaa72a97543a17d73b4b03876bcc7807a6e66abc4c0445af\" returns successfully" Aug 13 00:19:42.700041 containerd[1941]: time="2025-08-13T00:19:42.699794828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-v5vss,Uid:7b5fb2ea-e517-41ef-ba7c-98f08d65dd7b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b1ea712b4be93aa5da8e157f11c4675e52601605bd46aaf7b9e92e2c76aa52be\"" Aug 13 00:19:42.705328 containerd[1941]: time="2025-08-13T00:19:42.704790152Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:19:43.936043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1585426592.mount: Deactivated successfully. Aug 13 00:19:44.337365 kubelet[3135]: I0813 00:19:44.337171 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kt9tg" podStartSLOduration=3.337147844 podStartE2EDuration="3.337147844s" podCreationTimestamp="2025-08-13 00:19:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:19:43.061086066 +0000 UTC m=+7.371433958" watchObservedRunningTime="2025-08-13 00:19:44.337147844 +0000 UTC m=+8.647495712" Aug 13 00:19:47.672658 containerd[1941]: time="2025-08-13T00:19:47.672371197Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:47.674057 containerd[1941]: time="2025-08-13T00:19:47.673954009Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 13 00:19:47.674740 containerd[1941]: time="2025-08-13T00:19:47.674655037Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:47.680821 containerd[1941]: time="2025-08-13T00:19:47.680002813Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:47.686713 containerd[1941]: time="2025-08-13T00:19:47.686623525Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 4.981765993s" Aug 13 00:19:47.686713 containerd[1941]: time="2025-08-13T00:19:47.686713273Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 13 00:19:47.693438 containerd[1941]: time="2025-08-13T00:19:47.693110209Z" level=info msg="CreateContainer within sandbox 
\"b1ea712b4be93aa5da8e157f11c4675e52601605bd46aaf7b9e92e2c76aa52be\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:19:47.714627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount928659984.mount: Deactivated successfully. Aug 13 00:19:47.716812 containerd[1941]: time="2025-08-13T00:19:47.715713565Z" level=info msg="CreateContainer within sandbox \"b1ea712b4be93aa5da8e157f11c4675e52601605bd46aaf7b9e92e2c76aa52be\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"023f593a9ebc35a6b142dceca38c6618ad6c6b1043b095cb6a364c448f273f86\"" Aug 13 00:19:47.717871 containerd[1941]: time="2025-08-13T00:19:47.717184501Z" level=info msg="StartContainer for \"023f593a9ebc35a6b142dceca38c6618ad6c6b1043b095cb6a364c448f273f86\"" Aug 13 00:19:47.775166 systemd[1]: Started cri-containerd-023f593a9ebc35a6b142dceca38c6618ad6c6b1043b095cb6a364c448f273f86.scope - libcontainer container 023f593a9ebc35a6b142dceca38c6618ad6c6b1043b095cb6a364c448f273f86. Aug 13 00:19:47.819920 containerd[1941]: time="2025-08-13T00:19:47.819847573Z" level=info msg="StartContainer for \"023f593a9ebc35a6b142dceca38c6618ad6c6b1043b095cb6a364c448f273f86\" returns successfully" Aug 13 00:19:48.064377 kubelet[3135]: I0813 00:19:48.063957 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-v5vss" podStartSLOduration=2.077740842 podStartE2EDuration="7.063819611s" podCreationTimestamp="2025-08-13 00:19:41 +0000 UTC" firstStartedPulling="2025-08-13 00:19:42.702173684 +0000 UTC m=+7.012521552" lastFinishedPulling="2025-08-13 00:19:47.688252453 +0000 UTC m=+11.998600321" observedRunningTime="2025-08-13 00:19:48.063213023 +0000 UTC m=+12.373560915" watchObservedRunningTime="2025-08-13 00:19:48.063819611 +0000 UTC m=+12.374167563" Aug 13 00:19:54.442846 sudo[2255]: pam_unix(sudo:session): session closed for user root Aug 13 00:19:54.467336 sshd[2252]: pam_unix(sshd:session): session closed for user core Aug 13 00:19:54.477348 systemd[1]: sshd@6-172.31.18.251:22-139.178.89.65:47982.service: Deactivated successfully. Aug 13 00:19:54.481928 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:19:54.482267 systemd[1]: session-7.scope: Consumed 10.134s CPU time, 154.8M memory peak, 0B memory swap peak. Aug 13 00:19:54.489148 systemd-logind[1913]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:19:54.493918 systemd-logind[1913]: Removed session 7. Aug 13 00:20:10.563641 systemd[1]: Created slice kubepods-besteffort-poda7f3110c_9acb_4cd3_8eea_2f06cded2e8c.slice - libcontainer container kubepods-besteffort-poda7f3110c_9acb_4cd3_8eea_2f06cded2e8c.slice. 
Aug 13 00:20:10.641617 kubelet[3135]: I0813 00:20:10.641399 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7f3110c-9acb-4cd3-8eea-2f06cded2e8c-tigera-ca-bundle\") pod \"calico-typha-5cc7cb64f8-xn556\" (UID: \"a7f3110c-9acb-4cd3-8eea-2f06cded2e8c\") " pod="calico-system/calico-typha-5cc7cb64f8-xn556" Aug 13 00:20:10.641617 kubelet[3135]: I0813 00:20:10.641483 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a7f3110c-9acb-4cd3-8eea-2f06cded2e8c-typha-certs\") pod \"calico-typha-5cc7cb64f8-xn556\" (UID: \"a7f3110c-9acb-4cd3-8eea-2f06cded2e8c\") " pod="calico-system/calico-typha-5cc7cb64f8-xn556" Aug 13 00:20:10.641617 kubelet[3135]: I0813 00:20:10.641524 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62bq5\" (UniqueName: \"kubernetes.io/projected/a7f3110c-9acb-4cd3-8eea-2f06cded2e8c-kube-api-access-62bq5\") pod \"calico-typha-5cc7cb64f8-xn556\" (UID: \"a7f3110c-9acb-4cd3-8eea-2f06cded2e8c\") " pod="calico-system/calico-typha-5cc7cb64f8-xn556" Aug 13 00:20:10.894018 containerd[1941]: time="2025-08-13T00:20:10.893947872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cc7cb64f8-xn556,Uid:a7f3110c-9acb-4cd3-8eea-2f06cded2e8c,Namespace:calico-system,Attempt:0,}" Aug 13 00:20:10.906681 systemd[1]: Created slice kubepods-besteffort-pod412e4a56_0974_4b5c_815d_fa779bebccd2.slice - libcontainer container kubepods-besteffort-pod412e4a56_0974_4b5c_815d_fa779bebccd2.slice. Aug 13 00:20:10.945277 kubelet[3135]: I0813 00:20:10.944974 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/412e4a56-0974-4b5c-815d-fa779bebccd2-cni-log-dir\") pod \"calico-node-qkxkv\" (UID: \"412e4a56-0974-4b5c-815d-fa779bebccd2\") " pod="calico-system/calico-node-qkxkv" Aug 13 00:20:10.945277 kubelet[3135]: I0813 00:20:10.945053 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/412e4a56-0974-4b5c-815d-fa779bebccd2-cni-bin-dir\") pod \"calico-node-qkxkv\" (UID: \"412e4a56-0974-4b5c-815d-fa779bebccd2\") " pod="calico-system/calico-node-qkxkv" Aug 13 00:20:10.945277 kubelet[3135]: I0813 00:20:10.945097 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/412e4a56-0974-4b5c-815d-fa779bebccd2-tigera-ca-bundle\") pod \"calico-node-qkxkv\" (UID: \"412e4a56-0974-4b5c-815d-fa779bebccd2\") " pod="calico-system/calico-node-qkxkv" Aug 13 00:20:10.945277 kubelet[3135]: I0813 00:20:10.945136 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/412e4a56-0974-4b5c-815d-fa779bebccd2-var-run-calico\") pod \"calico-node-qkxkv\" (UID: \"412e4a56-0974-4b5c-815d-fa779bebccd2\") " pod="calico-system/calico-node-qkxkv" Aug 13 00:20:10.945277 kubelet[3135]: I0813 00:20:10.945171 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/412e4a56-0974-4b5c-815d-fa779bebccd2-policysync\") pod \"calico-node-qkxkv\" (UID: 
\"412e4a56-0974-4b5c-815d-fa779bebccd2\") " pod="calico-system/calico-node-qkxkv" Aug 13 00:20:10.948069 kubelet[3135]: I0813 00:20:10.945208 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/412e4a56-0974-4b5c-815d-fa779bebccd2-cni-net-dir\") pod \"calico-node-qkxkv\" (UID: \"412e4a56-0974-4b5c-815d-fa779bebccd2\") " pod="calico-system/calico-node-qkxkv" Aug 13 00:20:10.948069 kubelet[3135]: I0813 00:20:10.945243 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/412e4a56-0974-4b5c-815d-fa779bebccd2-flexvol-driver-host\") pod \"calico-node-qkxkv\" (UID: \"412e4a56-0974-4b5c-815d-fa779bebccd2\") " pod="calico-system/calico-node-qkxkv" Aug 13 00:20:10.948069 kubelet[3135]: I0813 00:20:10.945283 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/412e4a56-0974-4b5c-815d-fa779bebccd2-var-lib-calico\") pod \"calico-node-qkxkv\" (UID: \"412e4a56-0974-4b5c-815d-fa779bebccd2\") " pod="calico-system/calico-node-qkxkv" Aug 13 00:20:10.948069 kubelet[3135]: I0813 00:20:10.945321 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/412e4a56-0974-4b5c-815d-fa779bebccd2-xtables-lock\") pod \"calico-node-qkxkv\" (UID: \"412e4a56-0974-4b5c-815d-fa779bebccd2\") " pod="calico-system/calico-node-qkxkv" Aug 13 00:20:10.948069 kubelet[3135]: I0813 00:20:10.945356 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vngh\" (UniqueName: \"kubernetes.io/projected/412e4a56-0974-4b5c-815d-fa779bebccd2-kube-api-access-9vngh\") pod \"calico-node-qkxkv\" (UID: \"412e4a56-0974-4b5c-815d-fa779bebccd2\") " pod="calico-system/calico-node-qkxkv" Aug 13 00:20:10.950121 kubelet[3135]: I0813 00:20:10.945390 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/412e4a56-0974-4b5c-815d-fa779bebccd2-node-certs\") pod \"calico-node-qkxkv\" (UID: \"412e4a56-0974-4b5c-815d-fa779bebccd2\") " pod="calico-system/calico-node-qkxkv" Aug 13 00:20:10.950121 kubelet[3135]: I0813 00:20:10.945430 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/412e4a56-0974-4b5c-815d-fa779bebccd2-lib-modules\") pod \"calico-node-qkxkv\" (UID: \"412e4a56-0974-4b5c-815d-fa779bebccd2\") " pod="calico-system/calico-node-qkxkv" Aug 13 00:20:11.006907 containerd[1941]: time="2025-08-13T00:20:11.002798349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:11.006907 containerd[1941]: time="2025-08-13T00:20:11.002894625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:11.006907 containerd[1941]: time="2025-08-13T00:20:11.002921313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:11.006907 containerd[1941]: time="2025-08-13T00:20:11.003064581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:11.048553 kubelet[3135]: E0813 00:20:11.048501 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.049110 kubelet[3135]: W0813 00:20:11.049054 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.049356 kubelet[3135]: E0813 00:20:11.049287 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.054220 kubelet[3135]: E0813 00:20:11.052249 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.054220 kubelet[3135]: W0813 00:20:11.052726 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.055291 kubelet[3135]: E0813 00:20:11.055184 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.058147 kubelet[3135]: E0813 00:20:11.056068 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.058147 kubelet[3135]: W0813 00:20:11.056173 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.058147 kubelet[3135]: E0813 00:20:11.056660 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.058448 kubelet[3135]: E0813 00:20:11.058147 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.058448 kubelet[3135]: W0813 00:20:11.058198 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.058551 kubelet[3135]: E0813 00:20:11.058457 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:11.062734 kubelet[3135]: E0813 00:20:11.062240 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.062734 kubelet[3135]: W0813 00:20:11.062285 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.062734 kubelet[3135]: E0813 00:20:11.062682 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.065599 kubelet[3135]: E0813 00:20:11.064525 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.066745 kubelet[3135]: W0813 00:20:11.066646 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.069091 kubelet[3135]: E0813 00:20:11.067824 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.069091 kubelet[3135]: E0813 00:20:11.068671 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.069091 kubelet[3135]: W0813 00:20:11.068697 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.070088 kubelet[3135]: E0813 00:20:11.070046 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.072438 kubelet[3135]: E0813 00:20:11.071538 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.072438 kubelet[3135]: W0813 00:20:11.072216 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.073513 kubelet[3135]: E0813 00:20:11.073387 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.075926 kubelet[3135]: E0813 00:20:11.074269 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.075926 kubelet[3135]: W0813 00:20:11.074302 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.075926 kubelet[3135]: E0813 00:20:11.074344 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:11.078978 kubelet[3135]: E0813 00:20:11.078935 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.079877 kubelet[3135]: W0813 00:20:11.079151 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.081993 kubelet[3135]: E0813 00:20:11.081951 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.083889 kubelet[3135]: W0813 00:20:11.083838 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.085078 systemd[1]: Started cri-containerd-a0c4c086f05d75ce3ee03841f8826434518d59e3dab097d1da99017046c5a93f.scope - libcontainer container a0c4c086f05d75ce3ee03841f8826434518d59e3dab097d1da99017046c5a93f. Aug 13 00:20:11.090809 kubelet[3135]: E0813 00:20:11.089952 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.090809 kubelet[3135]: W0813 00:20:11.089988 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.090809 kubelet[3135]: E0813 00:20:11.090022 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.092807 kubelet[3135]: E0813 00:20:11.092018 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.093048 kubelet[3135]: E0813 00:20:11.093014 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.098083 kubelet[3135]: E0813 00:20:11.098028 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.099353 kubelet[3135]: W0813 00:20:11.098850 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.099353 kubelet[3135]: E0813 00:20:11.098969 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:11.101215 kubelet[3135]: E0813 00:20:11.101127 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.101215 kubelet[3135]: W0813 00:20:11.101203 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.101431 kubelet[3135]: E0813 00:20:11.101330 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.116970 kubelet[3135]: E0813 00:20:11.115906 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.116970 kubelet[3135]: W0813 00:20:11.115939 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.116970 kubelet[3135]: E0813 00:20:11.115970 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.218438 containerd[1941]: time="2025-08-13T00:20:11.218272918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qkxkv,Uid:412e4a56-0974-4b5c-815d-fa779bebccd2,Namespace:calico-system,Attempt:0,}" Aug 13 00:20:11.295467 containerd[1941]: time="2025-08-13T00:20:11.293832790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:11.295467 containerd[1941]: time="2025-08-13T00:20:11.293950414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:11.295467 containerd[1941]: time="2025-08-13T00:20:11.293981086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:11.295911 containerd[1941]: time="2025-08-13T00:20:11.294430186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:11.344104 systemd[1]: Started cri-containerd-a697d3245b6b15fcaf793b6d93b79010fb1011dbeeb4a1c1c086322f345e637f.scope - libcontainer container a697d3245b6b15fcaf793b6d93b79010fb1011dbeeb4a1c1c086322f345e637f. 
Aug 13 00:20:11.405541 containerd[1941]: time="2025-08-13T00:20:11.404390459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cc7cb64f8-xn556,Uid:a7f3110c-9acb-4cd3-8eea-2f06cded2e8c,Namespace:calico-system,Attempt:0,} returns sandbox id \"a0c4c086f05d75ce3ee03841f8826434518d59e3dab097d1da99017046c5a93f\"" Aug 13 00:20:11.409305 containerd[1941]: time="2025-08-13T00:20:11.409233971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:20:11.547047 containerd[1941]: time="2025-08-13T00:20:11.546755291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qkxkv,Uid:412e4a56-0974-4b5c-815d-fa779bebccd2,Namespace:calico-system,Attempt:0,} returns sandbox id \"a697d3245b6b15fcaf793b6d93b79010fb1011dbeeb4a1c1c086322f345e637f\"" Aug 13 00:20:11.612851 kubelet[3135]: E0813 00:20:11.610861 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzmtn" podUID="e5f0f3a3-68e2-4f84-92cb-c460ed58604c" Aug 13 00:20:11.636065 kubelet[3135]: E0813 00:20:11.635837 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.636065 kubelet[3135]: W0813 00:20:11.635871 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.636065 kubelet[3135]: E0813 00:20:11.635902 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.636909 kubelet[3135]: E0813 00:20:11.636586 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.636909 kubelet[3135]: W0813 00:20:11.636616 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.636909 kubelet[3135]: E0813 00:20:11.636645 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.637629 kubelet[3135]: E0813 00:20:11.637233 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.637629 kubelet[3135]: W0813 00:20:11.637260 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.637629 kubelet[3135]: E0813 00:20:11.637285 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:11.639824 kubelet[3135]: E0813 00:20:11.637995 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.639824 kubelet[3135]: W0813 00:20:11.638023 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.639824 kubelet[3135]: E0813 00:20:11.638051 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.640640 kubelet[3135]: E0813 00:20:11.640408 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.640640 kubelet[3135]: W0813 00:20:11.640441 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.640640 kubelet[3135]: E0813 00:20:11.640475 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.642822 kubelet[3135]: E0813 00:20:11.642447 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.642822 kubelet[3135]: W0813 00:20:11.642489 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.642822 kubelet[3135]: E0813 00:20:11.642522 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.644176 kubelet[3135]: E0813 00:20:11.643918 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.644176 kubelet[3135]: W0813 00:20:11.643953 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.644176 kubelet[3135]: E0813 00:20:11.643986 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.644978 kubelet[3135]: E0813 00:20:11.644412 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.644978 kubelet[3135]: W0813 00:20:11.644431 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.644978 kubelet[3135]: E0813 00:20:11.644455 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:11.647189 kubelet[3135]: E0813 00:20:11.647147 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.647594 kubelet[3135]: W0813 00:20:11.647370 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.647594 kubelet[3135]: E0813 00:20:11.647411 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.648251 kubelet[3135]: E0813 00:20:11.648032 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.648251 kubelet[3135]: W0813 00:20:11.648060 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.648251 kubelet[3135]: E0813 00:20:11.648086 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.648699 kubelet[3135]: E0813 00:20:11.648673 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.649029 kubelet[3135]: W0813 00:20:11.648821 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.649029 kubelet[3135]: E0813 00:20:11.648856 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.649439 kubelet[3135]: E0813 00:20:11.649412 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.649551 kubelet[3135]: W0813 00:20:11.649527 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.649855 kubelet[3135]: E0813 00:20:11.649645 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.650183 kubelet[3135]: E0813 00:20:11.650157 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.650453 kubelet[3135]: W0813 00:20:11.650268 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.650453 kubelet[3135]: E0813 00:20:11.650301 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:11.650866 kubelet[3135]: E0813 00:20:11.650843 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.651137 kubelet[3135]: W0813 00:20:11.650963 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.651137 kubelet[3135]: E0813 00:20:11.650994 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.651668 kubelet[3135]: E0813 00:20:11.651459 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.651668 kubelet[3135]: W0813 00:20:11.651484 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.651668 kubelet[3135]: E0813 00:20:11.651511 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.652442 kubelet[3135]: E0813 00:20:11.652162 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.652442 kubelet[3135]: W0813 00:20:11.652187 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.652442 kubelet[3135]: E0813 00:20:11.652211 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.652944 kubelet[3135]: E0813 00:20:11.652917 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.654895 kubelet[3135]: W0813 00:20:11.654845 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.655325 kubelet[3135]: E0813 00:20:11.655109 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.655707 kubelet[3135]: E0813 00:20:11.655678 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.656126 kubelet[3135]: W0813 00:20:11.655908 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.656126 kubelet[3135]: E0813 00:20:11.655949 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:11.656861 kubelet[3135]: E0813 00:20:11.656535 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.656861 kubelet[3135]: W0813 00:20:11.656564 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.656861 kubelet[3135]: E0813 00:20:11.656593 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.657910 kubelet[3135]: E0813 00:20:11.657468 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.657910 kubelet[3135]: W0813 00:20:11.657498 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.657910 kubelet[3135]: E0813 00:20:11.657528 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.658930 kubelet[3135]: E0813 00:20:11.658589 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.658930 kubelet[3135]: W0813 00:20:11.658621 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.658930 kubelet[3135]: E0813 00:20:11.658651 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.658930 kubelet[3135]: I0813 00:20:11.658711 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e5f0f3a3-68e2-4f84-92cb-c460ed58604c-registration-dir\") pod \"csi-node-driver-wzmtn\" (UID: \"e5f0f3a3-68e2-4f84-92cb-c460ed58604c\") " pod="calico-system/csi-node-driver-wzmtn" Aug 13 00:20:11.659537 kubelet[3135]: E0813 00:20:11.659504 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.659858 kubelet[3135]: W0813 00:20:11.659641 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.659858 kubelet[3135]: E0813 00:20:11.659693 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:11.659858 kubelet[3135]: I0813 00:20:11.659738 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e5f0f3a3-68e2-4f84-92cb-c460ed58604c-varrun\") pod \"csi-node-driver-wzmtn\" (UID: \"e5f0f3a3-68e2-4f84-92cb-c460ed58604c\") " pod="calico-system/csi-node-driver-wzmtn" Aug 13 00:20:11.661348 kubelet[3135]: E0813 00:20:11.661280 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.661348 kubelet[3135]: W0813 00:20:11.661330 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.661348 kubelet[3135]: E0813 00:20:11.661370 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.664194 kubelet[3135]: E0813 00:20:11.664132 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.664194 kubelet[3135]: W0813 00:20:11.664176 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.664440 kubelet[3135]: E0813 00:20:11.664213 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.666458 kubelet[3135]: E0813 00:20:11.666391 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.666458 kubelet[3135]: W0813 00:20:11.666437 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.666650 kubelet[3135]: E0813 00:20:11.666488 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.666650 kubelet[3135]: I0813 00:20:11.666534 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e5f0f3a3-68e2-4f84-92cb-c460ed58604c-kubelet-dir\") pod \"csi-node-driver-wzmtn\" (UID: \"e5f0f3a3-68e2-4f84-92cb-c460ed58604c\") " pod="calico-system/csi-node-driver-wzmtn" Aug 13 00:20:11.668095 kubelet[3135]: E0813 00:20:11.667937 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.668930 kubelet[3135]: W0813 00:20:11.668840 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.670066 kubelet[3135]: E0813 00:20:11.669121 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:11.670066 kubelet[3135]: I0813 00:20:11.669179 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e5f0f3a3-68e2-4f84-92cb-c460ed58604c-socket-dir\") pod \"csi-node-driver-wzmtn\" (UID: \"e5f0f3a3-68e2-4f84-92cb-c460ed58604c\") " pod="calico-system/csi-node-driver-wzmtn" Aug 13 00:20:11.671130 kubelet[3135]: E0813 00:20:11.671072 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.671130 kubelet[3135]: W0813 00:20:11.671116 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.671404 kubelet[3135]: E0813 00:20:11.671359 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.672813 kubelet[3135]: E0813 00:20:11.672742 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.672979 kubelet[3135]: W0813 00:20:11.672934 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.673983 kubelet[3135]: E0813 00:20:11.673642 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.674248 kubelet[3135]: E0813 00:20:11.674208 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.674248 kubelet[3135]: W0813 00:20:11.674241 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.674516 kubelet[3135]: E0813 00:20:11.674429 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.674516 kubelet[3135]: I0813 00:20:11.674483 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-944d6\" (UniqueName: \"kubernetes.io/projected/e5f0f3a3-68e2-4f84-92cb-c460ed58604c-kube-api-access-944d6\") pod \"csi-node-driver-wzmtn\" (UID: \"e5f0f3a3-68e2-4f84-92cb-c460ed58604c\") " pod="calico-system/csi-node-driver-wzmtn" Aug 13 00:20:11.676100 kubelet[3135]: E0813 00:20:11.676040 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.676100 kubelet[3135]: W0813 00:20:11.676086 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.676595 kubelet[3135]: E0813 00:20:11.676366 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:11.676955 kubelet[3135]: E0813 00:20:11.676913 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.676955 kubelet[3135]: W0813 00:20:11.676947 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.677120 kubelet[3135]: E0813 00:20:11.676979 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.678998 kubelet[3135]: E0813 00:20:11.678939 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.678998 kubelet[3135]: W0813 00:20:11.678984 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.679234 kubelet[3135]: E0813 00:20:11.679065 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.681596 kubelet[3135]: E0813 00:20:11.681513 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.681596 kubelet[3135]: W0813 00:20:11.681553 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.681853 kubelet[3135]: E0813 00:20:11.681602 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.682801 kubelet[3135]: E0813 00:20:11.682709 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.684667 kubelet[3135]: W0813 00:20:11.682839 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.684667 kubelet[3135]: E0813 00:20:11.682876 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.684667 kubelet[3135]: E0813 00:20:11.684270 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.684667 kubelet[3135]: W0813 00:20:11.684326 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.684667 kubelet[3135]: E0813 00:20:11.684360 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:11.778881 kubelet[3135]: E0813 00:20:11.778745 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.778881 kubelet[3135]: W0813 00:20:11.778868 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.779100 kubelet[3135]: E0813 00:20:11.778906 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.779722 kubelet[3135]: E0813 00:20:11.779671 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.779842 kubelet[3135]: W0813 00:20:11.779711 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.779899 kubelet[3135]: E0813 00:20:11.779848 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.781518 kubelet[3135]: E0813 00:20:11.781447 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.781518 kubelet[3135]: W0813 00:20:11.781506 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.781751 kubelet[3135]: E0813 00:20:11.781561 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.782088 kubelet[3135]: E0813 00:20:11.782021 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.782088 kubelet[3135]: W0813 00:20:11.782074 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.782263 kubelet[3135]: E0813 00:20:11.782116 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.783065 kubelet[3135]: E0813 00:20:11.782614 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.783065 kubelet[3135]: W0813 00:20:11.782669 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.783065 kubelet[3135]: E0813 00:20:11.782709 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:11.784430 kubelet[3135]: E0813 00:20:11.783952 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.784430 kubelet[3135]: W0813 00:20:11.783988 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.784430 kubelet[3135]: E0813 00:20:11.784037 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.785958 kubelet[3135]: E0813 00:20:11.785915 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.786338 kubelet[3135]: W0813 00:20:11.786109 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.786338 kubelet[3135]: E0813 00:20:11.786193 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.786820 kubelet[3135]: E0813 00:20:11.786791 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.787796 kubelet[3135]: W0813 00:20:11.786925 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.788300 kubelet[3135]: E0813 00:20:11.788111 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.788781 kubelet[3135]: E0813 00:20:11.788554 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.788781 kubelet[3135]: W0813 00:20:11.788582 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.788781 kubelet[3135]: E0813 00:20:11.788639 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.789997 kubelet[3135]: E0813 00:20:11.789959 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.790220 kubelet[3135]: W0813 00:20:11.790190 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.791854 kubelet[3135]: E0813 00:20:11.791794 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:11.792694 kubelet[3135]: E0813 00:20:11.792423 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.792694 kubelet[3135]: W0813 00:20:11.792454 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.792694 kubelet[3135]: E0813 00:20:11.792515 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.794805 kubelet[3135]: E0813 00:20:11.793996 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.795015 kubelet[3135]: W0813 00:20:11.794977 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.795383 kubelet[3135]: E0813 00:20:11.795152 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.796035 kubelet[3135]: E0813 00:20:11.795640 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.796035 kubelet[3135]: W0813 00:20:11.795668 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.796035 kubelet[3135]: E0813 00:20:11.795724 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.797907 kubelet[3135]: E0813 00:20:11.796746 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.798387 kubelet[3135]: W0813 00:20:11.798105 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.798387 kubelet[3135]: E0813 00:20:11.798364 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.800806 kubelet[3135]: E0813 00:20:11.799076 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.800806 kubelet[3135]: W0813 00:20:11.799103 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.800806 kubelet[3135]: E0813 00:20:11.799168 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:11.801192 kubelet[3135]: E0813 00:20:11.801163 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.801296 kubelet[3135]: W0813 00:20:11.801271 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.802961 kubelet[3135]: E0813 00:20:11.802922 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.803305 kubelet[3135]: W0813 00:20:11.803111 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.806230 kubelet[3135]: E0813 00:20:11.804545 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.806230 kubelet[3135]: W0813 00:20:11.804577 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.806562 kubelet[3135]: E0813 00:20:11.806533 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.807848 kubelet[3135]: W0813 00:20:11.806646 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.808078 kubelet[3135]: E0813 00:20:11.808048 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.809192 kubelet[3135]: E0813 00:20:11.807221 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.809192 kubelet[3135]: E0813 00:20:11.807251 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.809716 kubelet[3135]: E0813 00:20:11.807237 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.813308 kubelet[3135]: E0813 00:20:11.812901 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.813308 kubelet[3135]: W0813 00:20:11.812938 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.813308 kubelet[3135]: E0813 00:20:11.812986 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:11.813538 kubelet[3135]: E0813 00:20:11.813470 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.813538 kubelet[3135]: W0813 00:20:11.813491 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.813538 kubelet[3135]: E0813 00:20:11.813533 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.814809 kubelet[3135]: E0813 00:20:11.813875 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.814809 kubelet[3135]: W0813 00:20:11.813906 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.814809 kubelet[3135]: E0813 00:20:11.814081 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.815136 kubelet[3135]: E0813 00:20:11.815078 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.815136 kubelet[3135]: W0813 00:20:11.815130 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.815263 kubelet[3135]: E0813 00:20:11.815172 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.818197 kubelet[3135]: E0813 00:20:11.818136 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.818197 kubelet[3135]: W0813 00:20:11.818182 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.818954 kubelet[3135]: E0813 00:20:11.818880 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.820940 kubelet[3135]: E0813 00:20:11.820844 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.820940 kubelet[3135]: W0813 00:20:11.820886 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.820940 kubelet[3135]: E0813 00:20:11.820922 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:11.900060 kubelet[3135]: E0813 00:20:11.899909 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.900060 kubelet[3135]: W0813 00:20:11.899948 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.900060 kubelet[3135]: E0813 00:20:11.899989 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:11.913012 kubelet[3135]: E0813 00:20:11.912780 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:11.913012 kubelet[3135]: W0813 00:20:11.912815 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:11.913012 kubelet[3135]: E0813 00:20:11.912847 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:12.736821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1883046199.mount: Deactivated successfully. Aug 13 00:20:12.929630 kubelet[3135]: E0813 00:20:12.929514 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzmtn" podUID="e5f0f3a3-68e2-4f84-92cb-c460ed58604c" Aug 13 00:20:14.010968 containerd[1941]: time="2025-08-13T00:20:14.010881443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:14.013194 containerd[1941]: time="2025-08-13T00:20:14.013124051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Aug 13 00:20:14.013960 containerd[1941]: time="2025-08-13T00:20:14.013901111Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:14.018595 containerd[1941]: time="2025-08-13T00:20:14.018521976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:14.022087 containerd[1941]: time="2025-08-13T00:20:14.021887916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.611013113s" Aug 13 00:20:14.022087 containerd[1941]: time="2025-08-13T00:20:14.021951768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 13 
00:20:14.027112 containerd[1941]: time="2025-08-13T00:20:14.025804572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 00:20:14.061463 containerd[1941]: time="2025-08-13T00:20:14.061400940Z" level=info msg="CreateContainer within sandbox \"a0c4c086f05d75ce3ee03841f8826434518d59e3dab097d1da99017046c5a93f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:20:14.080506 containerd[1941]: time="2025-08-13T00:20:14.080422296Z" level=info msg="CreateContainer within sandbox \"a0c4c086f05d75ce3ee03841f8826434518d59e3dab097d1da99017046c5a93f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"09635a2618e82217c51ce3ac072566be6d1ebf8f45f4bec5a97485d0a54658aa\"" Aug 13 00:20:14.081806 containerd[1941]: time="2025-08-13T00:20:14.081714660Z" level=info msg="StartContainer for \"09635a2618e82217c51ce3ac072566be6d1ebf8f45f4bec5a97485d0a54658aa\"" Aug 13 00:20:14.137086 systemd[1]: Started cri-containerd-09635a2618e82217c51ce3ac072566be6d1ebf8f45f4bec5a97485d0a54658aa.scope - libcontainer container 09635a2618e82217c51ce3ac072566be6d1ebf8f45f4bec5a97485d0a54658aa. Aug 13 00:20:14.209426 containerd[1941]: time="2025-08-13T00:20:14.209342904Z" level=info msg="StartContainer for \"09635a2618e82217c51ce3ac072566be6d1ebf8f45f4bec5a97485d0a54658aa\" returns successfully" Aug 13 00:20:14.929121 kubelet[3135]: E0813 00:20:14.929037 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzmtn" podUID="e5f0f3a3-68e2-4f84-92cb-c460ed58604c" Aug 13 00:20:15.166088 containerd[1941]: time="2025-08-13T00:20:15.165996625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:15.168796 containerd[1941]: time="2025-08-13T00:20:15.168346081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 13 00:20:15.172142 containerd[1941]: time="2025-08-13T00:20:15.171702697Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:15.177665 containerd[1941]: time="2025-08-13T00:20:15.177580297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:15.179527 containerd[1941]: time="2025-08-13T00:20:15.179242849Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.153368545s" Aug 13 00:20:15.179527 containerd[1941]: time="2025-08-13T00:20:15.179301709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 13 00:20:15.184924 kubelet[3135]: E0813 00:20:15.183272 3135 driver-call.go:262] 
Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.184924 kubelet[3135]: W0813 00:20:15.183317 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.184924 kubelet[3135]: E0813 00:20:15.183353 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.185457 kubelet[3135]: E0813 00:20:15.185119 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.185457 kubelet[3135]: W0813 00:20:15.185453 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.186572 kubelet[3135]: E0813 00:20:15.185516 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.187121 kubelet[3135]: E0813 00:20:15.187068 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.187121 kubelet[3135]: W0813 00:20:15.187109 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.187278 kubelet[3135]: E0813 00:20:15.187141 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.189179 kubelet[3135]: E0813 00:20:15.189071 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.189939 kubelet[3135]: W0813 00:20:15.189343 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.189939 kubelet[3135]: E0813 00:20:15.189387 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.191156 kubelet[3135]: E0813 00:20:15.190661 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.191156 kubelet[3135]: W0813 00:20:15.190704 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.191156 kubelet[3135]: E0813 00:20:15.190737 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:15.192009 kubelet[3135]: E0813 00:20:15.191555 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.192009 kubelet[3135]: W0813 00:20:15.191644 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.192009 kubelet[3135]: E0813 00:20:15.191676 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.192261 kubelet[3135]: E0813 00:20:15.192123 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.192261 kubelet[3135]: W0813 00:20:15.192143 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.192261 kubelet[3135]: E0813 00:20:15.192166 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.193880 kubelet[3135]: E0813 00:20:15.192468 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.193880 kubelet[3135]: W0813 00:20:15.192497 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.193880 kubelet[3135]: E0813 00:20:15.192695 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.193880 kubelet[3135]: E0813 00:20:15.193311 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.193880 kubelet[3135]: W0813 00:20:15.193353 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.193880 kubelet[3135]: E0813 00:20:15.193391 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.195660 kubelet[3135]: E0813 00:20:15.195248 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.195660 kubelet[3135]: W0813 00:20:15.195282 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.195660 kubelet[3135]: E0813 00:20:15.195458 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:15.198879 kubelet[3135]: E0813 00:20:15.197935 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.198879 kubelet[3135]: W0813 00:20:15.197968 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.198879 kubelet[3135]: E0813 00:20:15.198000 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.199124 containerd[1941]: time="2025-08-13T00:20:15.198134113Z" level=info msg="CreateContainer within sandbox \"a697d3245b6b15fcaf793b6d93b79010fb1011dbeeb4a1c1c086322f345e637f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:20:15.202430 kubelet[3135]: E0813 00:20:15.200950 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.202430 kubelet[3135]: W0813 00:20:15.202012 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.202430 kubelet[3135]: E0813 00:20:15.202073 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.204750 kubelet[3135]: E0813 00:20:15.204269 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.204750 kubelet[3135]: W0813 00:20:15.204310 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.204750 kubelet[3135]: E0813 00:20:15.204345 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.206196 kubelet[3135]: E0813 00:20:15.206152 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.206483 kubelet[3135]: W0813 00:20:15.206311 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.206483 kubelet[3135]: E0813 00:20:15.206346 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:15.207750 kubelet[3135]: E0813 00:20:15.207571 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.207750 kubelet[3135]: W0813 00:20:15.207607 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.207750 kubelet[3135]: E0813 00:20:15.207636 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.229806 kubelet[3135]: I0813 00:20:15.227024 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5cc7cb64f8-xn556" podStartSLOduration=2.611151529 podStartE2EDuration="5.22700237s" podCreationTimestamp="2025-08-13 00:20:10 +0000 UTC" firstStartedPulling="2025-08-13 00:20:11.407580083 +0000 UTC m=+35.717927951" lastFinishedPulling="2025-08-13 00:20:14.02343084 +0000 UTC m=+38.333778792" observedRunningTime="2025-08-13 00:20:15.194520841 +0000 UTC m=+39.504868721" watchObservedRunningTime="2025-08-13 00:20:15.22700237 +0000 UTC m=+39.537350250" Aug 13 00:20:15.236459 kubelet[3135]: E0813 00:20:15.236414 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.236964 kubelet[3135]: W0813 00:20:15.236627 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.236964 kubelet[3135]: E0813 00:20:15.236671 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.239306 kubelet[3135]: E0813 00:20:15.239112 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.239306 kubelet[3135]: W0813 00:20:15.239145 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.239306 kubelet[3135]: E0813 00:20:15.239247 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.241223 kubelet[3135]: E0813 00:20:15.240926 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.241223 kubelet[3135]: W0813 00:20:15.241082 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.241807 kubelet[3135]: E0813 00:20:15.241527 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:15.242203 kubelet[3135]: E0813 00:20:15.242171 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.242696 kubelet[3135]: W0813 00:20:15.242353 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.243142 kubelet[3135]: E0813 00:20:15.242974 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.243697 kubelet[3135]: E0813 00:20:15.243656 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.244019 kubelet[3135]: W0813 00:20:15.243867 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.244019 kubelet[3135]: E0813 00:20:15.243971 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.245600 kubelet[3135]: E0813 00:20:15.245080 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.245600 kubelet[3135]: W0813 00:20:15.245116 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.245600 kubelet[3135]: E0813 00:20:15.245157 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.247177 kubelet[3135]: E0813 00:20:15.247138 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.247482 kubelet[3135]: W0813 00:20:15.247334 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.248115 kubelet[3135]: E0813 00:20:15.247983 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.249387 kubelet[3135]: E0813 00:20:15.248975 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.249387 kubelet[3135]: W0813 00:20:15.249040 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.249888 kubelet[3135]: E0813 00:20:15.249734 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:15.250167 containerd[1941]: time="2025-08-13T00:20:15.250094594Z" level=info msg="CreateContainer within sandbox \"a697d3245b6b15fcaf793b6d93b79010fb1011dbeeb4a1c1c086322f345e637f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9033d3374b0040cfbfd42b75bf8301a19de4f10997210826fc21e8135b415ec7\"" Aug 13 00:20:15.251996 containerd[1941]: time="2025-08-13T00:20:15.251683070Z" level=info msg="StartContainer for \"9033d3374b0040cfbfd42b75bf8301a19de4f10997210826fc21e8135b415ec7\"" Aug 13 00:20:15.253382 kubelet[3135]: E0813 00:20:15.253315 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.253685 kubelet[3135]: W0813 00:20:15.253346 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.254850 kubelet[3135]: E0813 00:20:15.253929 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.256177 kubelet[3135]: E0813 00:20:15.256115 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.258742 kubelet[3135]: W0813 00:20:15.258120 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.258742 kubelet[3135]: E0813 00:20:15.258285 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.260646 kubelet[3135]: E0813 00:20:15.260573 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.260646 kubelet[3135]: W0813 00:20:15.260606 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.262653 kubelet[3135]: E0813 00:20:15.262275 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.266140 kubelet[3135]: E0813 00:20:15.266083 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.266957 kubelet[3135]: W0813 00:20:15.266327 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.267420 kubelet[3135]: E0813 00:20:15.267282 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:15.268722 kubelet[3135]: E0813 00:20:15.268463 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.268722 kubelet[3135]: W0813 00:20:15.268499 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.269635 kubelet[3135]: E0813 00:20:15.269052 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.272409 kubelet[3135]: E0813 00:20:15.272101 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.272409 kubelet[3135]: W0813 00:20:15.272136 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.272721 kubelet[3135]: E0813 00:20:15.272689 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.273298 kubelet[3135]: E0813 00:20:15.273218 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.273298 kubelet[3135]: W0813 00:20:15.273260 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.273726 kubelet[3135]: E0813 00:20:15.273490 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.275024 kubelet[3135]: E0813 00:20:15.274750 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.275024 kubelet[3135]: W0813 00:20:15.274959 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.275674 kubelet[3135]: E0813 00:20:15.275410 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.277039 kubelet[3135]: E0813 00:20:15.276895 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.277039 kubelet[3135]: W0813 00:20:15.276929 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.277039 kubelet[3135]: E0813 00:20:15.276976 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:15.278448 kubelet[3135]: E0813 00:20:15.278290 3135 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:15.278448 kubelet[3135]: W0813 00:20:15.278337 3135 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:15.278448 kubelet[3135]: E0813 00:20:15.278374 3135 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:15.344100 systemd[1]: Started cri-containerd-9033d3374b0040cfbfd42b75bf8301a19de4f10997210826fc21e8135b415ec7.scope - libcontainer container 9033d3374b0040cfbfd42b75bf8301a19de4f10997210826fc21e8135b415ec7. Aug 13 00:20:15.397136 containerd[1941]: time="2025-08-13T00:20:15.396967898Z" level=info msg="StartContainer for \"9033d3374b0040cfbfd42b75bf8301a19de4f10997210826fc21e8135b415ec7\" returns successfully" Aug 13 00:20:15.426860 systemd[1]: cri-containerd-9033d3374b0040cfbfd42b75bf8301a19de4f10997210826fc21e8135b415ec7.scope: Deactivated successfully. Aug 13 00:20:15.475720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9033d3374b0040cfbfd42b75bf8301a19de4f10997210826fc21e8135b415ec7-rootfs.mount: Deactivated successfully. Aug 13 00:20:15.793553 containerd[1941]: time="2025-08-13T00:20:15.793367044Z" level=info msg="shim disconnected" id=9033d3374b0040cfbfd42b75bf8301a19de4f10997210826fc21e8135b415ec7 namespace=k8s.io Aug 13 00:20:15.793553 containerd[1941]: time="2025-08-13T00:20:15.793446376Z" level=warning msg="cleaning up after shim disconnected" id=9033d3374b0040cfbfd42b75bf8301a19de4f10997210826fc21e8135b415ec7 namespace=k8s.io Aug 13 00:20:15.793553 containerd[1941]: time="2025-08-13T00:20:15.793468888Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:20:16.168894 containerd[1941]: time="2025-08-13T00:20:16.167745470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:20:16.929067 kubelet[3135]: E0813 00:20:16.928950 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzmtn" podUID="e5f0f3a3-68e2-4f84-92cb-c460ed58604c" Aug 13 00:20:18.911186 containerd[1941]: time="2025-08-13T00:20:18.911103740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:18.912934 containerd[1941]: time="2025-08-13T00:20:18.912864056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 13 00:20:18.915328 containerd[1941]: time="2025-08-13T00:20:18.915251792Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:18.920360 containerd[1941]: time="2025-08-13T00:20:18.920267888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:18.921809 containerd[1941]: 
time="2025-08-13T00:20:18.921653660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.752021154s" Aug 13 00:20:18.921809 containerd[1941]: time="2025-08-13T00:20:18.921707192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 13 00:20:18.927565 containerd[1941]: time="2025-08-13T00:20:18.927452504Z" level=info msg="CreateContainer within sandbox \"a697d3245b6b15fcaf793b6d93b79010fb1011dbeeb4a1c1c086322f345e637f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:20:18.929834 kubelet[3135]: E0813 00:20:18.929648 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzmtn" podUID="e5f0f3a3-68e2-4f84-92cb-c460ed58604c" Aug 13 00:20:18.960416 containerd[1941]: time="2025-08-13T00:20:18.960348812Z" level=info msg="CreateContainer within sandbox \"a697d3245b6b15fcaf793b6d93b79010fb1011dbeeb4a1c1c086322f345e637f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f4aabd9498942ac844584131ef9f739d66e2757f00efe4d68f113249e48cd913\"" Aug 13 00:20:18.968037 containerd[1941]: time="2025-08-13T00:20:18.967149140Z" level=info msg="StartContainer for \"f4aabd9498942ac844584131ef9f739d66e2757f00efe4d68f113249e48cd913\"" Aug 13 00:20:19.025739 systemd[1]: run-containerd-runc-k8s.io-f4aabd9498942ac844584131ef9f739d66e2757f00efe4d68f113249e48cd913-runc.w1KX1x.mount: Deactivated successfully. Aug 13 00:20:19.039114 systemd[1]: Started cri-containerd-f4aabd9498942ac844584131ef9f739d66e2757f00efe4d68f113249e48cd913.scope - libcontainer container f4aabd9498942ac844584131ef9f739d66e2757f00efe4d68f113249e48cd913. Aug 13 00:20:19.107442 containerd[1941]: time="2025-08-13T00:20:19.106969193Z" level=info msg="StartContainer for \"f4aabd9498942ac844584131ef9f739d66e2757f00efe4d68f113249e48cd913\" returns successfully" Aug 13 00:20:20.126387 containerd[1941]: time="2025-08-13T00:20:20.126274158Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:20:20.131121 systemd[1]: cri-containerd-f4aabd9498942ac844584131ef9f739d66e2757f00efe4d68f113249e48cd913.scope: Deactivated successfully. Aug 13 00:20:20.132094 systemd[1]: cri-containerd-f4aabd9498942ac844584131ef9f739d66e2757f00efe4d68f113249e48cd913.scope: Consumed 1.012s CPU time. Aug 13 00:20:20.157397 kubelet[3135]: I0813 00:20:20.156208 3135 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:20:20.197162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4aabd9498942ac844584131ef9f739d66e2757f00efe4d68f113249e48cd913-rootfs.mount: Deactivated successfully. 
Aug 13 00:20:20.245589 systemd[1]: Created slice kubepods-burstable-pod3a2c1391_3856_407e_9f32_3dffc0012695.slice - libcontainer container kubepods-burstable-pod3a2c1391_3856_407e_9f32_3dffc0012695.slice. Aug 13 00:20:20.279373 systemd[1]: Created slice kubepods-besteffort-pod73b3a345_ddf0_47d6_a1d3_270371119508.slice - libcontainer container kubepods-besteffort-pod73b3a345_ddf0_47d6_a1d3_270371119508.slice. Aug 13 00:20:20.297974 kubelet[3135]: I0813 00:20:20.297925 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltwd8\" (UniqueName: \"kubernetes.io/projected/73b3a345-ddf0-47d6-a1d3-270371119508-kube-api-access-ltwd8\") pod \"calico-apiserver-65d97cd995-7ztd9\" (UID: \"73b3a345-ddf0-47d6-a1d3-270371119508\") " pod="calico-apiserver/calico-apiserver-65d97cd995-7ztd9" Aug 13 00:20:20.298503 kubelet[3135]: I0813 00:20:20.298456 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/73b3a345-ddf0-47d6-a1d3-270371119508-calico-apiserver-certs\") pod \"calico-apiserver-65d97cd995-7ztd9\" (UID: \"73b3a345-ddf0-47d6-a1d3-270371119508\") " pod="calico-apiserver/calico-apiserver-65d97cd995-7ztd9" Aug 13 00:20:20.298871 kubelet[3135]: I0813 00:20:20.298812 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a2c1391-3856-407e-9f32-3dffc0012695-config-volume\") pod \"coredns-7c65d6cfc9-6mjjr\" (UID: \"3a2c1391-3856-407e-9f32-3dffc0012695\") " pod="kube-system/coredns-7c65d6cfc9-6mjjr" Aug 13 00:20:20.299607 kubelet[3135]: I0813 00:20:20.299157 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4qq8\" (UniqueName: \"kubernetes.io/projected/3a2c1391-3856-407e-9f32-3dffc0012695-kube-api-access-m4qq8\") pod \"coredns-7c65d6cfc9-6mjjr\" (UID: \"3a2c1391-3856-407e-9f32-3dffc0012695\") " pod="kube-system/coredns-7c65d6cfc9-6mjjr" Aug 13 00:20:20.335855 systemd[1]: Created slice kubepods-besteffort-podd21d0cba_f46e_4a88_8bd1_db42d9c6b456.slice - libcontainer container kubepods-besteffort-podd21d0cba_f46e_4a88_8bd1_db42d9c6b456.slice. Aug 13 00:20:20.363701 systemd[1]: Created slice kubepods-burstable-podf1dfa461_4b5f_4ed8_a850_cf604830db07.slice - libcontainer container kubepods-burstable-podf1dfa461_4b5f_4ed8_a850_cf604830db07.slice. Aug 13 00:20:20.386580 systemd[1]: Created slice kubepods-besteffort-pod0399420f_0a76_4a22_be47_c06978fb8813.slice - libcontainer container kubepods-besteffort-pod0399420f_0a76_4a22_be47_c06978fb8813.slice. 
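[Editor's note on the "Created slice" entries above: this is the kubelet's systemd cgroup driver at work. Each pod gets a slice under kubepods named from its QoS class and UID, with the UID's dashes escaped to underscores because systemd reserves "-" for slice hierarchy. A rough reconstruction of the naming pattern, inferred only from the names visible in the log rather than from kubelet's source:]

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName rebuilds the slice name pattern observed in the log:
// kubepods-<qos>-pod<uid-with-dashes-escaped>.slice. This illustrates
// the observed convention; it is not kubelet's actual code.
func podSliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UID of the coredns-7c65d6cfc9-6mjjr pod from the log above.
	fmt.Println(podSliceName("burstable", "3a2c1391-3856-407e-9f32-3dffc0012695"))
	// -> kubepods-burstable-pod3a2c1391_3856_407e_9f32_3dffc0012695.slice
}
```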
Aug 13 00:20:20.400490 kubelet[3135]: I0813 00:20:20.400138 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0399420f-0a76-4a22-be47-c06978fb8813-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-555lv\" (UID: \"0399420f-0a76-4a22-be47-c06978fb8813\") " pod="calico-system/goldmane-58fd7646b9-555lv" Aug 13 00:20:20.400490 kubelet[3135]: I0813 00:20:20.400327 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p968z\" (UniqueName: \"kubernetes.io/projected/0399420f-0a76-4a22-be47-c06978fb8813-kube-api-access-p968z\") pod \"goldmane-58fd7646b9-555lv\" (UID: \"0399420f-0a76-4a22-be47-c06978fb8813\") " pod="calico-system/goldmane-58fd7646b9-555lv" Aug 13 00:20:20.401464 kubelet[3135]: I0813 00:20:20.400587 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7tdp\" (UniqueName: \"kubernetes.io/projected/f1dfa461-4b5f-4ed8-a850-cf604830db07-kube-api-access-z7tdp\") pod \"coredns-7c65d6cfc9-p2lsx\" (UID: \"f1dfa461-4b5f-4ed8-a850-cf604830db07\") " pod="kube-system/coredns-7c65d6cfc9-p2lsx" Aug 13 00:20:20.401464 kubelet[3135]: I0813 00:20:20.400875 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv62t\" (UniqueName: \"kubernetes.io/projected/af68b499-5806-484e-ba8b-a6c001a6ada8-kube-api-access-jv62t\") pod \"whisker-b5fc5d65f-tngz7\" (UID: \"af68b499-5806-484e-ba8b-a6c001a6ada8\") " pod="calico-system/whisker-b5fc5d65f-tngz7" Aug 13 00:20:20.402377 kubelet[3135]: I0813 00:20:20.401863 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0399420f-0a76-4a22-be47-c06978fb8813-config\") pod \"goldmane-58fd7646b9-555lv\" (UID: \"0399420f-0a76-4a22-be47-c06978fb8813\") " pod="calico-system/goldmane-58fd7646b9-555lv" Aug 13 00:20:20.402794 kubelet[3135]: I0813 00:20:20.402626 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1dfa461-4b5f-4ed8-a850-cf604830db07-config-volume\") pod \"coredns-7c65d6cfc9-p2lsx\" (UID: \"f1dfa461-4b5f-4ed8-a850-cf604830db07\") " pod="kube-system/coredns-7c65d6cfc9-p2lsx" Aug 13 00:20:20.403339 kubelet[3135]: I0813 00:20:20.403110 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/af68b499-5806-484e-ba8b-a6c001a6ada8-whisker-backend-key-pair\") pod \"whisker-b5fc5d65f-tngz7\" (UID: \"af68b499-5806-484e-ba8b-a6c001a6ada8\") " pod="calico-system/whisker-b5fc5d65f-tngz7" Aug 13 00:20:20.404046 kubelet[3135]: I0813 00:20:20.403968 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0399420f-0a76-4a22-be47-c06978fb8813-goldmane-key-pair\") pod \"goldmane-58fd7646b9-555lv\" (UID: \"0399420f-0a76-4a22-be47-c06978fb8813\") " pod="calico-system/goldmane-58fd7646b9-555lv" Aug 13 00:20:20.404751 kubelet[3135]: I0813 00:20:20.404339 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzmbc\" (UniqueName: \"kubernetes.io/projected/594d665a-07d4-46f9-938a-94309ad04257-kube-api-access-vzmbc\") pod 
\"calico-kube-controllers-6db5fd67fb-ph246\" (UID: \"594d665a-07d4-46f9-938a-94309ad04257\") " pod="calico-system/calico-kube-controllers-6db5fd67fb-ph246" Aug 13 00:20:20.404751 kubelet[3135]: I0813 00:20:20.404669 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ftln\" (UniqueName: \"kubernetes.io/projected/d21d0cba-f46e-4a88-8bd1-db42d9c6b456-kube-api-access-2ftln\") pod \"calico-apiserver-65d97cd995-jdgkd\" (UID: \"d21d0cba-f46e-4a88-8bd1-db42d9c6b456\") " pod="calico-apiserver/calico-apiserver-65d97cd995-jdgkd" Aug 13 00:20:20.405829 kubelet[3135]: I0813 00:20:20.404718 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af68b499-5806-484e-ba8b-a6c001a6ada8-whisker-ca-bundle\") pod \"whisker-b5fc5d65f-tngz7\" (UID: \"af68b499-5806-484e-ba8b-a6c001a6ada8\") " pod="calico-system/whisker-b5fc5d65f-tngz7" Aug 13 00:20:20.407407 kubelet[3135]: I0813 00:20:20.407359 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/594d665a-07d4-46f9-938a-94309ad04257-tigera-ca-bundle\") pod \"calico-kube-controllers-6db5fd67fb-ph246\" (UID: \"594d665a-07d4-46f9-938a-94309ad04257\") " pod="calico-system/calico-kube-controllers-6db5fd67fb-ph246" Aug 13 00:20:20.407612 systemd[1]: Created slice kubepods-besteffort-podaf68b499_5806_484e_ba8b_a6c001a6ada8.slice - libcontainer container kubepods-besteffort-podaf68b499_5806_484e_ba8b_a6c001a6ada8.slice. Aug 13 00:20:20.410832 kubelet[3135]: I0813 00:20:20.410468 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d21d0cba-f46e-4a88-8bd1-db42d9c6b456-calico-apiserver-certs\") pod \"calico-apiserver-65d97cd995-jdgkd\" (UID: \"d21d0cba-f46e-4a88-8bd1-db42d9c6b456\") " pod="calico-apiserver/calico-apiserver-65d97cd995-jdgkd" Aug 13 00:20:20.451604 systemd[1]: Created slice kubepods-besteffort-pod594d665a_07d4_46f9_938a_94309ad04257.slice - libcontainer container kubepods-besteffort-pod594d665a_07d4_46f9_938a_94309ad04257.slice. 
Aug 13 00:20:20.605579 containerd[1941]: time="2025-08-13T00:20:20.605521544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6mjjr,Uid:3a2c1391-3856-407e-9f32-3dffc0012695,Namespace:kube-system,Attempt:0,}" Aug 13 00:20:20.622556 containerd[1941]: time="2025-08-13T00:20:20.622473908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65d97cd995-7ztd9,Uid:73b3a345-ddf0-47d6-a1d3-270371119508,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:20:20.656664 containerd[1941]: time="2025-08-13T00:20:20.655624988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65d97cd995-jdgkd,Uid:d21d0cba-f46e-4a88-8bd1-db42d9c6b456,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:20:20.672369 containerd[1941]: time="2025-08-13T00:20:20.672306249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p2lsx,Uid:f1dfa461-4b5f-4ed8-a850-cf604830db07,Namespace:kube-system,Attempt:0,}" Aug 13 00:20:20.712934 containerd[1941]: time="2025-08-13T00:20:20.712553421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-555lv,Uid:0399420f-0a76-4a22-be47-c06978fb8813,Namespace:calico-system,Attempt:0,}" Aug 13 00:20:20.724044 containerd[1941]: time="2025-08-13T00:20:20.723974697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b5fc5d65f-tngz7,Uid:af68b499-5806-484e-ba8b-a6c001a6ada8,Namespace:calico-system,Attempt:0,}" Aug 13 00:20:20.759712 containerd[1941]: time="2025-08-13T00:20:20.759378561Z" level=info msg="shim disconnected" id=f4aabd9498942ac844584131ef9f739d66e2757f00efe4d68f113249e48cd913 namespace=k8s.io Aug 13 00:20:20.759712 containerd[1941]: time="2025-08-13T00:20:20.759464829Z" level=warning msg="cleaning up after shim disconnected" id=f4aabd9498942ac844584131ef9f739d66e2757f00efe4d68f113249e48cd913 namespace=k8s.io Aug 13 00:20:20.759712 containerd[1941]: time="2025-08-13T00:20:20.759485637Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:20:20.771165 containerd[1941]: time="2025-08-13T00:20:20.771106233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db5fd67fb-ph246,Uid:594d665a-07d4-46f9-938a-94309ad04257,Namespace:calico-system,Attempt:0,}" Aug 13 00:20:20.955415 systemd[1]: Created slice kubepods-besteffort-pode5f0f3a3_68e2_4f84_92cb_c460ed58604c.slice - libcontainer container kubepods-besteffort-pode5f0f3a3_68e2_4f84_92cb_c460ed58604c.slice. 
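[Editor's note: the burst of RunPodSandbox calls above follows the node flipping Ready, but CNI setup is still racing it. The install-cni container has only just written /etc/cni/net.d/calico-kubeconfig, and containerd's reload earlier reported "no network config found in /etc/cni/net.d: cni plugin not initialized". A minimal probe of that directory using the upstream libcni helper is sketched below; the extension list is a common choice and an assumption here, not a quote of containerd's configuration.]

```go
package main

import (
	"fmt"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// containerd's CRI plugin watches this directory for network
	// configs; "cni plugin not initialized" means no usable file
	// has appeared here yet.
	files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conf", ".conflist", ".json"})
	if err != nil {
		fmt.Println("scan failed:", err)
		return
	}
	if len(files) == 0 {
		// Mirrors the containerd message seen in the log:
		// "no network config found in /etc/cni/net.d".
		fmt.Println("no network config found in /etc/cni/net.d")
		return
	}
	fmt.Println("CNI config candidates:", files)
}
```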
Aug 13 00:20:20.970521 containerd[1941]: time="2025-08-13T00:20:20.970448170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzmtn,Uid:e5f0f3a3-68e2-4f84-92cb-c460ed58604c,Namespace:calico-system,Attempt:0,}" Aug 13 00:20:21.270308 containerd[1941]: time="2025-08-13T00:20:21.270049124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:20:21.336721 containerd[1941]: time="2025-08-13T00:20:21.336644684Z" level=error msg="Failed to destroy network for sandbox \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.341353 containerd[1941]: time="2025-08-13T00:20:21.341262176Z" level=error msg="encountered an error cleaning up failed sandbox \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.341528 containerd[1941]: time="2025-08-13T00:20:21.341384996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db5fd67fb-ph246,Uid:594d665a-07d4-46f9-938a-94309ad04257,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.342477 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291-shm.mount: Deactivated successfully. 
Aug 13 00:20:21.343753 kubelet[3135]: E0813 00:20:21.343673 3135 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.346172 kubelet[3135]: E0813 00:20:21.343792 3135 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6db5fd67fb-ph246" Aug 13 00:20:21.346172 kubelet[3135]: E0813 00:20:21.343827 3135 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6db5fd67fb-ph246" Aug 13 00:20:21.346172 kubelet[3135]: E0813 00:20:21.343905 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6db5fd67fb-ph246_calico-system(594d665a-07d4-46f9-938a-94309ad04257)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6db5fd67fb-ph246_calico-system(594d665a-07d4-46f9-938a-94309ad04257)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6db5fd67fb-ph246" podUID="594d665a-07d4-46f9-938a-94309ad04257" Aug 13 00:20:21.388092 containerd[1941]: time="2025-08-13T00:20:21.387879452Z" level=error msg="Failed to destroy network for sandbox \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.394807 containerd[1941]: time="2025-08-13T00:20:21.390679964Z" level=error msg="encountered an error cleaning up failed sandbox \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.393605 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b-shm.mount: Deactivated successfully. 
Aug 13 00:20:21.400220 containerd[1941]: time="2025-08-13T00:20:21.400152860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6mjjr,Uid:3a2c1391-3856-407e-9f32-3dffc0012695,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.400709 kubelet[3135]: E0813 00:20:21.400661 3135 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.400985 kubelet[3135]: E0813 00:20:21.400949 3135 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6mjjr" Aug 13 00:20:21.401146 kubelet[3135]: E0813 00:20:21.401109 3135 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6mjjr" Aug 13 00:20:21.401337 kubelet[3135]: E0813 00:20:21.401296 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6mjjr_kube-system(3a2c1391-3856-407e-9f32-3dffc0012695)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6mjjr_kube-system(3a2c1391-3856-407e-9f32-3dffc0012695)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6mjjr" podUID="3a2c1391-3856-407e-9f32-3dffc0012695" Aug 13 00:20:21.411049 containerd[1941]: time="2025-08-13T00:20:21.410982536Z" level=error msg="Failed to destroy network for sandbox \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.413796 containerd[1941]: time="2025-08-13T00:20:21.412966196Z" level=error msg="encountered an error cleaning up failed sandbox \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Aug 13 00:20:21.415930 containerd[1941]: time="2025-08-13T00:20:21.415869008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p2lsx,Uid:f1dfa461-4b5f-4ed8-a850-cf604830db07,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.416753 kubelet[3135]: E0813 00:20:21.416705 3135 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.417099 kubelet[3135]: E0813 00:20:21.417061 3135 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-p2lsx" Aug 13 00:20:21.417259 kubelet[3135]: E0813 00:20:21.417220 3135 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-p2lsx" Aug 13 00:20:21.417377 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b-shm.mount: Deactivated successfully. 
Aug 13 00:20:21.417598 kubelet[3135]: E0813 00:20:21.417553 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-p2lsx_kube-system(f1dfa461-4b5f-4ed8-a850-cf604830db07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-p2lsx_kube-system(f1dfa461-4b5f-4ed8-a850-cf604830db07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-p2lsx" podUID="f1dfa461-4b5f-4ed8-a850-cf604830db07" Aug 13 00:20:21.436035 containerd[1941]: time="2025-08-13T00:20:21.435967532Z" level=error msg="Failed to destroy network for sandbox \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.442320 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1-shm.mount: Deactivated successfully. Aug 13 00:20:21.445146 containerd[1941]: time="2025-08-13T00:20:21.445004072Z" level=error msg="encountered an error cleaning up failed sandbox \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.445529 containerd[1941]: time="2025-08-13T00:20:21.445297364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65d97cd995-7ztd9,Uid:73b3a345-ddf0-47d6-a1d3-270371119508,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.446730 kubelet[3135]: E0813 00:20:21.446199 3135 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.446730 kubelet[3135]: E0813 00:20:21.446286 3135 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65d97cd995-7ztd9" Aug 13 00:20:21.446730 kubelet[3135]: E0813 00:20:21.446324 3135 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65d97cd995-7ztd9" Aug 13 00:20:21.447063 kubelet[3135]: E0813 00:20:21.446393 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65d97cd995-7ztd9_calico-apiserver(73b3a345-ddf0-47d6-a1d3-270371119508)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65d97cd995-7ztd9_calico-apiserver(73b3a345-ddf0-47d6-a1d3-270371119508)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65d97cd995-7ztd9" podUID="73b3a345-ddf0-47d6-a1d3-270371119508" Aug 13 00:20:21.461941 containerd[1941]: time="2025-08-13T00:20:21.461548724Z" level=error msg="Failed to destroy network for sandbox \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.463930 containerd[1941]: time="2025-08-13T00:20:21.463807292Z" level=error msg="encountered an error cleaning up failed sandbox \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.464381 containerd[1941]: time="2025-08-13T00:20:21.464213936Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65d97cd995-jdgkd,Uid:d21d0cba-f46e-4a88-8bd1-db42d9c6b456,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.465106 kubelet[3135]: E0813 00:20:21.464863 3135 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.465260 kubelet[3135]: E0813 00:20:21.465071 3135 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65d97cd995-jdgkd" Aug 13 00:20:21.465260 kubelet[3135]: E0813 00:20:21.465236 3135 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65d97cd995-jdgkd" Aug 13 00:20:21.466012 kubelet[3135]: E0813 00:20:21.465545 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65d97cd995-jdgkd_calico-apiserver(d21d0cba-f46e-4a88-8bd1-db42d9c6b456)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65d97cd995-jdgkd_calico-apiserver(d21d0cba-f46e-4a88-8bd1-db42d9c6b456)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65d97cd995-jdgkd" podUID="d21d0cba-f46e-4a88-8bd1-db42d9c6b456" Aug 13 00:20:21.473807 containerd[1941]: time="2025-08-13T00:20:21.473619081Z" level=error msg="Failed to destroy network for sandbox \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.474449 containerd[1941]: time="2025-08-13T00:20:21.474222729Z" level=error msg="encountered an error cleaning up failed sandbox \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.474449 containerd[1941]: time="2025-08-13T00:20:21.474334713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b5fc5d65f-tngz7,Uid:af68b499-5806-484e-ba8b-a6c001a6ada8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.475558 kubelet[3135]: E0813 00:20:21.474792 3135 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.475558 kubelet[3135]: E0813 00:20:21.474878 3135 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-b5fc5d65f-tngz7" Aug 13 00:20:21.475558 kubelet[3135]: E0813 00:20:21.474914 3135 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-b5fc5d65f-tngz7" Aug 13 00:20:21.475811 kubelet[3135]: E0813 00:20:21.474998 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-b5fc5d65f-tngz7_calico-system(af68b499-5806-484e-ba8b-a6c001a6ada8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-b5fc5d65f-tngz7_calico-system(af68b499-5806-484e-ba8b-a6c001a6ada8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b5fc5d65f-tngz7" podUID="af68b499-5806-484e-ba8b-a6c001a6ada8" Aug 13 00:20:21.483409 containerd[1941]: time="2025-08-13T00:20:21.482891169Z" level=error msg="Failed to destroy network for sandbox \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.484227 containerd[1941]: time="2025-08-13T00:20:21.483740109Z" level=error msg="encountered an error cleaning up failed sandbox \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.484227 containerd[1941]: time="2025-08-13T00:20:21.483943437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzmtn,Uid:e5f0f3a3-68e2-4f84-92cb-c460ed58604c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.484453 kubelet[3135]: E0813 00:20:21.484274 3135 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.484453 kubelet[3135]: E0813 00:20:21.484348 3135 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wzmtn" Aug 13 00:20:21.484453 kubelet[3135]: E0813 00:20:21.484376 3135 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wzmtn" Aug 13 00:20:21.484614 kubelet[3135]: E0813 00:20:21.484437 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wzmtn_calico-system(e5f0f3a3-68e2-4f84-92cb-c460ed58604c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wzmtn_calico-system(e5f0f3a3-68e2-4f84-92cb-c460ed58604c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wzmtn" podUID="e5f0f3a3-68e2-4f84-92cb-c460ed58604c" Aug 13 00:20:21.496914 containerd[1941]: time="2025-08-13T00:20:21.496738797Z" level=error msg="Failed to destroy network for sandbox \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.497735 containerd[1941]: time="2025-08-13T00:20:21.497667621Z" level=error msg="encountered an error cleaning up failed sandbox \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.498003 containerd[1941]: time="2025-08-13T00:20:21.497827557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-555lv,Uid:0399420f-0a76-4a22-be47-c06978fb8813,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.498714 kubelet[3135]: E0813 00:20:21.498308 3135 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:21.498714 kubelet[3135]: E0813 00:20:21.498386 3135 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-555lv" Aug 13 00:20:21.498714 kubelet[3135]: E0813 00:20:21.498416 3135 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-555lv" Aug 13 00:20:21.499184 kubelet[3135]: E0813 00:20:21.498494 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-555lv_calico-system(0399420f-0a76-4a22-be47-c06978fb8813)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-555lv_calico-system(0399420f-0a76-4a22-be47-c06978fb8813)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-555lv" podUID="0399420f-0a76-4a22-be47-c06978fb8813" Aug 13 00:20:22.193562 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03-shm.mount: Deactivated successfully. Aug 13 00:20:22.193757 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e-shm.mount: Deactivated successfully. Aug 13 00:20:22.193936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c-shm.mount: Deactivated successfully. Aug 13 00:20:22.194077 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68-shm.mount: Deactivated successfully. 
Aug 13 00:20:22.227959 kubelet[3135]: I0813 00:20:22.227014 3135 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Aug 13 00:20:22.228867 containerd[1941]: time="2025-08-13T00:20:22.228687296Z" level=info msg="StopPodSandbox for \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\"" Aug 13 00:20:22.230030 containerd[1941]: time="2025-08-13T00:20:22.229779524Z" level=info msg="Ensure that sandbox 8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b in task-service has been cleanup successfully" Aug 13 00:20:22.232367 kubelet[3135]: I0813 00:20:22.231569 3135 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Aug 13 00:20:22.234541 containerd[1941]: time="2025-08-13T00:20:22.234278360Z" level=info msg="StopPodSandbox for \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\"" Aug 13 00:20:22.235130 containerd[1941]: time="2025-08-13T00:20:22.234954308Z" level=info msg="Ensure that sandbox 2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1 in task-service has been cleanup successfully" Aug 13 00:20:22.236557 kubelet[3135]: I0813 00:20:22.236507 3135 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Aug 13 00:20:22.244329 kubelet[3135]: I0813 00:20:22.242465 3135 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Aug 13 00:20:22.244514 containerd[1941]: time="2025-08-13T00:20:22.241527920Z" level=info msg="StopPodSandbox for \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\"" Aug 13 00:20:22.244755 containerd[1941]: time="2025-08-13T00:20:22.244678760Z" level=info msg="StopPodSandbox for \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\"" Aug 13 00:20:22.245017 containerd[1941]: time="2025-08-13T00:20:22.244970492Z" level=info msg="Ensure that sandbox 58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b in task-service has been cleanup successfully" Aug 13 00:20:22.246504 containerd[1941]: time="2025-08-13T00:20:22.245512472Z" level=info msg="Ensure that sandbox 9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03 in task-service has been cleanup successfully" Aug 13 00:20:22.257813 kubelet[3135]: I0813 00:20:22.257705 3135 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Aug 13 00:20:22.261908 containerd[1941]: time="2025-08-13T00:20:22.261629972Z" level=info msg="StopPodSandbox for \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\"" Aug 13 00:20:22.263809 containerd[1941]: time="2025-08-13T00:20:22.263663852Z" level=info msg="Ensure that sandbox fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e in task-service has been cleanup successfully" Aug 13 00:20:22.275662 kubelet[3135]: I0813 00:20:22.275501 3135 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Aug 13 00:20:22.278087 containerd[1941]: time="2025-08-13T00:20:22.278036469Z" level=info msg="StopPodSandbox for \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\"" Aug 13 00:20:22.286028 
containerd[1941]: time="2025-08-13T00:20:22.285727677Z" level=info msg="Ensure that sandbox a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291 in task-service has been cleanup successfully" Aug 13 00:20:22.290616 kubelet[3135]: I0813 00:20:22.290548 3135 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Aug 13 00:20:22.298017 containerd[1941]: time="2025-08-13T00:20:22.297958521Z" level=info msg="StopPodSandbox for \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\"" Aug 13 00:20:22.304808 containerd[1941]: time="2025-08-13T00:20:22.304494561Z" level=info msg="Ensure that sandbox f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68 in task-service has been cleanup successfully" Aug 13 00:20:22.340358 kubelet[3135]: I0813 00:20:22.340177 3135 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Aug 13 00:20:22.342817 containerd[1941]: time="2025-08-13T00:20:22.341805489Z" level=info msg="StopPodSandbox for \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\"" Aug 13 00:20:22.342817 containerd[1941]: time="2025-08-13T00:20:22.342175173Z" level=info msg="Ensure that sandbox ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c in task-service has been cleanup successfully" Aug 13 00:20:22.469235 containerd[1941]: time="2025-08-13T00:20:22.468912993Z" level=error msg="StopPodSandbox for \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\" failed" error="failed to destroy network for sandbox \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.471830 kubelet[3135]: E0813 00:20:22.471249 3135 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Aug 13 00:20:22.471830 kubelet[3135]: E0813 00:20:22.471334 3135 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b"} Aug 13 00:20:22.471830 kubelet[3135]: E0813 00:20:22.471432 3135 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f1dfa461-4b5f-4ed8-a850-cf604830db07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:20:22.471830 kubelet[3135]: E0813 00:20:22.471472 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f1dfa461-4b5f-4ed8-a850-cf604830db07\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-p2lsx" podUID="f1dfa461-4b5f-4ed8-a850-cf604830db07" Aug 13 00:20:22.489382 containerd[1941]: time="2025-08-13T00:20:22.487944526Z" level=error msg="StopPodSandbox for \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\" failed" error="failed to destroy network for sandbox \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.489560 kubelet[3135]: E0813 00:20:22.488286 3135 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Aug 13 00:20:22.489560 kubelet[3135]: E0813 00:20:22.488350 3135 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1"} Aug 13 00:20:22.489560 kubelet[3135]: E0813 00:20:22.488404 3135 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"73b3a345-ddf0-47d6-a1d3-270371119508\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:20:22.489560 kubelet[3135]: E0813 00:20:22.488450 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"73b3a345-ddf0-47d6-a1d3-270371119508\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65d97cd995-7ztd9" podUID="73b3a345-ddf0-47d6-a1d3-270371119508" Aug 13 00:20:22.500427 containerd[1941]: time="2025-08-13T00:20:22.500336542Z" level=error msg="StopPodSandbox for \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\" failed" error="failed to destroy network for sandbox \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.500786 kubelet[3135]: E0813 00:20:22.500697 3135 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Aug 13 00:20:22.500920 kubelet[3135]: E0813 00:20:22.500864 3135 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03"} Aug 13 00:20:22.500978 kubelet[3135]: E0813 00:20:22.500930 3135 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5f0f3a3-68e2-4f84-92cb-c460ed58604c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:20:22.501196 kubelet[3135]: E0813 00:20:22.500969 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5f0f3a3-68e2-4f84-92cb-c460ed58604c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wzmtn" podUID="e5f0f3a3-68e2-4f84-92cb-c460ed58604c" Aug 13 00:20:22.501373 containerd[1941]: time="2025-08-13T00:20:22.501309034Z" level=error msg="StopPodSandbox for \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\" failed" error="failed to destroy network for sandbox \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.503052 kubelet[3135]: E0813 00:20:22.502730 3135 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Aug 13 00:20:22.503052 kubelet[3135]: E0813 00:20:22.502883 3135 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e"} Aug 13 00:20:22.503052 kubelet[3135]: E0813 00:20:22.502937 3135 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"af68b499-5806-484e-ba8b-a6c001a6ada8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Aug 13 00:20:22.503052 kubelet[3135]: E0813 00:20:22.502999 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"af68b499-5806-484e-ba8b-a6c001a6ada8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-b5fc5d65f-tngz7" podUID="af68b499-5806-484e-ba8b-a6c001a6ada8" Aug 13 00:20:22.503464 containerd[1941]: time="2025-08-13T00:20:22.502849822Z" level=error msg="StopPodSandbox for \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\" failed" error="failed to destroy network for sandbox \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.503549 kubelet[3135]: E0813 00:20:22.503098 3135 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Aug 13 00:20:22.503549 kubelet[3135]: E0813 00:20:22.503146 3135 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b"} Aug 13 00:20:22.503549 kubelet[3135]: E0813 00:20:22.503187 3135 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a2c1391-3856-407e-9f32-3dffc0012695\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:20:22.503549 kubelet[3135]: E0813 00:20:22.503224 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a2c1391-3856-407e-9f32-3dffc0012695\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6mjjr" podUID="3a2c1391-3856-407e-9f32-3dffc0012695" Aug 13 00:20:22.517026 containerd[1941]: time="2025-08-13T00:20:22.516959050Z" level=error msg="StopPodSandbox for \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\" failed" error="failed to destroy network for sandbox \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Aug 13 00:20:22.517783 kubelet[3135]: E0813 00:20:22.517523 3135 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Aug 13 00:20:22.517783 kubelet[3135]: E0813 00:20:22.517594 3135 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68"} Aug 13 00:20:22.517783 kubelet[3135]: E0813 00:20:22.517649 3135 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d21d0cba-f46e-4a88-8bd1-db42d9c6b456\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:20:22.517783 kubelet[3135]: E0813 00:20:22.517694 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d21d0cba-f46e-4a88-8bd1-db42d9c6b456\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65d97cd995-jdgkd" podUID="d21d0cba-f46e-4a88-8bd1-db42d9c6b456" Aug 13 00:20:22.526068 containerd[1941]: time="2025-08-13T00:20:22.525993574Z" level=error msg="StopPodSandbox for \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\" failed" error="failed to destroy network for sandbox \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.526888 kubelet[3135]: E0813 00:20:22.526340 3135 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Aug 13 00:20:22.526888 kubelet[3135]: E0813 00:20:22.526495 3135 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291"} Aug 13 00:20:22.526888 kubelet[3135]: E0813 00:20:22.526556 3135 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"594d665a-07d4-46f9-938a-94309ad04257\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:20:22.526888 kubelet[3135]: E0813 00:20:22.526596 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"594d665a-07d4-46f9-938a-94309ad04257\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6db5fd67fb-ph246" podUID="594d665a-07d4-46f9-938a-94309ad04257" Aug 13 00:20:22.541942 containerd[1941]: time="2025-08-13T00:20:22.541738066Z" level=error msg="StopPodSandbox for \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\" failed" error="failed to destroy network for sandbox \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.543020 kubelet[3135]: E0813 00:20:22.542864 3135 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Aug 13 00:20:22.543020 kubelet[3135]: E0813 00:20:22.542962 3135 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c"} Aug 13 00:20:22.543249 kubelet[3135]: E0813 00:20:22.543025 3135 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0399420f-0a76-4a22-be47-c06978fb8813\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:20:22.543249 kubelet[3135]: E0813 00:20:22.543065 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0399420f-0a76-4a22-be47-c06978fb8813\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-555lv" podUID="0399420f-0a76-4a22-be47-c06978fb8813" Aug 13 00:20:27.901405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1780969303.mount: Deactivated successfully. 
Aug 13 00:20:28.008907 containerd[1941]: time="2025-08-13T00:20:28.008826277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:28.012415 containerd[1941]: time="2025-08-13T00:20:28.011869645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 13 00:20:28.014104 containerd[1941]: time="2025-08-13T00:20:28.014012461Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:28.018262 containerd[1941]: time="2025-08-13T00:20:28.018182017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:28.019606 containerd[1941]: time="2025-08-13T00:20:28.019545913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 6.749413797s" Aug 13 00:20:28.019709 containerd[1941]: time="2025-08-13T00:20:28.019612681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 13 00:20:28.062024 containerd[1941]: time="2025-08-13T00:20:28.061865869Z" level=info msg="CreateContainer within sandbox \"a697d3245b6b15fcaf793b6d93b79010fb1011dbeeb4a1c1c086322f345e637f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:20:28.081603 systemd[1]: Started sshd@7-172.31.18.251:22-139.178.89.65:40150.service - OpenSSH per-connection server daemon (139.178.89.65:40150). Aug 13 00:20:28.110958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount237446307.mount: Deactivated successfully. Aug 13 00:20:28.124802 containerd[1941]: time="2025-08-13T00:20:28.124715102Z" level=info msg="CreateContainer within sandbox \"a697d3245b6b15fcaf793b6d93b79010fb1011dbeeb4a1c1c086322f345e637f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"97da452e3537254ffc296259ff81ad3d15a6453d0a1e8bfc038b4eb111fd0f89\"" Aug 13 00:20:28.129322 containerd[1941]: time="2025-08-13T00:20:28.127618970Z" level=info msg="StartContainer for \"97da452e3537254ffc296259ff81ad3d15a6453d0a1e8bfc038b4eb111fd0f89\"" Aug 13 00:20:28.194688 systemd[1]: Started cri-containerd-97da452e3537254ffc296259ff81ad3d15a6453d0a1e8bfc038b4eb111fd0f89.scope - libcontainer container 97da452e3537254ffc296259ff81ad3d15a6453d0a1e8bfc038b4eb111fd0f89. Aug 13 00:20:28.275777 containerd[1941]: time="2025-08-13T00:20:28.275543102Z" level=info msg="StartContainer for \"97da452e3537254ffc296259ff81ad3d15a6453d0a1e8bfc038b4eb111fd0f89\" returns successfully" Aug 13 00:20:28.291827 sshd[4466]: Accepted publickey for core from 139.178.89.65 port 40150 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:28.295942 sshd[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:28.305054 systemd-logind[1913]: New session 8 of user core. Aug 13 00:20:28.314406 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 13 00:20:28.416038 kubelet[3135]: I0813 00:20:28.415922 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qkxkv" podStartSLOduration=1.945331393 podStartE2EDuration="18.415892103s" podCreationTimestamp="2025-08-13 00:20:10 +0000 UTC" firstStartedPulling="2025-08-13 00:20:11.550934963 +0000 UTC m=+35.861282831" lastFinishedPulling="2025-08-13 00:20:28.021495673 +0000 UTC m=+52.331843541" observedRunningTime="2025-08-13 00:20:28.412622727 +0000 UTC m=+52.722970619" watchObservedRunningTime="2025-08-13 00:20:28.415892103 +0000 UTC m=+52.726239983" Aug 13 00:20:28.704220 sshd[4466]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:28.713961 systemd-logind[1913]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:20:28.718811 systemd[1]: sshd@7-172.31.18.251:22-139.178.89.65:40150.service: Deactivated successfully. Aug 13 00:20:28.728477 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:20:28.733231 systemd-logind[1913]: Removed session 8. Aug 13 00:20:28.774294 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:20:28.774935 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Aug 13 00:20:29.094888 containerd[1941]: time="2025-08-13T00:20:29.094707194Z" level=info msg="StopPodSandbox for \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\"" Aug 13 00:20:29.473851 containerd[1941]: 2025-08-13 00:20:29.289 [INFO][4565] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Aug 13 00:20:29.473851 containerd[1941]: 2025-08-13 00:20:29.292 [INFO][4565] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" iface="eth0" netns="/var/run/netns/cni-4241ac25-58de-d743-b317-6b8af81ad664" Aug 13 00:20:29.473851 containerd[1941]: 2025-08-13 00:20:29.292 [INFO][4565] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" iface="eth0" netns="/var/run/netns/cni-4241ac25-58de-d743-b317-6b8af81ad664" Aug 13 00:20:29.473851 containerd[1941]: 2025-08-13 00:20:29.294 [INFO][4565] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" iface="eth0" netns="/var/run/netns/cni-4241ac25-58de-d743-b317-6b8af81ad664" Aug 13 00:20:29.473851 containerd[1941]: 2025-08-13 00:20:29.294 [INFO][4565] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Aug 13 00:20:29.473851 containerd[1941]: 2025-08-13 00:20:29.294 [INFO][4565] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Aug 13 00:20:29.473851 containerd[1941]: 2025-08-13 00:20:29.438 [INFO][4574] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" HandleID="k8s-pod-network.fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Workload="ip--172--31--18--251-k8s-whisker--b5fc5d65f--tngz7-eth0" Aug 13 00:20:29.473851 containerd[1941]: 2025-08-13 00:20:29.439 [INFO][4574] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 13 00:20:29.473851 containerd[1941]: 2025-08-13 00:20:29.439 [INFO][4574] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:29.473851 containerd[1941]: 2025-08-13 00:20:29.456 [WARNING][4574] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" HandleID="k8s-pod-network.fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Workload="ip--172--31--18--251-k8s-whisker--b5fc5d65f--tngz7-eth0" Aug 13 00:20:29.473851 containerd[1941]: 2025-08-13 00:20:29.457 [INFO][4574] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" HandleID="k8s-pod-network.fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Workload="ip--172--31--18--251-k8s-whisker--b5fc5d65f--tngz7-eth0" Aug 13 00:20:29.473851 containerd[1941]: 2025-08-13 00:20:29.460 [INFO][4574] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:29.473851 containerd[1941]: 2025-08-13 00:20:29.469 [INFO][4565] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Aug 13 00:20:29.479055 containerd[1941]: time="2025-08-13T00:20:29.478981120Z" level=info msg="TearDown network for sandbox \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\" successfully" Aug 13 00:20:29.479055 containerd[1941]: time="2025-08-13T00:20:29.479043928Z" level=info msg="StopPodSandbox for \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\" returns successfully" Aug 13 00:20:29.486074 systemd[1]: run-netns-cni\x2d4241ac25\x2d58de\x2dd743\x2db317\x2d6b8af81ad664.mount: Deactivated successfully. Aug 13 00:20:29.595495 kubelet[3135]: I0813 00:20:29.594191 3135 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv62t\" (UniqueName: \"kubernetes.io/projected/af68b499-5806-484e-ba8b-a6c001a6ada8-kube-api-access-jv62t\") pod \"af68b499-5806-484e-ba8b-a6c001a6ada8\" (UID: \"af68b499-5806-484e-ba8b-a6c001a6ada8\") " Aug 13 00:20:29.595495 kubelet[3135]: I0813 00:20:29.594293 3135 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/af68b499-5806-484e-ba8b-a6c001a6ada8-whisker-backend-key-pair\") pod \"af68b499-5806-484e-ba8b-a6c001a6ada8\" (UID: \"af68b499-5806-484e-ba8b-a6c001a6ada8\") " Aug 13 00:20:29.595495 kubelet[3135]: I0813 00:20:29.594342 3135 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af68b499-5806-484e-ba8b-a6c001a6ada8-whisker-ca-bundle\") pod \"af68b499-5806-484e-ba8b-a6c001a6ada8\" (UID: \"af68b499-5806-484e-ba8b-a6c001a6ada8\") " Aug 13 00:20:29.595495 kubelet[3135]: I0813 00:20:29.595015 3135 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af68b499-5806-484e-ba8b-a6c001a6ada8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "af68b499-5806-484e-ba8b-a6c001a6ada8" (UID: "af68b499-5806-484e-ba8b-a6c001a6ada8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:20:29.606227 systemd[1]: var-lib-kubelet-pods-af68b499\x2d5806\x2d484e\x2dba8b\x2da6c001a6ada8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djv62t.mount: Deactivated successfully. 
Aug 13 00:20:29.612580 systemd[1]: var-lib-kubelet-pods-af68b499\x2d5806\x2d484e\x2dba8b\x2da6c001a6ada8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:20:29.622092 kubelet[3135]: I0813 00:20:29.621985 3135 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af68b499-5806-484e-ba8b-a6c001a6ada8-kube-api-access-jv62t" (OuterVolumeSpecName: "kube-api-access-jv62t") pod "af68b499-5806-484e-ba8b-a6c001a6ada8" (UID: "af68b499-5806-484e-ba8b-a6c001a6ada8"). InnerVolumeSpecName "kube-api-access-jv62t". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:20:29.622481 kubelet[3135]: I0813 00:20:29.622338 3135 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af68b499-5806-484e-ba8b-a6c001a6ada8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "af68b499-5806-484e-ba8b-a6c001a6ada8" (UID: "af68b499-5806-484e-ba8b-a6c001a6ada8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:20:29.695521 kubelet[3135]: I0813 00:20:29.695454 3135 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/af68b499-5806-484e-ba8b-a6c001a6ada8-whisker-backend-key-pair\") on node \"ip-172-31-18-251\" DevicePath \"\"" Aug 13 00:20:29.695521 kubelet[3135]: I0813 00:20:29.695515 3135 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af68b499-5806-484e-ba8b-a6c001a6ada8-whisker-ca-bundle\") on node \"ip-172-31-18-251\" DevicePath \"\"" Aug 13 00:20:29.695795 kubelet[3135]: I0813 00:20:29.695540 3135 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv62t\" (UniqueName: \"kubernetes.io/projected/af68b499-5806-484e-ba8b-a6c001a6ada8-kube-api-access-jv62t\") on node \"ip-172-31-18-251\" DevicePath \"\"" Aug 13 00:20:29.942646 systemd[1]: Removed slice kubepods-besteffort-podaf68b499_5806_484e_ba8b_a6c001a6ada8.slice - libcontainer container kubepods-besteffort-podaf68b499_5806_484e_ba8b_a6c001a6ada8.slice. Aug 13 00:20:30.483883 systemd[1]: Created slice kubepods-besteffort-pod57c0ecf6_405d_4d04_9b6c_f1613eebac7f.slice - libcontainer container kubepods-besteffort-pod57c0ecf6_405d_4d04_9b6c_f1613eebac7f.slice. 
Aug 13 00:20:30.603215 kubelet[3135]: I0813 00:20:30.603100 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98fff\" (UniqueName: \"kubernetes.io/projected/57c0ecf6-405d-4d04-9b6c-f1613eebac7f-kube-api-access-98fff\") pod \"whisker-54b5bbf697-b6xv7\" (UID: \"57c0ecf6-405d-4d04-9b6c-f1613eebac7f\") " pod="calico-system/whisker-54b5bbf697-b6xv7" Aug 13 00:20:30.603215 kubelet[3135]: I0813 00:20:30.603165 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/57c0ecf6-405d-4d04-9b6c-f1613eebac7f-whisker-backend-key-pair\") pod \"whisker-54b5bbf697-b6xv7\" (UID: \"57c0ecf6-405d-4d04-9b6c-f1613eebac7f\") " pod="calico-system/whisker-54b5bbf697-b6xv7" Aug 13 00:20:30.603215 kubelet[3135]: I0813 00:20:30.603213 3135 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57c0ecf6-405d-4d04-9b6c-f1613eebac7f-whisker-ca-bundle\") pod \"whisker-54b5bbf697-b6xv7\" (UID: \"57c0ecf6-405d-4d04-9b6c-f1613eebac7f\") " pod="calico-system/whisker-54b5bbf697-b6xv7" Aug 13 00:20:30.792961 containerd[1941]: time="2025-08-13T00:20:30.792611875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b5bbf697-b6xv7,Uid:57c0ecf6-405d-4d04-9b6c-f1613eebac7f,Namespace:calico-system,Attempt:0,}" Aug 13 00:20:31.220628 (udev-worker)[4543]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:20:31.227672 systemd-networkd[1845]: calid6db0231eb5: Link UP Aug 13 00:20:31.232224 systemd-networkd[1845]: calid6db0231eb5: Gained carrier Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:30.921 [INFO][4663] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:30.995 [INFO][4663] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--251-k8s-whisker--54b5bbf697--b6xv7-eth0 whisker-54b5bbf697- calico-system 57c0ecf6-405d-4d04-9b6c-f1613eebac7f 959 0 2025-08-13 00:20:30 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54b5bbf697 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-251 whisker-54b5bbf697-b6xv7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid6db0231eb5 [] [] <nil>}} ContainerID="bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" Namespace="calico-system" Pod="whisker-54b5bbf697-b6xv7" WorkloadEndpoint="ip--172--31--18--251-k8s-whisker--54b5bbf697--b6xv7-" Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:30.995 [INFO][4663] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" Namespace="calico-system" Pod="whisker-54b5bbf697-b6xv7" WorkloadEndpoint="ip--172--31--18--251-k8s-whisker--54b5bbf697--b6xv7-eth0" Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:31.113 [INFO][4715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" HandleID="k8s-pod-network.bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" Workload="ip--172--31--18--251-k8s-whisker--54b5bbf697--b6xv7-eth0" Aug 13 00:20:31.294148
containerd[1941]: 2025-08-13 00:20:31.118 [INFO][4715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" HandleID="k8s-pod-network.bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" Workload="ip--172--31--18--251-k8s-whisker--54b5bbf697--b6xv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003412d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-251", "pod":"whisker-54b5bbf697-b6xv7", "timestamp":"2025-08-13 00:20:31.113083456 +0000 UTC"}, Hostname:"ip-172-31-18-251", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:31.118 [INFO][4715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:31.118 [INFO][4715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:31.118 [INFO][4715] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-251' Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:31.142 [INFO][4715] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" host="ip-172-31-18-251" Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:31.153 [INFO][4715] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-251" Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:31.163 [INFO][4715] ipam/ipam.go 511: Trying affinity for 192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:31.168 [INFO][4715] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:31.173 [INFO][4715] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:31.173 [INFO][4715] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" host="ip-172-31-18-251" Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:31.177 [INFO][4715] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09 Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:31.185 [INFO][4715] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" host="ip-172-31-18-251" Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:31.199 [INFO][4715] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.32.129/26] block=192.168.32.128/26 handle="k8s-pod-network.bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" host="ip-172-31-18-251" Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:31.199 [INFO][4715] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.129/26] handle="k8s-pod-network.bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" host="ip-172-31-18-251" Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:31.200 [INFO][4715] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:31.294148 containerd[1941]: 2025-08-13 00:20:31.200 [INFO][4715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.129/26] IPv6=[] ContainerID="bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" HandleID="k8s-pod-network.bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" Workload="ip--172--31--18--251-k8s-whisker--54b5bbf697--b6xv7-eth0" Aug 13 00:20:31.297633 containerd[1941]: 2025-08-13 00:20:31.205 [INFO][4663] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" Namespace="calico-system" Pod="whisker-54b5bbf697-b6xv7" WorkloadEndpoint="ip--172--31--18--251-k8s-whisker--54b5bbf697--b6xv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-whisker--54b5bbf697--b6xv7-eth0", GenerateName:"whisker-54b5bbf697-", Namespace:"calico-system", SelfLink:"", UID:"57c0ecf6-405d-4d04-9b6c-f1613eebac7f", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 30, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54b5bbf697", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"", Pod:"whisker-54b5bbf697-b6xv7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.32.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid6db0231eb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:31.297633 containerd[1941]: 2025-08-13 00:20:31.206 [INFO][4663] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.129/32] ContainerID="bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" Namespace="calico-system" Pod="whisker-54b5bbf697-b6xv7" WorkloadEndpoint="ip--172--31--18--251-k8s-whisker--54b5bbf697--b6xv7-eth0" Aug 13 00:20:31.297633 containerd[1941]: 2025-08-13 00:20:31.206 [INFO][4663] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6db0231eb5 ContainerID="bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" Namespace="calico-system" Pod="whisker-54b5bbf697-b6xv7" WorkloadEndpoint="ip--172--31--18--251-k8s-whisker--54b5bbf697--b6xv7-eth0" Aug 13 00:20:31.297633 containerd[1941]: 2025-08-13 00:20:31.235 [INFO][4663] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" Namespace="calico-system" Pod="whisker-54b5bbf697-b6xv7" WorkloadEndpoint="ip--172--31--18--251-k8s-whisker--54b5bbf697--b6xv7-eth0" Aug 13 00:20:31.297633 containerd[1941]: 2025-08-13 00:20:31.238 [INFO][4663] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" Namespace="calico-system"
Pod="whisker-54b5bbf697-b6xv7" WorkloadEndpoint="ip--172--31--18--251-k8s-whisker--54b5bbf697--b6xv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-whisker--54b5bbf697--b6xv7-eth0", GenerateName:"whisker-54b5bbf697-", Namespace:"calico-system", SelfLink:"", UID:"57c0ecf6-405d-4d04-9b6c-f1613eebac7f", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54b5bbf697", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09", Pod:"whisker-54b5bbf697-b6xv7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.32.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid6db0231eb5", MAC:"12:3e:14:00:4e:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:31.297633 containerd[1941]: 2025-08-13 00:20:31.283 [INFO][4663] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09" Namespace="calico-system" Pod="whisker-54b5bbf697-b6xv7" WorkloadEndpoint="ip--172--31--18--251-k8s-whisker--54b5bbf697--b6xv7-eth0" Aug 13 00:20:31.371434 containerd[1941]: time="2025-08-13T00:20:31.370645002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:31.371434 containerd[1941]: time="2025-08-13T00:20:31.370916886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:31.371434 containerd[1941]: time="2025-08-13T00:20:31.370997994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:31.371434 containerd[1941]: time="2025-08-13T00:20:31.371243598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:31.451383 systemd[1]: Started cri-containerd-bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09.scope - libcontainer container bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09. 
Aug 13 00:20:31.647175 containerd[1941]: time="2025-08-13T00:20:31.647105671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54b5bbf697-b6xv7,Uid:57c0ecf6-405d-4d04-9b6c-f1613eebac7f,Namespace:calico-system,Attempt:0,} returns sandbox id \"bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09\"" Aug 13 00:20:31.653290 containerd[1941]: time="2025-08-13T00:20:31.652904395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:20:31.936782 kubelet[3135]: I0813 00:20:31.936178 3135 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af68b499-5806-484e-ba8b-a6c001a6ada8" path="/var/lib/kubelet/pods/af68b499-5806-484e-ba8b-a6c001a6ada8/volumes" Aug 13 00:20:32.041004 kernel: bpftool[4811]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 00:20:32.392898 systemd-networkd[1845]: vxlan.calico: Link UP Aug 13 00:20:32.392914 systemd-networkd[1845]: vxlan.calico: Gained carrier Aug 13 00:20:32.445911 (udev-worker)[4545]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:20:32.811472 systemd-networkd[1845]: calid6db0231eb5: Gained IPv6LL Aug 13 00:20:33.066899 containerd[1941]: time="2025-08-13T00:20:33.066191094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:33.071214 containerd[1941]: time="2025-08-13T00:20:33.071137962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 13 00:20:33.075223 containerd[1941]: time="2025-08-13T00:20:33.074969886Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:33.080213 containerd[1941]: time="2025-08-13T00:20:33.080103222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:33.084606 containerd[1941]: time="2025-08-13T00:20:33.083220750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.430249347s" Aug 13 00:20:33.084606 containerd[1941]: time="2025-08-13T00:20:33.083304738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 13 00:20:33.098867 containerd[1941]: time="2025-08-13T00:20:33.098799522Z" level=info msg="CreateContainer within sandbox \"bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:20:33.126316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount313685460.mount: Deactivated successfully. 
Aug 13 00:20:33.132150 containerd[1941]: time="2025-08-13T00:20:33.132072030Z" level=info msg="CreateContainer within sandbox \"bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1134fce1f09e6d554c3b1ae786452b5f6f68a03930e6c7efbe479608fa785404\"" Aug 13 00:20:33.134083 containerd[1941]: time="2025-08-13T00:20:33.133994886Z" level=info msg="StartContainer for \"1134fce1f09e6d554c3b1ae786452b5f6f68a03930e6c7efbe479608fa785404\"" Aug 13 00:20:33.194061 systemd[1]: Started cri-containerd-1134fce1f09e6d554c3b1ae786452b5f6f68a03930e6c7efbe479608fa785404.scope - libcontainer container 1134fce1f09e6d554c3b1ae786452b5f6f68a03930e6c7efbe479608fa785404. Aug 13 00:20:33.263666 containerd[1941]: time="2025-08-13T00:20:33.263591611Z" level=info msg="StartContainer for \"1134fce1f09e6d554c3b1ae786452b5f6f68a03930e6c7efbe479608fa785404\" returns successfully" Aug 13 00:20:33.268174 containerd[1941]: time="2025-08-13T00:20:33.268118527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:20:33.743607 systemd[1]: Started sshd@8-172.31.18.251:22-139.178.89.65:34096.service - OpenSSH per-connection server daemon (139.178.89.65:34096). Aug 13 00:20:33.933107 containerd[1941]: time="2025-08-13T00:20:33.931389982Z" level=info msg="StopPodSandbox for \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\"" Aug 13 00:20:33.933107 containerd[1941]: time="2025-08-13T00:20:33.932173630Z" level=info msg="StopPodSandbox for \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\"" Aug 13 00:20:33.937636 sshd[4922]: Accepted publickey for core from 139.178.89.65 port 34096 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:33.939583 containerd[1941]: time="2025-08-13T00:20:33.938169778Z" level=info msg="StopPodSandbox for \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\"" Aug 13 00:20:33.941317 containerd[1941]: time="2025-08-13T00:20:33.940495150Z" level=info msg="StopPodSandbox for \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\"" Aug 13 00:20:33.947494 sshd[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:33.971288 systemd-logind[1913]: New session 9 of user core. Aug 13 00:20:33.978470 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:20:34.155490 systemd-networkd[1845]: vxlan.calico: Gained IPv6LL Aug 13 00:20:34.414090 sshd[4922]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:34.429782 systemd[1]: sshd@8-172.31.18.251:22-139.178.89.65:34096.service: Deactivated successfully. Aug 13 00:20:34.440619 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:20:34.453582 systemd-logind[1913]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:20:34.464327 systemd-logind[1913]: Removed session 9. Aug 13 00:20:34.667407 containerd[1941]: 2025-08-13 00:20:34.374 [INFO][4959] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Aug 13 00:20:34.667407 containerd[1941]: 2025-08-13 00:20:34.375 [INFO][4959] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" iface="eth0" netns="/var/run/netns/cni-9a5bd4c3-7ce3-5c58-da29-97b8c22cba4c" Aug 13 00:20:34.667407 containerd[1941]: 2025-08-13 00:20:34.376 [INFO][4959] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" iface="eth0" netns="/var/run/netns/cni-9a5bd4c3-7ce3-5c58-da29-97b8c22cba4c" Aug 13 00:20:34.667407 containerd[1941]: 2025-08-13 00:20:34.385 [INFO][4959] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" iface="eth0" netns="/var/run/netns/cni-9a5bd4c3-7ce3-5c58-da29-97b8c22cba4c" Aug 13 00:20:34.667407 containerd[1941]: 2025-08-13 00:20:34.385 [INFO][4959] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Aug 13 00:20:34.667407 containerd[1941]: 2025-08-13 00:20:34.385 [INFO][4959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Aug 13 00:20:34.667407 containerd[1941]: 2025-08-13 00:20:34.610 [INFO][5000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" HandleID="k8s-pod-network.8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" Aug 13 00:20:34.667407 containerd[1941]: 2025-08-13 00:20:34.611 [INFO][5000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:34.667407 containerd[1941]: 2025-08-13 00:20:34.611 [INFO][5000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:34.667407 containerd[1941]: 2025-08-13 00:20:34.635 [WARNING][5000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" HandleID="k8s-pod-network.8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" Aug 13 00:20:34.667407 containerd[1941]: 2025-08-13 00:20:34.635 [INFO][5000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" HandleID="k8s-pod-network.8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" Aug 13 00:20:34.667407 containerd[1941]: 2025-08-13 00:20:34.641 [INFO][5000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:34.667407 containerd[1941]: 2025-08-13 00:20:34.656 [INFO][4959] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Aug 13 00:20:34.673853 containerd[1941]: time="2025-08-13T00:20:34.673370446Z" level=info msg="TearDown network for sandbox \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\" successfully" Aug 13 00:20:34.673853 containerd[1941]: time="2025-08-13T00:20:34.673425574Z" level=info msg="StopPodSandbox for \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\" returns successfully" Aug 13 00:20:34.680488 systemd[1]: run-netns-cni\x2d9a5bd4c3\x2d7ce3\x2d5c58\x2dda29\x2d97b8c22cba4c.mount: Deactivated successfully. 
Aug 13 00:20:34.710905 containerd[1941]: time="2025-08-13T00:20:34.710835010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p2lsx,Uid:f1dfa461-4b5f-4ed8-a850-cf604830db07,Namespace:kube-system,Attempt:1,}" Aug 13 00:20:34.731111 containerd[1941]: 2025-08-13 00:20:34.461 [INFO][4969] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Aug 13 00:20:34.731111 containerd[1941]: 2025-08-13 00:20:34.482 [INFO][4969] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" iface="eth0" netns="/var/run/netns/cni-c7e187e2-0b62-3741-e584-364bb2ad2e84" Aug 13 00:20:34.731111 containerd[1941]: 2025-08-13 00:20:34.482 [INFO][4969] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" iface="eth0" netns="/var/run/netns/cni-c7e187e2-0b62-3741-e584-364bb2ad2e84" Aug 13 00:20:34.731111 containerd[1941]: 2025-08-13 00:20:34.483 [INFO][4969] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" iface="eth0" netns="/var/run/netns/cni-c7e187e2-0b62-3741-e584-364bb2ad2e84" Aug 13 00:20:34.731111 containerd[1941]: 2025-08-13 00:20:34.483 [INFO][4969] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Aug 13 00:20:34.731111 containerd[1941]: 2025-08-13 00:20:34.483 [INFO][4969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Aug 13 00:20:34.731111 containerd[1941]: 2025-08-13 00:20:34.623 [INFO][5013] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" HandleID="k8s-pod-network.a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Workload="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" Aug 13 00:20:34.731111 containerd[1941]: 2025-08-13 00:20:34.626 [INFO][5013] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:34.731111 containerd[1941]: 2025-08-13 00:20:34.641 [INFO][5013] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:34.731111 containerd[1941]: 2025-08-13 00:20:34.692 [WARNING][5013] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" HandleID="k8s-pod-network.a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Workload="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" Aug 13 00:20:34.731111 containerd[1941]: 2025-08-13 00:20:34.694 [INFO][5013] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" HandleID="k8s-pod-network.a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Workload="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" Aug 13 00:20:34.731111 containerd[1941]: 2025-08-13 00:20:34.705 [INFO][5013] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:34.731111 containerd[1941]: 2025-08-13 00:20:34.721 [INFO][4969] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Aug 13 00:20:34.734364 containerd[1941]: time="2025-08-13T00:20:34.734135638Z" level=info msg="TearDown network for sandbox \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\" successfully" Aug 13 00:20:34.734364 containerd[1941]: time="2025-08-13T00:20:34.734191402Z" level=info msg="StopPodSandbox for \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\" returns successfully" Aug 13 00:20:34.738086 containerd[1941]: time="2025-08-13T00:20:34.736309978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db5fd67fb-ph246,Uid:594d665a-07d4-46f9-938a-94309ad04257,Namespace:calico-system,Attempt:1,}" Aug 13 00:20:34.742664 systemd[1]: run-netns-cni\x2dc7e187e2\x2d0b62\x2d3741\x2de584\x2d364bb2ad2e84.mount: Deactivated successfully. Aug 13 00:20:34.765851 containerd[1941]: 2025-08-13 00:20:34.359 [INFO][4967] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Aug 13 00:20:34.765851 containerd[1941]: 2025-08-13 00:20:34.361 [INFO][4967] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" iface="eth0" netns="/var/run/netns/cni-6d1a9725-78e0-d1aa-5c6a-78e55ba53287" Aug 13 00:20:34.765851 containerd[1941]: 2025-08-13 00:20:34.362 [INFO][4967] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" iface="eth0" netns="/var/run/netns/cni-6d1a9725-78e0-d1aa-5c6a-78e55ba53287" Aug 13 00:20:34.765851 containerd[1941]: 2025-08-13 00:20:34.367 [INFO][4967] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" iface="eth0" netns="/var/run/netns/cni-6d1a9725-78e0-d1aa-5c6a-78e55ba53287" Aug 13 00:20:34.765851 containerd[1941]: 2025-08-13 00:20:34.367 [INFO][4967] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Aug 13 00:20:34.765851 containerd[1941]: 2025-08-13 00:20:34.367 [INFO][4967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Aug 13 00:20:34.765851 containerd[1941]: 2025-08-13 00:20:34.662 [INFO][4998] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" HandleID="k8s-pod-network.9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Workload="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" Aug 13 00:20:34.765851 containerd[1941]: 2025-08-13 00:20:34.662 [INFO][4998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:34.765851 containerd[1941]: 2025-08-13 00:20:34.707 [INFO][4998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:34.765851 containerd[1941]: 2025-08-13 00:20:34.739 [WARNING][4998] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" HandleID="k8s-pod-network.9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Workload="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" Aug 13 00:20:34.765851 containerd[1941]: 2025-08-13 00:20:34.739 [INFO][4998] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" HandleID="k8s-pod-network.9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Workload="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" Aug 13 00:20:34.765851 containerd[1941]: 2025-08-13 00:20:34.746 [INFO][4998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:34.765851 containerd[1941]: 2025-08-13 00:20:34.753 [INFO][4967] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Aug 13 00:20:34.771524 containerd[1941]: time="2025-08-13T00:20:34.770314367Z" level=info msg="TearDown network for sandbox \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\" successfully" Aug 13 00:20:34.771524 containerd[1941]: time="2025-08-13T00:20:34.770365091Z" level=info msg="StopPodSandbox for \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\" returns successfully" Aug 13 00:20:34.775316 containerd[1941]: time="2025-08-13T00:20:34.774221795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzmtn,Uid:e5f0f3a3-68e2-4f84-92cb-c460ed58604c,Namespace:calico-system,Attempt:1,}" Aug 13 00:20:34.791453 systemd[1]: run-netns-cni\x2d6d1a9725\x2d78e0\x2dd1aa\x2d5c6a\x2d78e55ba53287.mount: Deactivated successfully. Aug 13 00:20:34.817312 containerd[1941]: 2025-08-13 00:20:34.461 [INFO][4968] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Aug 13 00:20:34.817312 containerd[1941]: 2025-08-13 00:20:34.462 [INFO][4968] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" iface="eth0" netns="/var/run/netns/cni-3a848575-e842-2197-6df8-a360e2970102" Aug 13 00:20:34.817312 containerd[1941]: 2025-08-13 00:20:34.470 [INFO][4968] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" iface="eth0" netns="/var/run/netns/cni-3a848575-e842-2197-6df8-a360e2970102" Aug 13 00:20:34.817312 containerd[1941]: 2025-08-13 00:20:34.482 [INFO][4968] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" iface="eth0" netns="/var/run/netns/cni-3a848575-e842-2197-6df8-a360e2970102" Aug 13 00:20:34.817312 containerd[1941]: 2025-08-13 00:20:34.482 [INFO][4968] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Aug 13 00:20:34.817312 containerd[1941]: 2025-08-13 00:20:34.482 [INFO][4968] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Aug 13 00:20:34.817312 containerd[1941]: 2025-08-13 00:20:34.730 [INFO][5011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" HandleID="k8s-pod-network.58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" Aug 13 00:20:34.817312 containerd[1941]: 2025-08-13 00:20:34.732 [INFO][5011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:34.817312 containerd[1941]: 2025-08-13 00:20:34.746 [INFO][5011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:34.817312 containerd[1941]: 2025-08-13 00:20:34.797 [WARNING][5011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" HandleID="k8s-pod-network.58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" Aug 13 00:20:34.817312 containerd[1941]: 2025-08-13 00:20:34.799 [INFO][5011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" HandleID="k8s-pod-network.58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" Aug 13 00:20:34.817312 containerd[1941]: 2025-08-13 00:20:34.807 [INFO][5011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:34.817312 containerd[1941]: 2025-08-13 00:20:34.812 [INFO][4968] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Aug 13 00:20:34.822808 containerd[1941]: time="2025-08-13T00:20:34.821300483Z" level=info msg="TearDown network for sandbox \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\" successfully" Aug 13 00:20:34.822808 containerd[1941]: time="2025-08-13T00:20:34.821353127Z" level=info msg="StopPodSandbox for \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\" returns successfully" Aug 13 00:20:34.828598 containerd[1941]: time="2025-08-13T00:20:34.825902063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6mjjr,Uid:3a2c1391-3856-407e-9f32-3dffc0012695,Namespace:kube-system,Attempt:1,}" Aug 13 00:20:34.828254 systemd[1]: run-netns-cni\x2d3a848575\x2de842\x2d2197\x2d6df8\x2da360e2970102.mount: Deactivated successfully. Aug 13 00:20:34.935200 containerd[1941]: time="2025-08-13T00:20:34.934835087Z" level=info msg="StopPodSandbox for \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\"" Aug 13 00:20:35.599336 (udev-worker)[4843]: Network interface NamePolicy= disabled on kernel command line. 
Aug 13 00:20:35.600974 systemd-networkd[1845]: cali70d1429311e: Link UP Aug 13 00:20:35.610744 systemd-networkd[1845]: cali70d1429311e: Gained carrier Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.111 [INFO][5039] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0 calico-kube-controllers-6db5fd67fb- calico-system 594d665a-07d4-46f9-938a-94309ad04257 998 0 2025-08-13 00:20:11 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6db5fd67fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-251 calico-kube-controllers-6db5fd67fb-ph246 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali70d1429311e [] [] <nil>}} ContainerID="08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" Namespace="calico-system" Pod="calico-kube-controllers-6db5fd67fb-ph246" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-" Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.113 [INFO][5039] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" Namespace="calico-system" Pod="calico-kube-controllers-6db5fd67fb-ph246" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.383 [INFO][5094] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" HandleID="k8s-pod-network.08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" Workload="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.385 [INFO][5094] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" HandleID="k8s-pod-network.08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" Workload="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121930), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-251", "pod":"calico-kube-controllers-6db5fd67fb-ph246", "timestamp":"2025-08-13 00:20:35.383254198 +0000 UTC"}, Hostname:"ip-172-31-18-251", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.385 [INFO][5094] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.385 [INFO][5094] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.386 [INFO][5094] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-251' Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.441 [INFO][5094] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" host="ip-172-31-18-251" Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.475 [INFO][5094] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-251" Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.497 [INFO][5094] ipam/ipam.go 511: Trying affinity for 192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.503 [INFO][5094] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.516 [INFO][5094] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.517 [INFO][5094] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" host="ip-172-31-18-251" Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.524 [INFO][5094] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606 Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.543 [INFO][5094] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" host="ip-172-31-18-251" Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.572 [INFO][5094] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.32.130/26] block=192.168.32.128/26 handle="k8s-pod-network.08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" host="ip-172-31-18-251" Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.572 [INFO][5094] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.130/26] handle="k8s-pod-network.08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" host="ip-172-31-18-251" Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.572 [INFO][5094] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:20:35.657956 containerd[1941]: 2025-08-13 00:20:35.572 [INFO][5094] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.130/26] IPv6=[] ContainerID="08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" HandleID="k8s-pod-network.08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" Workload="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" Aug 13 00:20:35.661485 containerd[1941]: 2025-08-13 00:20:35.583 [INFO][5039] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" Namespace="calico-system" Pod="calico-kube-controllers-6db5fd67fb-ph246" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0", GenerateName:"calico-kube-controllers-6db5fd67fb-", Namespace:"calico-system", SelfLink:"", UID:"594d665a-07d4-46f9-938a-94309ad04257", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6db5fd67fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"", Pod:"calico-kube-controllers-6db5fd67fb-ph246", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali70d1429311e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:35.661485 containerd[1941]: 2025-08-13 00:20:35.583 [INFO][5039] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.130/32] ContainerID="08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" Namespace="calico-system" Pod="calico-kube-controllers-6db5fd67fb-ph246" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" Aug 13 00:20:35.661485 containerd[1941]: 2025-08-13 00:20:35.583 [INFO][5039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70d1429311e ContainerID="08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" Namespace="calico-system" Pod="calico-kube-controllers-6db5fd67fb-ph246" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" Aug 13 00:20:35.661485 containerd[1941]: 2025-08-13 00:20:35.614 [INFO][5039] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" Namespace="calico-system" Pod="calico-kube-controllers-6db5fd67fb-ph246" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" Aug 13 00:20:35.661485 containerd[1941]:
2025-08-13 00:20:35.615 [INFO][5039] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" Namespace="calico-system" Pod="calico-kube-controllers-6db5fd67fb-ph246" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0", GenerateName:"calico-kube-controllers-6db5fd67fb-", Namespace:"calico-system", SelfLink:"", UID:"594d665a-07d4-46f9-938a-94309ad04257", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6db5fd67fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606", Pod:"calico-kube-controllers-6db5fd67fb-ph246", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali70d1429311e", MAC:"aa:c6:84:cf:1d:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:35.661485 containerd[1941]: 2025-08-13 00:20:35.651 [INFO][5039] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606" Namespace="calico-system" Pod="calico-kube-controllers-6db5fd67fb-ph246" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" Aug 13 00:20:35.817333 systemd-networkd[1845]: cali278814795cb: Link UP Aug 13 00:20:35.829003 systemd-networkd[1845]: cali278814795cb: Gained carrier Aug 13 00:20:35.887463 containerd[1941]: time="2025-08-13T00:20:35.887301360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:35.888587 containerd[1941]: time="2025-08-13T00:20:35.887879496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:35.889549 containerd[1941]: time="2025-08-13T00:20:35.888968940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:35.891377 containerd[1941]: time="2025-08-13T00:20:35.890994612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:35.954886 containerd[1941]: time="2025-08-13T00:20:35.953242416Z" level=info msg="StopPodSandbox for \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\"" Aug 13 00:20:35.987396 containerd[1941]: time="2025-08-13T00:20:35.986235361Z" level=info msg="StopPodSandbox for \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\"" Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.066 [INFO][5034] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0 coredns-7c65d6cfc9- kube-system f1dfa461-4b5f-4ed8-a850-cf604830db07 996 0 2025-08-13 00:19:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-251 coredns-7c65d6cfc9-p2lsx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali278814795cb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p2lsx" WorkloadEndpoint="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-" Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.066 [INFO][5034] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p2lsx" WorkloadEndpoint="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.438 [INFO][5088] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" HandleID="k8s-pod-network.f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.439 [INFO][5088] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" HandleID="k8s-pod-network.f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000311a30), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-251", "pod":"coredns-7c65d6cfc9-p2lsx", "timestamp":"2025-08-13 00:20:35.43812127 +0000 UTC"}, Hostname:"ip-172-31-18-251", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.446 [INFO][5088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.572 [INFO][5088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
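Annotation: every allocation in this trace is bracketed by "About to acquire host-wide IPAM lock" / "Acquired" / "Released" entries; the plugin serializes address assignment per node so that the concurrent CNI ADDs racing here (requests [5088], [5103], and [5108]) cannot hand out the same IP. The entries below make the queueing visible: [5103] asks for the lock at 00:20:35.482 but only acquires it at 00:20:35.769, the instant [5088] releases it. A minimal sketch of that serialization pattern follows, using an in-process mutex as a stand-in; the scope and mechanism of Calico's real lock are an assumption here, not its implementation.

package main

import (
	"fmt"
	"sync"
)

// hostIPAMLock serializes allocations the way the log's
// "About to acquire / Acquired / Released host-wide IPAM lock"
// entries do. An in-process stand-in for illustration only.
var hostIPAMLock sync.Mutex

func assignWithLock(handleID string, assign func() (string, error)) (string, error) {
	fmt.Printf("[INFO] About to acquire host-wide IPAM lock. handle=%s\n", handleID)
	hostIPAMLock.Lock()
	fmt.Println("[INFO] Acquired host-wide IPAM lock.")
	defer func() {
		hostIPAMLock.Unlock()
		fmt.Println("[INFO] Released host-wide IPAM lock.")
	}()
	return assign() // runs while the lock is held
}

func main() {
	// Handle ID truncated; illustrative only.
	ip, _ := assignWithLock("k8s-pod-network.f3d4fc3d...", func() (string, error) {
		return "192.168.32.131/26", nil
	})
	fmt.Println("assigned", ip)
}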
Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.572 [INFO][5088] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-251' Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.628 [INFO][5088] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" host="ip-172-31-18-251" Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.659 [INFO][5088] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-251" Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.692 [INFO][5088] ipam/ipam.go 511: Trying affinity for 192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.709 [INFO][5088] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.723 [INFO][5088] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.723 [INFO][5088] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" host="ip-172-31-18-251" Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.736 [INFO][5088] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.750 [INFO][5088] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" host="ip-172-31-18-251" Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.769 [INFO][5088] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.32.131/26] block=192.168.32.128/26 handle="k8s-pod-network.f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" host="ip-172-31-18-251" Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.769 [INFO][5088] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.131/26] handle="k8s-pod-network.f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" host="ip-172-31-18-251" Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.769 [INFO][5088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
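Annotation: the sequence above is the core of block-based IPAM: confirm the host's affinity to 192.168.32.128/26, load the block, then claim the lowest free address (the assignments in this log come out sequentially: .130, .131, .132, and so on). A toy allocator over that block, standard library only; the map-based bookkeeping and the pre-claimed first two addresses are assumptions for illustration, not Calico's block document format.

package main

import (
	"errors"
	"fmt"
	"net/netip"
)

// block is a toy stand-in for an IPAM block like 192.168.32.128/26:
// a CIDR plus a record of which addresses are taken and by whom.
// Calico's real block also tracks attributes and an allocation array;
// that detail is omitted here.
type block struct {
	cidr netip.Prefix
	used map[netip.Addr]string // addr -> handle that claimed it
}

func newBlock(cidr string) *block {
	return &block{cidr: netip.MustParsePrefix(cidr), used: map[netip.Addr]string{}}
}

// claim hands out the lowest free address in the block, mirroring the
// log's "Attempting to assign 1 addresses from block" step.
func (b *block) claim(handle string) (netip.Addr, error) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle
			return a, nil
		}
	}
	return netip.Addr{}, errors.New("block exhausted")
}

func main() {
	b := newBlock("192.168.32.128/26")
	// Pretend .128 and .129 were claimed before this log excerpt
	// (an assumption; e.g. by the node itself or earlier workloads).
	b.claim("reserved-0")
	b.claim("reserved-1")
	ip, _ := b.claim("k8s-pod-network.08f75636...") // truncated handle
	fmt.Println(ip)                                 // 192.168.32.130, as in the log
}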
Aug 13 00:20:36.001634 containerd[1941]: 2025-08-13 00:20:35.778 [INFO][5088] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.131/26] IPv6=[] ContainerID="f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" HandleID="k8s-pod-network.f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" Aug 13 00:20:36.004749 containerd[1941]: 2025-08-13 00:20:35.792 [INFO][5034] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p2lsx" WorkloadEndpoint="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f1dfa461-4b5f-4ed8-a850-cf604830db07", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"", Pod:"coredns-7c65d6cfc9-p2lsx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali278814795cb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:36.004749 containerd[1941]: 2025-08-13 00:20:35.793 [INFO][5034] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.131/32] ContainerID="f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p2lsx" WorkloadEndpoint="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" Aug 13 00:20:36.004749 containerd[1941]: 2025-08-13 00:20:35.793 [INFO][5034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali278814795cb ContainerID="f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p2lsx" WorkloadEndpoint="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" Aug 13 00:20:36.004749 containerd[1941]: 2025-08-13 00:20:35.825 [INFO][5034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p2lsx" 
WorkloadEndpoint="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" Aug 13 00:20:36.004749 containerd[1941]: 2025-08-13 00:20:35.850 [INFO][5034] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p2lsx" WorkloadEndpoint="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f1dfa461-4b5f-4ed8-a850-cf604830db07", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff", Pod:"coredns-7c65d6cfc9-p2lsx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali278814795cb", MAC:"6a:f4:bc:b3:f5:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:36.004749 containerd[1941]: 2025-08-13 00:20:35.910 [INFO][5034] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff" Namespace="kube-system" Pod="coredns-7c65d6cfc9-p2lsx" WorkloadEndpoint="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" Aug 13 00:20:36.037471 systemd[1]: Started cri-containerd-08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606.scope - libcontainer container 08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606. 
Aug 13 00:20:36.093338 systemd-networkd[1845]: cali0bed2aee68a: Link UP Aug 13 00:20:36.113368 systemd-networkd[1845]: cali0bed2aee68a: Gained carrier Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:35.146 [INFO][5046] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0 csi-node-driver- calico-system e5f0f3a3-68e2-4f84-92cb-c460ed58604c 995 0 2025-08-13 00:20:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-251 csi-node-driver-wzmtn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0bed2aee68a [] [] }} ContainerID="4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" Namespace="calico-system" Pod="csi-node-driver-wzmtn" WorkloadEndpoint="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-" Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:35.149 [INFO][5046] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" Namespace="calico-system" Pod="csi-node-driver-wzmtn" WorkloadEndpoint="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:35.480 [INFO][5103] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" HandleID="k8s-pod-network.4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" Workload="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:35.482 [INFO][5103] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" HandleID="k8s-pod-network.4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" Workload="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cf6e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-251", "pod":"csi-node-driver-wzmtn", "timestamp":"2025-08-13 00:20:35.480534562 +0000 UTC"}, Hostname:"ip-172-31-18-251", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:35.482 [INFO][5103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:35.769 [INFO][5103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:35.770 [INFO][5103] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-251' Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:35.835 [INFO][5103] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" host="ip-172-31-18-251" Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:35.871 [INFO][5103] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-251" Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:35.917 [INFO][5103] ipam/ipam.go 511: Trying affinity for 192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:35.933 [INFO][5103] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:35.955 [INFO][5103] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:35.955 [INFO][5103] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" host="ip-172-31-18-251" Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:35.965 [INFO][5103] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:35.981 [INFO][5103] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" host="ip-172-31-18-251" Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:36.017 [INFO][5103] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.32.132/26] block=192.168.32.128/26 handle="k8s-pod-network.4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" host="ip-172-31-18-251" Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:36.017 [INFO][5103] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.132/26] handle="k8s-pod-network.4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" host="ip-172-31-18-251" Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:36.017 [INFO][5103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
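Annotation: buried in these containerd-wrapped entries, the facts that usually matter during debugging are small: which ContainerID received which address. When grepping a node's journal for Calico assignments, a short extractor for the "IPAM assigned addresses" lines is handy; the regular expression below is tailored to the exact format visible in this log.

package main

import (
	"fmt"
	"regexp"
)

// assignRE matches the "Calico CNI IPAM assigned addresses" entries in
// this journal, capturing the IPv4 list and the container ID.
var assignRE = regexp.MustCompile(
	`IPAM assigned addresses IPv4=\[([^\]]*)\] IPv6=\[[^\]]*\] ContainerID="([0-9a-f]+)"`)

func main() {
	// One line lifted from this log (inner containerd payload only).
	line := `[INFO][5103] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses ` +
		`IPv4=[192.168.32.132/26] IPv6=[] ContainerID="4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad"`
	if m := assignRE.FindStringSubmatch(line); m != nil {
		fmt.Printf("container %s... got %s\n", m[2][:12], m[1])
	}
}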
Aug 13 00:20:36.236249 containerd[1941]: 2025-08-13 00:20:36.017 [INFO][5103] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.132/26] IPv6=[] ContainerID="4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" HandleID="k8s-pod-network.4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" Workload="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" Aug 13 00:20:36.238432 containerd[1941]: 2025-08-13 00:20:36.060 [INFO][5046] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" Namespace="calico-system" Pod="csi-node-driver-wzmtn" WorkloadEndpoint="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5f0f3a3-68e2-4f84-92cb-c460ed58604c", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"", Pod:"csi-node-driver-wzmtn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0bed2aee68a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:36.238432 containerd[1941]: 2025-08-13 00:20:36.063 [INFO][5046] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.132/32] ContainerID="4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" Namespace="calico-system" Pod="csi-node-driver-wzmtn" WorkloadEndpoint="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" Aug 13 00:20:36.238432 containerd[1941]: 2025-08-13 00:20:36.064 [INFO][5046] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0bed2aee68a ContainerID="4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" Namespace="calico-system" Pod="csi-node-driver-wzmtn" WorkloadEndpoint="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" Aug 13 00:20:36.238432 containerd[1941]: 2025-08-13 00:20:36.125 [INFO][5046] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" Namespace="calico-system" Pod="csi-node-driver-wzmtn" WorkloadEndpoint="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" Aug 13 00:20:36.238432 containerd[1941]: 2025-08-13 00:20:36.133 [INFO][5046] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" 
Namespace="calico-system" Pod="csi-node-driver-wzmtn" WorkloadEndpoint="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5f0f3a3-68e2-4f84-92cb-c460ed58604c", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad", Pod:"csi-node-driver-wzmtn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0bed2aee68a", MAC:"12:b2:f4:dd:8b:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:36.238432 containerd[1941]: 2025-08-13 00:20:36.209 [INFO][5046] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad" Namespace="calico-system" Pod="csi-node-driver-wzmtn" WorkloadEndpoint="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" Aug 13 00:20:36.301205 containerd[1941]: time="2025-08-13T00:20:36.297522646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:36.301205 containerd[1941]: time="2025-08-13T00:20:36.297628798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:36.301205 containerd[1941]: time="2025-08-13T00:20:36.297656422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:36.301205 containerd[1941]: time="2025-08-13T00:20:36.297846838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:36.321565 systemd-networkd[1845]: cali5e4ec14e916: Link UP Aug 13 00:20:36.333031 systemd-networkd[1845]: cali5e4ec14e916: Gained carrier Aug 13 00:20:36.373459 containerd[1941]: 2025-08-13 00:20:35.249 [INFO][5076] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Aug 13 00:20:36.373459 containerd[1941]: 2025-08-13 00:20:35.254 [INFO][5076] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" iface="eth0" netns="/var/run/netns/cni-8d007021-1ee9-6fc6-f4a6-72f88677039b" Aug 13 00:20:36.373459 containerd[1941]: 2025-08-13 00:20:35.262 [INFO][5076] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" iface="eth0" netns="/var/run/netns/cni-8d007021-1ee9-6fc6-f4a6-72f88677039b" Aug 13 00:20:36.373459 containerd[1941]: 2025-08-13 00:20:35.286 [INFO][5076] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" iface="eth0" netns="/var/run/netns/cni-8d007021-1ee9-6fc6-f4a6-72f88677039b" Aug 13 00:20:36.373459 containerd[1941]: 2025-08-13 00:20:35.286 [INFO][5076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Aug 13 00:20:36.373459 containerd[1941]: 2025-08-13 00:20:35.287 [INFO][5076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Aug 13 00:20:36.373459 containerd[1941]: 2025-08-13 00:20:35.563 [INFO][5114] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" HandleID="k8s-pod-network.ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Workload="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" Aug 13 00:20:36.373459 containerd[1941]: 2025-08-13 00:20:35.565 [INFO][5114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:36.373459 containerd[1941]: 2025-08-13 00:20:36.229 [INFO][5114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:36.373459 containerd[1941]: 2025-08-13 00:20:36.283 [WARNING][5114] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" HandleID="k8s-pod-network.ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Workload="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" Aug 13 00:20:36.373459 containerd[1941]: 2025-08-13 00:20:36.283 [INFO][5114] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" HandleID="k8s-pod-network.ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Workload="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" Aug 13 00:20:36.373459 containerd[1941]: 2025-08-13 00:20:36.289 [INFO][5114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:36.373459 containerd[1941]: 2025-08-13 00:20:36.335 [INFO][5076] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Aug 13 00:20:36.383040 containerd[1941]: time="2025-08-13T00:20:36.382359551Z" level=info msg="TearDown network for sandbox \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\" successfully" Aug 13 00:20:36.383040 containerd[1941]: time="2025-08-13T00:20:36.382409351Z" level=info msg="StopPodSandbox for \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\" returns successfully" Aug 13 00:20:36.398007 containerd[1941]: time="2025-08-13T00:20:36.397803671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-555lv,Uid:0399420f-0a76-4a22-be47-c06978fb8813,Namespace:calico-system,Attempt:1,}" Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:35.220 [INFO][5057] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0 coredns-7c65d6cfc9- kube-system 3a2c1391-3856-407e-9f32-3dffc0012695 997 0 2025-08-13 00:19:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-251 coredns-7c65d6cfc9-6mjjr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5e4ec14e916 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6mjjr" WorkloadEndpoint="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-" Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:35.220 [INFO][5057] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6mjjr" WorkloadEndpoint="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:35.527 [INFO][5108] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" HandleID="k8s-pod-network.eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:35.529 [INFO][5108] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" HandleID="k8s-pod-network.eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3e00), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-251", "pod":"coredns-7c65d6cfc9-6mjjr", "timestamp":"2025-08-13 00:20:35.527142682 +0000 UTC"}, Hostname:"ip-172-31-18-251", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:35.529 [INFO][5108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:36.021 [INFO][5108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:36.021 [INFO][5108] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-251' Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:36.070 [INFO][5108] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" host="ip-172-31-18-251" Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:36.099 [INFO][5108] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-251" Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:36.143 [INFO][5108] ipam/ipam.go 511: Trying affinity for 192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:36.156 [INFO][5108] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:36.172 [INFO][5108] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:36.174 [INFO][5108] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" host="ip-172-31-18-251" Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:36.184 [INFO][5108] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:36.203 [INFO][5108] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" host="ip-172-31-18-251" Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:36.229 [INFO][5108] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.32.133/26] block=192.168.32.128/26 handle="k8s-pod-network.eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" host="ip-172-31-18-251" Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:36.229 [INFO][5108] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.133/26] handle="k8s-pod-network.eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" host="ip-172-31-18-251" Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:36.229 [INFO][5108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
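Annotation: a quick sanity check on the block all of these allocations come from: 192.168.32.128/26 holds 2^(32-26) = 64 addresses, spanning .128 through .191, so the assignments visible in this excerpt (.130 through .133 here, .134 below) all land in the single block this node holds an affinity for, with most of the block still free before a second block would need to be claimed.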
Aug 13 00:20:36.409071 containerd[1941]: 2025-08-13 00:20:36.229 [INFO][5108] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.133/26] IPv6=[] ContainerID="eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" HandleID="k8s-pod-network.eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" Aug 13 00:20:36.412613 containerd[1941]: 2025-08-13 00:20:36.272 [INFO][5057] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6mjjr" WorkloadEndpoint="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3a2c1391-3856-407e-9f32-3dffc0012695", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"", Pod:"coredns-7c65d6cfc9-6mjjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e4ec14e916", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:36.412613 containerd[1941]: 2025-08-13 00:20:36.292 [INFO][5057] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.133/32] ContainerID="eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6mjjr" WorkloadEndpoint="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" Aug 13 00:20:36.412613 containerd[1941]: 2025-08-13 00:20:36.297 [INFO][5057] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e4ec14e916 ContainerID="eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6mjjr" WorkloadEndpoint="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" Aug 13 00:20:36.412613 containerd[1941]: 2025-08-13 00:20:36.355 [INFO][5057] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6mjjr" 
WorkloadEndpoint="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" Aug 13 00:20:36.412613 containerd[1941]: 2025-08-13 00:20:36.356 [INFO][5057] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6mjjr" WorkloadEndpoint="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3a2c1391-3856-407e-9f32-3dffc0012695", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb", Pod:"coredns-7c65d6cfc9-6mjjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e4ec14e916", MAC:"9a:dc:16:26:2e:89", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:36.412613 containerd[1941]: 2025-08-13 00:20:36.391 [INFO][5057] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6mjjr" WorkloadEndpoint="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" Aug 13 00:20:36.503524 containerd[1941]: time="2025-08-13T00:20:36.482841143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:36.503524 containerd[1941]: time="2025-08-13T00:20:36.491242919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:36.503524 containerd[1941]: time="2025-08-13T00:20:36.491279855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:36.503524 containerd[1941]: time="2025-08-13T00:20:36.491449043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:36.561277 systemd[1]: Started cri-containerd-f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff.scope - libcontainer container f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff. Aug 13 00:20:36.690844 systemd[1]: run-netns-cni\x2d8d007021\x2d1ee9\x2d6fc6\x2df4a6\x2d72f88677039b.mount: Deactivated successfully. Aug 13 00:20:36.716137 systemd-networkd[1845]: cali70d1429311e: Gained IPv6LL Aug 13 00:20:36.782263 containerd[1941]: time="2025-08-13T00:20:36.776957497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:36.782263 containerd[1941]: time="2025-08-13T00:20:36.777084289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:36.782263 containerd[1941]: time="2025-08-13T00:20:36.777111061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:36.782263 containerd[1941]: time="2025-08-13T00:20:36.777278137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:36.933883 containerd[1941]: time="2025-08-13T00:20:36.933255181Z" level=info msg="StopPodSandbox for \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\"" Aug 13 00:20:36.959478 systemd[1]: Started cri-containerd-4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad.scope - libcontainer container 4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad. Aug 13 00:20:36.976288 containerd[1941]: time="2025-08-13T00:20:36.975540470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6db5fd67fb-ph246,Uid:594d665a-07d4-46f9-938a-94309ad04257,Namespace:calico-system,Attempt:1,} returns sandbox id \"08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606\"" Aug 13 00:20:37.015197 systemd[1]: Started cri-containerd-eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb.scope - libcontainer container eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb. Aug 13 00:20:37.089888 containerd[1941]: 2025-08-13 00:20:36.603 [WARNING][5198] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" WorkloadEndpoint="ip--172--31--18--251-k8s-whisker--b5fc5d65f--tngz7-eth0" Aug 13 00:20:37.089888 containerd[1941]: 2025-08-13 00:20:36.603 [INFO][5198] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Aug 13 00:20:37.089888 containerd[1941]: 2025-08-13 00:20:36.603 [INFO][5198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" iface="eth0" netns="" Aug 13 00:20:37.089888 containerd[1941]: 2025-08-13 00:20:36.603 [INFO][5198] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Aug 13 00:20:37.089888 containerd[1941]: 2025-08-13 00:20:36.603 [INFO][5198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Aug 13 00:20:37.089888 containerd[1941]: 2025-08-13 00:20:36.931 [INFO][5313] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" HandleID="k8s-pod-network.fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Workload="ip--172--31--18--251-k8s-whisker--b5fc5d65f--tngz7-eth0" Aug 13 00:20:37.089888 containerd[1941]: 2025-08-13 00:20:36.934 [INFO][5313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:37.089888 containerd[1941]: 2025-08-13 00:20:36.934 [INFO][5313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:37.089888 containerd[1941]: 2025-08-13 00:20:37.036 [WARNING][5313] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" HandleID="k8s-pod-network.fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Workload="ip--172--31--18--251-k8s-whisker--b5fc5d65f--tngz7-eth0" Aug 13 00:20:37.089888 containerd[1941]: 2025-08-13 00:20:37.036 [INFO][5313] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" HandleID="k8s-pod-network.fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Workload="ip--172--31--18--251-k8s-whisker--b5fc5d65f--tngz7-eth0" Aug 13 00:20:37.089888 containerd[1941]: 2025-08-13 00:20:37.047 [INFO][5313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:37.089888 containerd[1941]: 2025-08-13 00:20:37.071 [INFO][5198] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Aug 13 00:20:37.089888 containerd[1941]: time="2025-08-13T00:20:37.089570362Z" level=info msg="TearDown network for sandbox \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\" successfully" Aug 13 00:20:37.089888 containerd[1941]: time="2025-08-13T00:20:37.089608090Z" level=info msg="StopPodSandbox for \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\" returns successfully" Aug 13 00:20:37.091461 containerd[1941]: time="2025-08-13T00:20:37.091140286Z" level=info msg="RemovePodSandbox for \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\"" Aug 13 00:20:37.091461 containerd[1941]: time="2025-08-13T00:20:37.091215586Z" level=info msg="Forcibly stopping sandbox \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\"" Aug 13 00:20:37.221417 containerd[1941]: 2025-08-13 00:20:36.637 [INFO][5197] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Aug 13 00:20:37.221417 containerd[1941]: 2025-08-13 00:20:36.641 [INFO][5197] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" iface="eth0" netns="/var/run/netns/cni-a95c19ad-2069-d125-fe49-ea0da5e1f956" Aug 13 00:20:37.221417 containerd[1941]: 2025-08-13 00:20:36.642 [INFO][5197] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" iface="eth0" netns="/var/run/netns/cni-a95c19ad-2069-d125-fe49-ea0da5e1f956" Aug 13 00:20:37.221417 containerd[1941]: 2025-08-13 00:20:36.647 [INFO][5197] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" iface="eth0" netns="/var/run/netns/cni-a95c19ad-2069-d125-fe49-ea0da5e1f956" Aug 13 00:20:37.221417 containerd[1941]: 2025-08-13 00:20:36.648 [INFO][5197] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Aug 13 00:20:37.221417 containerd[1941]: 2025-08-13 00:20:36.648 [INFO][5197] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Aug 13 00:20:37.221417 containerd[1941]: 2025-08-13 00:20:37.089 [INFO][5325] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" HandleID="k8s-pod-network.f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" Aug 13 00:20:37.221417 containerd[1941]: 2025-08-13 00:20:37.094 [INFO][5325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:37.221417 containerd[1941]: 2025-08-13 00:20:37.094 [INFO][5325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:37.221417 containerd[1941]: 2025-08-13 00:20:37.171 [WARNING][5325] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" HandleID="k8s-pod-network.f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" Aug 13 00:20:37.221417 containerd[1941]: 2025-08-13 00:20:37.171 [INFO][5325] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" HandleID="k8s-pod-network.f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" Aug 13 00:20:37.221417 containerd[1941]: 2025-08-13 00:20:37.180 [INFO][5325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:37.221417 containerd[1941]: 2025-08-13 00:20:37.192 [INFO][5197] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Aug 13 00:20:37.228050 systemd-networkd[1845]: cali278814795cb: Gained IPv6LL Aug 13 00:20:37.239993 containerd[1941]: time="2025-08-13T00:20:37.239548667Z" level=info msg="TearDown network for sandbox \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\" successfully" Aug 13 00:20:37.245550 systemd[1]: run-netns-cni\x2da95c19ad\x2d2069\x2dd125\x2dfe49\x2dea0da5e1f956.mount: Deactivated successfully. 
Aug 13 00:20:37.250850 containerd[1941]: time="2025-08-13T00:20:37.250612439Z" level=info msg="StopPodSandbox for \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\" returns successfully" Aug 13 00:20:37.257577 containerd[1941]: time="2025-08-13T00:20:37.254364707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65d97cd995-jdgkd,Uid:d21d0cba-f46e-4a88-8bd1-db42d9c6b456,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:20:37.420449 systemd-networkd[1845]: cali5e4ec14e916: Gained IPv6LL Aug 13 00:20:37.443370 containerd[1941]: time="2025-08-13T00:20:37.443027676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6mjjr,Uid:3a2c1391-3856-407e-9f32-3dffc0012695,Namespace:kube-system,Attempt:1,} returns sandbox id \"eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb\"" Aug 13 00:20:37.507241 containerd[1941]: time="2025-08-13T00:20:37.507184440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-p2lsx,Uid:f1dfa461-4b5f-4ed8-a850-cf604830db07,Namespace:kube-system,Attempt:1,} returns sandbox id \"f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff\"" Aug 13 00:20:37.537826 containerd[1941]: time="2025-08-13T00:20:37.537692868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzmtn,Uid:e5f0f3a3-68e2-4f84-92cb-c460ed58604c,Namespace:calico-system,Attempt:1,} returns sandbox id \"4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad\"" Aug 13 00:20:37.542906 containerd[1941]: time="2025-08-13T00:20:37.541628748Z" level=info msg="CreateContainer within sandbox \"eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:20:37.585708 containerd[1941]: time="2025-08-13T00:20:37.584323741Z" level=info msg="CreateContainer within sandbox \"f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:20:37.742079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1631796294.mount: Deactivated successfully. 
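Annotation: each "Gained IPv6LL" line is systemd-networkd observing the kernel's link-local address appearing on a new cali interface. Under classic SLAAC that address is fe80:: plus the EUI-64 expansion of the MAC (flip the universal/local bit of the first octet, splice ff:fe into the middle). The sketch below uses the MAC aa:c6:84:cf:1d:43 from the endpoint dump earlier in this log; note that systemd-networkd can also be configured for stable-privacy link-local addresses, in which case the address is not MAC-derived.

package main

import (
	"fmt"
	"net"
	"net/netip"
)

// linkLocalEUI64 builds the fe80::/64 address the kernel derives from
// a 48-bit MAC under classic SLAAC: flip bit 1 of the first octet,
// insert ff:fe between the OUI and NIC halves, and prepend fe80::.
func linkLocalEUI64(mac net.HardwareAddr) netip.Addr {
	var a [16]byte
	a[0], a[1] = 0xfe, 0x80
	a[8] = mac[0] ^ 0x02 // flip the universal/local bit
	a[9], a[10] = mac[1], mac[2]
	a[11], a[12] = 0xff, 0xfe
	a[13], a[14], a[15] = mac[3], mac[4], mac[5]
	return netip.AddrFrom16(a)
}

func main() {
	mac, _ := net.ParseMAC("aa:c6:84:cf:1d:43") // MAC from the endpoint dump above
	fmt.Println(linkLocalEUI64(mac))            // fe80::a8c6:84ff:fecf:1d43
}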
Aug 13 00:20:37.823217 containerd[1941]: time="2025-08-13T00:20:37.823152746Z" level=info msg="CreateContainer within sandbox \"f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2261a58c5998643ce7b70bb899a62b008ddce029fad102d5a543de75af82115f\"" Aug 13 00:20:37.833163 containerd[1941]: time="2025-08-13T00:20:37.832986410Z" level=info msg="StartContainer for \"2261a58c5998643ce7b70bb899a62b008ddce029fad102d5a543de75af82115f\"" Aug 13 00:20:37.844399 containerd[1941]: time="2025-08-13T00:20:37.844205150Z" level=info msg="CreateContainer within sandbox \"eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"70b2260e8016deace873d16fadf7dc719635c87df9130f32d2b92807091c7733\"" Aug 13 00:20:37.868831 systemd-networkd[1845]: cali0bed2aee68a: Gained IPv6LL Aug 13 00:20:37.873284 containerd[1941]: time="2025-08-13T00:20:37.872684114Z" level=info msg="StartContainer for \"70b2260e8016deace873d16fadf7dc719635c87df9130f32d2b92807091c7733\"" Aug 13 00:20:38.002298 systemd-networkd[1845]: cali3f73ef32eb3: Link UP Aug 13 00:20:38.006991 systemd-networkd[1845]: cali3f73ef32eb3: Gained carrier Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.070 [INFO][5280] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0 goldmane-58fd7646b9- calico-system 0399420f-0a76-4a22-be47-c06978fb8813 1005 0 2025-08-13 00:20:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-18-251 goldmane-58fd7646b9-555lv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3f73ef32eb3 [] [] }} ContainerID="449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" Namespace="calico-system" Pod="goldmane-58fd7646b9-555lv" WorkloadEndpoint="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-" Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.076 [INFO][5280] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" Namespace="calico-system" Pod="goldmane-58fd7646b9-555lv" WorkloadEndpoint="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.708 [INFO][5404] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" HandleID="k8s-pod-network.449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" Workload="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.709 [INFO][5404] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" HandleID="k8s-pod-network.449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" Workload="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000399560), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-251", "pod":"goldmane-58fd7646b9-555lv", "timestamp":"2025-08-13 00:20:37.708159673 +0000 UTC"}, 
Hostname:"ip-172-31-18-251", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.709 [INFO][5404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.728 [INFO][5404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.728 [INFO][5404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-251' Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.801 [INFO][5404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" host="ip-172-31-18-251" Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.820 [INFO][5404] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-251" Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.850 [INFO][5404] ipam/ipam.go 511: Trying affinity for 192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.858 [INFO][5404] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.881 [INFO][5404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.881 [INFO][5404] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" host="ip-172-31-18-251" Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.891 [INFO][5404] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131 Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.907 [INFO][5404] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" host="ip-172-31-18-251" Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.953 [INFO][5404] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.32.134/26] block=192.168.32.128/26 handle="k8s-pod-network.449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" host="ip-172-31-18-251" Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.953 [INFO][5404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.134/26] handle="k8s-pod-network.449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" host="ip-172-31-18-251" Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.953 [INFO][5404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:20:38.104227 containerd[1941]: 2025-08-13 00:20:37.953 [INFO][5404] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.134/26] IPv6=[] ContainerID="449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" HandleID="k8s-pod-network.449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" Workload="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" Aug 13 00:20:38.106472 containerd[1941]: 2025-08-13 00:20:37.981 [INFO][5280] cni-plugin/k8s.go 418: Populated endpoint ContainerID="449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" Namespace="calico-system" Pod="goldmane-58fd7646b9-555lv" WorkloadEndpoint="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"0399420f-0a76-4a22-be47-c06978fb8813", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"", Pod:"goldmane-58fd7646b9-555lv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3f73ef32eb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:38.106472 containerd[1941]: 2025-08-13 00:20:37.983 [INFO][5280] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.134/32] ContainerID="449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" Namespace="calico-system" Pod="goldmane-58fd7646b9-555lv" WorkloadEndpoint="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" Aug 13 00:20:38.106472 containerd[1941]: 2025-08-13 00:20:37.984 [INFO][5280] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f73ef32eb3 ContainerID="449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" Namespace="calico-system" Pod="goldmane-58fd7646b9-555lv" WorkloadEndpoint="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" Aug 13 00:20:38.106472 containerd[1941]: 2025-08-13 00:20:38.020 [INFO][5280] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" Namespace="calico-system" Pod="goldmane-58fd7646b9-555lv" WorkloadEndpoint="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" Aug 13 00:20:38.106472 containerd[1941]: 2025-08-13 00:20:38.029 [INFO][5280] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" Namespace="calico-system" Pod="goldmane-58fd7646b9-555lv" 
WorkloadEndpoint="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"0399420f-0a76-4a22-be47-c06978fb8813", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131", Pod:"goldmane-58fd7646b9-555lv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3f73ef32eb3", MAC:"56:99:eb:16:9d:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:38.106472 containerd[1941]: 2025-08-13 00:20:38.079 [INFO][5280] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131" Namespace="calico-system" Pod="goldmane-58fd7646b9-555lv" WorkloadEndpoint="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" Aug 13 00:20:38.135703 systemd[1]: Started cri-containerd-70b2260e8016deace873d16fadf7dc719635c87df9130f32d2b92807091c7733.scope - libcontainer container 70b2260e8016deace873d16fadf7dc719635c87df9130f32d2b92807091c7733. Aug 13 00:20:38.155191 systemd[1]: Started cri-containerd-2261a58c5998643ce7b70bb899a62b008ddce029fad102d5a543de75af82115f.scope - libcontainer container 2261a58c5998643ce7b70bb899a62b008ddce029fad102d5a543de75af82115f. Aug 13 00:20:38.173990 containerd[1941]: 2025-08-13 00:20:37.641 [WARNING][5399] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" WorkloadEndpoint="ip--172--31--18--251-k8s-whisker--b5fc5d65f--tngz7-eth0" Aug 13 00:20:38.173990 containerd[1941]: 2025-08-13 00:20:37.654 [INFO][5399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Aug 13 00:20:38.173990 containerd[1941]: 2025-08-13 00:20:37.654 [INFO][5399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" iface="eth0" netns="" Aug 13 00:20:38.173990 containerd[1941]: 2025-08-13 00:20:37.654 [INFO][5399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Aug 13 00:20:38.173990 containerd[1941]: 2025-08-13 00:20:37.654 [INFO][5399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Aug 13 00:20:38.173990 containerd[1941]: 2025-08-13 00:20:38.026 [INFO][5451] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" HandleID="k8s-pod-network.fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Workload="ip--172--31--18--251-k8s-whisker--b5fc5d65f--tngz7-eth0" Aug 13 00:20:38.173990 containerd[1941]: 2025-08-13 00:20:38.031 [INFO][5451] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:38.173990 containerd[1941]: 2025-08-13 00:20:38.031 [INFO][5451] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:38.173990 containerd[1941]: 2025-08-13 00:20:38.097 [WARNING][5451] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" HandleID="k8s-pod-network.fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Workload="ip--172--31--18--251-k8s-whisker--b5fc5d65f--tngz7-eth0" Aug 13 00:20:38.173990 containerd[1941]: 2025-08-13 00:20:38.097 [INFO][5451] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" HandleID="k8s-pod-network.fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Workload="ip--172--31--18--251-k8s-whisker--b5fc5d65f--tngz7-eth0" Aug 13 00:20:38.173990 containerd[1941]: 2025-08-13 00:20:38.122 [INFO][5451] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:38.173990 containerd[1941]: 2025-08-13 00:20:38.164 [INFO][5399] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e" Aug 13 00:20:38.175129 containerd[1941]: time="2025-08-13T00:20:38.174026255Z" level=info msg="TearDown network for sandbox \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\" successfully" Aug 13 00:20:38.201141 containerd[1941]: time="2025-08-13T00:20:38.200477088Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:20:38.201141 containerd[1941]: time="2025-08-13T00:20:38.200586588Z" level=info msg="RemovePodSandbox \"fca278bef1c8ffef15939796198db90733a5b651ec53441c7abb5fec329f6b2e\" returns successfully" Aug 13 00:20:38.243496 containerd[1941]: 2025-08-13 00:20:37.560 [INFO][5378] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Aug 13 00:20:38.243496 containerd[1941]: 2025-08-13 00:20:37.566 [INFO][5378] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" iface="eth0" netns="/var/run/netns/cni-eddd483d-f4e4-e05d-549e-f5a0f6499b5e" Aug 13 00:20:38.243496 containerd[1941]: 2025-08-13 00:20:37.575 [INFO][5378] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" iface="eth0" netns="/var/run/netns/cni-eddd483d-f4e4-e05d-549e-f5a0f6499b5e" Aug 13 00:20:38.243496 containerd[1941]: 2025-08-13 00:20:37.576 [INFO][5378] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" iface="eth0" netns="/var/run/netns/cni-eddd483d-f4e4-e05d-549e-f5a0f6499b5e" Aug 13 00:20:38.243496 containerd[1941]: 2025-08-13 00:20:37.576 [INFO][5378] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Aug 13 00:20:38.243496 containerd[1941]: 2025-08-13 00:20:37.576 [INFO][5378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Aug 13 00:20:38.243496 containerd[1941]: 2025-08-13 00:20:38.035 [INFO][5441] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" HandleID="k8s-pod-network.2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" Aug 13 00:20:38.243496 containerd[1941]: 2025-08-13 00:20:38.037 [INFO][5441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:38.243496 containerd[1941]: 2025-08-13 00:20:38.124 [INFO][5441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:38.243496 containerd[1941]: 2025-08-13 00:20:38.170 [WARNING][5441] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" HandleID="k8s-pod-network.2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" Aug 13 00:20:38.243496 containerd[1941]: 2025-08-13 00:20:38.172 [INFO][5441] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" HandleID="k8s-pod-network.2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" Aug 13 00:20:38.243496 containerd[1941]: 2025-08-13 00:20:38.183 [INFO][5441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:38.243496 containerd[1941]: 2025-08-13 00:20:38.210 [INFO][5378] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Aug 13 00:20:38.253427 containerd[1941]: time="2025-08-13T00:20:38.250853352Z" level=info msg="TearDown network for sandbox \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\" successfully" Aug 13 00:20:38.253427 containerd[1941]: time="2025-08-13T00:20:38.251712732Z" level=info msg="StopPodSandbox for \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\" returns successfully" Aug 13 00:20:38.258673 containerd[1941]: time="2025-08-13T00:20:38.258460524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65d97cd995-7ztd9,Uid:73b3a345-ddf0-47d6-a1d3-270371119508,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:20:38.378267 containerd[1941]: time="2025-08-13T00:20:38.378126901Z" level=info msg="StartContainer for \"70b2260e8016deace873d16fadf7dc719635c87df9130f32d2b92807091c7733\" returns successfully" Aug 13 00:20:38.416810 containerd[1941]: time="2025-08-13T00:20:38.407890117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:38.425061 containerd[1941]: time="2025-08-13T00:20:38.420498757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:38.425061 containerd[1941]: time="2025-08-13T00:20:38.422845489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:38.431025 containerd[1941]: time="2025-08-13T00:20:38.428409145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:38.440470 containerd[1941]: time="2025-08-13T00:20:38.440275537Z" level=info msg="StartContainer for \"2261a58c5998643ce7b70bb899a62b008ddce029fad102d5a543de75af82115f\" returns successfully" Aug 13 00:20:38.541185 systemd[1]: Started cri-containerd-449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131.scope - libcontainer container 449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131. Aug 13 00:20:38.696054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount451476972.mount: Deactivated successfully. Aug 13 00:20:38.697575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1356415971.mount: Deactivated successfully. Aug 13 00:20:38.698039 systemd[1]: run-netns-cni\x2deddd483d\x2df4e4\x2de05d\x2d549e\x2df5a0f6499b5e.mount: Deactivated successfully. 
Aug 13 00:20:38.751259 systemd-networkd[1845]: calib5b0069624d: Link UP Aug 13 00:20:38.757046 systemd-networkd[1845]: calib5b0069624d: Gained carrier Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:37.975 [INFO][5423] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0 calico-apiserver-65d97cd995- calico-apiserver d21d0cba-f46e-4a88-8bd1-db42d9c6b456 1022 0 2025-08-13 00:19:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65d97cd995 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-251 calico-apiserver-65d97cd995-jdgkd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib5b0069624d [] [] }} ContainerID="1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" Namespace="calico-apiserver" Pod="calico-apiserver-65d97cd995-jdgkd" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-" Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:37.975 [INFO][5423] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" Namespace="calico-apiserver" Pod="calico-apiserver-65d97cd995-jdgkd" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.429 [INFO][5490] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" HandleID="k8s-pod-network.1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.435 [INFO][5490] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" HandleID="k8s-pod-network.1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cb30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-251", "pod":"calico-apiserver-65d97cd995-jdgkd", "timestamp":"2025-08-13 00:20:38.421229269 +0000 UTC"}, Hostname:"ip-172-31-18-251", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.439 [INFO][5490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.439 [INFO][5490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.439 [INFO][5490] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-251' Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.501 [INFO][5490] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" host="ip-172-31-18-251" Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.531 [INFO][5490] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-251" Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.562 [INFO][5490] ipam/ipam.go 511: Trying affinity for 192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.571 [INFO][5490] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.590 [INFO][5490] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.593 [INFO][5490] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" host="ip-172-31-18-251" Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.602 [INFO][5490] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.618 [INFO][5490] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" host="ip-172-31-18-251" Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.667 [INFO][5490] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.32.135/26] block=192.168.32.128/26 handle="k8s-pod-network.1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" host="ip-172-31-18-251" Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.673 [INFO][5490] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.135/26] handle="k8s-pod-network.1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" host="ip-172-31-18-251" Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.673 [INFO][5490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:20:38.885017 containerd[1941]: 2025-08-13 00:20:38.673 [INFO][5490] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.135/26] IPv6=[] ContainerID="1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" HandleID="k8s-pod-network.1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" Aug 13 00:20:38.888726 containerd[1941]: 2025-08-13 00:20:38.698 [INFO][5423] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" Namespace="calico-apiserver" Pod="calico-apiserver-65d97cd995-jdgkd" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0", GenerateName:"calico-apiserver-65d97cd995-", Namespace:"calico-apiserver", SelfLink:"", UID:"d21d0cba-f46e-4a88-8bd1-db42d9c6b456", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65d97cd995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"", Pod:"calico-apiserver-65d97cd995-jdgkd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5b0069624d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:38.888726 containerd[1941]: 2025-08-13 00:20:38.698 [INFO][5423] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.135/32] ContainerID="1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" Namespace="calico-apiserver" Pod="calico-apiserver-65d97cd995-jdgkd" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" Aug 13 00:20:38.888726 containerd[1941]: 2025-08-13 00:20:38.698 [INFO][5423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5b0069624d ContainerID="1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" Namespace="calico-apiserver" Pod="calico-apiserver-65d97cd995-jdgkd" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" Aug 13 00:20:38.888726 containerd[1941]: 2025-08-13 00:20:38.764 [INFO][5423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" Namespace="calico-apiserver" Pod="calico-apiserver-65d97cd995-jdgkd" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" Aug 13 00:20:38.888726 containerd[1941]: 2025-08-13 00:20:38.766 [INFO][5423] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" Namespace="calico-apiserver" Pod="calico-apiserver-65d97cd995-jdgkd" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0", GenerateName:"calico-apiserver-65d97cd995-", Namespace:"calico-apiserver", SelfLink:"", UID:"d21d0cba-f46e-4a88-8bd1-db42d9c6b456", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65d97cd995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b", Pod:"calico-apiserver-65d97cd995-jdgkd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5b0069624d", MAC:"6a:a4:bc:2f:54:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:38.888726 containerd[1941]: 2025-08-13 00:20:38.856 [INFO][5423] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b" Namespace="calico-apiserver" Pod="calico-apiserver-65d97cd995-jdgkd" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" Aug 13 00:20:38.965901 kubelet[3135]: I0813 00:20:38.965355 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-p2lsx" podStartSLOduration=57.962268687 podStartE2EDuration="57.962268687s" podCreationTimestamp="2025-08-13 00:19:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:20:38.900488259 +0000 UTC m=+63.210836163" watchObservedRunningTime="2025-08-13 00:20:38.962268687 +0000 UTC m=+63.272616543" Aug 13 00:20:39.068419 containerd[1941]: time="2025-08-13T00:20:39.065218356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:39.068419 containerd[1941]: time="2025-08-13T00:20:39.065458044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:39.068419 containerd[1941]: time="2025-08-13T00:20:39.065501736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:39.071472 containerd[1941]: time="2025-08-13T00:20:39.070729560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:39.219089 systemd[1]: Started cri-containerd-1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b.scope - libcontainer container 1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b. Aug 13 00:20:39.247079 containerd[1941]: time="2025-08-13T00:20:39.246540049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-555lv,Uid:0399420f-0a76-4a22-be47-c06978fb8813,Namespace:calico-system,Attempt:1,} returns sandbox id \"449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131\"" Aug 13 00:20:39.368552 systemd-networkd[1845]: caliad321136fe3: Link UP Aug 13 00:20:39.372655 systemd-networkd[1845]: caliad321136fe3: Gained carrier Aug 13 00:20:39.437600 kubelet[3135]: I0813 00:20:39.434093 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6mjjr" podStartSLOduration=58.434065982 podStartE2EDuration="58.434065982s" podCreationTimestamp="2025-08-13 00:19:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:20:38.966610527 +0000 UTC m=+63.276958395" watchObservedRunningTime="2025-08-13 00:20:39.434065982 +0000 UTC m=+63.744413838" Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:38.748 [INFO][5555] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0 calico-apiserver-65d97cd995- calico-apiserver 73b3a345-ddf0-47d6-a1d3-270371119508 1031 0 2025-08-13 00:19:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65d97cd995 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-251 calico-apiserver-65d97cd995-7ztd9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliad321136fe3 [] [] }} ContainerID="21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" Namespace="calico-apiserver" Pod="calico-apiserver-65d97cd995-7ztd9" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-" Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:38.755 [INFO][5555] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" Namespace="calico-apiserver" Pod="calico-apiserver-65d97cd995-7ztd9" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.024 [INFO][5606] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" HandleID="k8s-pod-network.21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.025 [INFO][5606] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" 
HandleID="k8s-pod-network.21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000327810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-251", "pod":"calico-apiserver-65d97cd995-7ztd9", "timestamp":"2025-08-13 00:20:39.024804876 +0000 UTC"}, Hostname:"ip-172-31-18-251", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.026 [INFO][5606] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.026 [INFO][5606] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.027 [INFO][5606] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-251' Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.071 [INFO][5606] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" host="ip-172-31-18-251" Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.114 [INFO][5606] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-251" Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.257 [INFO][5606] ipam/ipam.go 511: Trying affinity for 192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.267 [INFO][5606] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.279 [INFO][5606] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ip-172-31-18-251" Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.279 [INFO][5606] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" host="ip-172-31-18-251" Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.287 [INFO][5606] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70 Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.303 [INFO][5606] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" host="ip-172-31-18-251" Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.327 [INFO][5606] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.32.136/26] block=192.168.32.128/26 handle="k8s-pod-network.21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" host="ip-172-31-18-251" Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.342 [INFO][5606] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.136/26] handle="k8s-pod-network.21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" host="ip-172-31-18-251" Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.342 [INFO][5606] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:20:39.446650 containerd[1941]: 2025-08-13 00:20:39.342 [INFO][5606] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.32.136/26] IPv6=[] ContainerID="21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" HandleID="k8s-pod-network.21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" Aug 13 00:20:39.447862 containerd[1941]: 2025-08-13 00:20:39.356 [INFO][5555] cni-plugin/k8s.go 418: Populated endpoint ContainerID="21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" Namespace="calico-apiserver" Pod="calico-apiserver-65d97cd995-7ztd9" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0", GenerateName:"calico-apiserver-65d97cd995-", Namespace:"calico-apiserver", SelfLink:"", UID:"73b3a345-ddf0-47d6-a1d3-270371119508", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65d97cd995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"", Pod:"calico-apiserver-65d97cd995-7ztd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad321136fe3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:39.447862 containerd[1941]: 2025-08-13 00:20:39.357 [INFO][5555] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.136/32] ContainerID="21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" Namespace="calico-apiserver" Pod="calico-apiserver-65d97cd995-7ztd9" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" Aug 13 00:20:39.447862 containerd[1941]: 2025-08-13 00:20:39.357 [INFO][5555] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad321136fe3 ContainerID="21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" Namespace="calico-apiserver" Pod="calico-apiserver-65d97cd995-7ztd9" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" Aug 13 00:20:39.447862 containerd[1941]: 2025-08-13 00:20:39.376 [INFO][5555] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" Namespace="calico-apiserver" Pod="calico-apiserver-65d97cd995-7ztd9" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" Aug 13 00:20:39.447862 containerd[1941]: 2025-08-13 00:20:39.390 [INFO][5555] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" Namespace="calico-apiserver" Pod="calico-apiserver-65d97cd995-7ztd9" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0", GenerateName:"calico-apiserver-65d97cd995-", Namespace:"calico-apiserver", SelfLink:"", UID:"73b3a345-ddf0-47d6-a1d3-270371119508", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65d97cd995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70", Pod:"calico-apiserver-65d97cd995-7ztd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad321136fe3", MAC:"5a:5e:0f:dc:79:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:39.447862 containerd[1941]: 2025-08-13 00:20:39.437 [INFO][5555] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70" Namespace="calico-apiserver" Pod="calico-apiserver-65d97cd995-7ztd9" WorkloadEndpoint="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" Aug 13 00:20:39.457348 systemd[1]: Started sshd@9-172.31.18.251:22-139.178.89.65:60626.service - OpenSSH per-connection server daemon (139.178.89.65:60626). Aug 13 00:20:39.556059 containerd[1941]: time="2025-08-13T00:20:39.553062830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:39.556059 containerd[1941]: time="2025-08-13T00:20:39.553196522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:39.558073 containerd[1941]: time="2025-08-13T00:20:39.556298582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:39.558073 containerd[1941]: time="2025-08-13T00:20:39.556535714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:39.572851 containerd[1941]: time="2025-08-13T00:20:39.569959694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65d97cd995-jdgkd,Uid:d21d0cba-f46e-4a88-8bd1-db42d9c6b456,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b\"" Aug 13 00:20:39.628144 systemd[1]: Started cri-containerd-21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70.scope - libcontainer container 21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70. Aug 13 00:20:39.690299 sshd[5674]: Accepted publickey for core from 139.178.89.65 port 60626 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:39.702362 sshd[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:39.717482 systemd-logind[1913]: New session 10 of user core. Aug 13 00:20:39.726229 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:20:39.766603 containerd[1941]: time="2025-08-13T00:20:39.766387851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65d97cd995-7ztd9,Uid:73b3a345-ddf0-47d6-a1d3-270371119508,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70\"" Aug 13 00:20:39.853562 systemd-networkd[1845]: cali3f73ef32eb3: Gained IPv6LL Aug 13 00:20:39.980500 systemd-networkd[1845]: calib5b0069624d: Gained IPv6LL Aug 13 00:20:40.140499 sshd[5674]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:40.150739 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:20:40.153642 systemd[1]: sshd@9-172.31.18.251:22-139.178.89.65:60626.service: Deactivated successfully. Aug 13 00:20:40.171300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount655471770.mount: Deactivated successfully. Aug 13 00:20:40.174952 systemd-logind[1913]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:20:40.198860 systemd[1]: Started sshd@10-172.31.18.251:22-139.178.89.65:60634.service - OpenSSH per-connection server daemon (139.178.89.65:60634). Aug 13 00:20:40.200693 systemd-logind[1913]: Removed session 10. 
Aug 13 00:20:40.225803 containerd[1941]: time="2025-08-13T00:20:40.225299366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:40.228128 containerd[1941]: time="2025-08-13T00:20:40.228063806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 13 00:20:40.230874 containerd[1941]: time="2025-08-13T00:20:40.230809526Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:40.238324 containerd[1941]: time="2025-08-13T00:20:40.238196654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:40.240086 containerd[1941]: time="2025-08-13T00:20:40.239843270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 6.971661407s" Aug 13 00:20:40.240086 containerd[1941]: time="2025-08-13T00:20:40.239917394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 13 00:20:40.243677 containerd[1941]: time="2025-08-13T00:20:40.242407982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:20:40.246270 containerd[1941]: time="2025-08-13T00:20:40.246051086Z" level=info msg="CreateContainer within sandbox \"bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:20:40.275132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3835587644.mount: Deactivated successfully. Aug 13 00:20:40.285591 containerd[1941]: time="2025-08-13T00:20:40.285534002Z" level=info msg="CreateContainer within sandbox \"bf50bf25e1c6cf307afc672421cbd6a99456820fcfc5e3bcf12414d6ac2b0f09\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"26676b497dfb990ac10d452cb6fc42d143c1f0046312aa4fd66453bc39ad38b2\"" Aug 13 00:20:40.288755 containerd[1941]: time="2025-08-13T00:20:40.288569306Z" level=info msg="StartContainer for \"26676b497dfb990ac10d452cb6fc42d143c1f0046312aa4fd66453bc39ad38b2\"" Aug 13 00:20:40.355084 systemd[1]: Started cri-containerd-26676b497dfb990ac10d452cb6fc42d143c1f0046312aa4fd66453bc39ad38b2.scope - libcontainer container 26676b497dfb990ac10d452cb6fc42d143c1f0046312aa4fd66453bc39ad38b2. Aug 13 00:20:40.396952 sshd[5745]: Accepted publickey for core from 139.178.89.65 port 60634 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:40.403693 sshd[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:40.419718 systemd-logind[1913]: New session 11 of user core. Aug 13 00:20:40.428355 systemd[1]: Started session-11.scope - Session 11 of User core. 
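For scale, the whisker-backend pull recorded above moved 30,814,581 bytes in 6.971661407s, an effective rate of roughly 4.4 MB/s (30814581 / 6.9717 ≈ 4.4e6 bytes/s). That figure is elapsed wall time as containerd reports it, not a direct network measurement, since layer fetches can overlap with other work on the node.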
Aug 13 00:20:40.451807 containerd[1941]: time="2025-08-13T00:20:40.451706079Z" level=info msg="StartContainer for \"26676b497dfb990ac10d452cb6fc42d143c1f0046312aa4fd66453bc39ad38b2\" returns successfully" Aug 13 00:20:40.796115 sshd[5745]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:40.810969 systemd[1]: sshd@10-172.31.18.251:22-139.178.89.65:60634.service: Deactivated successfully. Aug 13 00:20:40.820260 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:20:40.826797 systemd-logind[1913]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:20:40.874033 systemd[1]: Started sshd@11-172.31.18.251:22-139.178.89.65:60648.service - OpenSSH per-connection server daemon (139.178.89.65:60648). Aug 13 00:20:40.878730 systemd-logind[1913]: Removed session 11. Aug 13 00:20:41.078736 sshd[5796]: Accepted publickey for core from 139.178.89.65 port 60648 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:41.082143 sshd[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:41.090087 systemd-logind[1913]: New session 12 of user core. Aug 13 00:20:41.101056 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:20:41.324055 systemd-networkd[1845]: caliad321136fe3: Gained IPv6LL Aug 13 00:20:41.446426 sshd[5796]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:41.455203 systemd[1]: sshd@11-172.31.18.251:22-139.178.89.65:60648.service: Deactivated successfully. Aug 13 00:20:41.465789 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:20:41.472825 systemd-logind[1913]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:20:41.477259 systemd-logind[1913]: Removed session 12. Aug 13 00:20:41.540807 kubelet[3135]: I0813 00:20:41.540281 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-54b5bbf697-b6xv7" podStartSLOduration=2.949473237 podStartE2EDuration="11.540256672s" podCreationTimestamp="2025-08-13 00:20:30 +0000 UTC" firstStartedPulling="2025-08-13 00:20:31.651214639 +0000 UTC m=+55.961562507" lastFinishedPulling="2025-08-13 00:20:40.241998062 +0000 UTC m=+64.552345942" observedRunningTime="2025-08-13 00:20:40.906241109 +0000 UTC m=+65.216589001" watchObservedRunningTime="2025-08-13 00:20:41.540256672 +0000 UTC m=+65.850604540" Aug 13 00:20:43.049312 containerd[1941]: time="2025-08-13T00:20:43.049234588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:43.052103 containerd[1941]: time="2025-08-13T00:20:43.052012768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 13 00:20:43.053515 containerd[1941]: time="2025-08-13T00:20:43.053416504Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:43.059252 containerd[1941]: time="2025-08-13T00:20:43.058190332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:43.062728 containerd[1941]: time="2025-08-13T00:20:43.062624956Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id 
\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.820144002s" Aug 13 00:20:43.063171 containerd[1941]: time="2025-08-13T00:20:43.063130804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 13 00:20:43.073269 containerd[1941]: time="2025-08-13T00:20:43.072151012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:20:43.109328 containerd[1941]: time="2025-08-13T00:20:43.109275256Z" level=info msg="CreateContainer within sandbox \"08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:20:43.143666 containerd[1941]: time="2025-08-13T00:20:43.143573008Z" level=info msg="CreateContainer within sandbox \"08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"013c1d006943b5cee31b001a78c836fea2caf42e0a0d72bbb75a04a02245c3ac\"" Aug 13 00:20:43.147895 containerd[1941]: time="2025-08-13T00:20:43.145057996Z" level=info msg="StartContainer for \"013c1d006943b5cee31b001a78c836fea2caf42e0a0d72bbb75a04a02245c3ac\"" Aug 13 00:20:43.205105 systemd[1]: Started cri-containerd-013c1d006943b5cee31b001a78c836fea2caf42e0a0d72bbb75a04a02245c3ac.scope - libcontainer container 013c1d006943b5cee31b001a78c836fea2caf42e0a0d72bbb75a04a02245c3ac. Aug 13 00:20:43.277289 containerd[1941]: time="2025-08-13T00:20:43.277031417Z" level=info msg="StartContainer for \"013c1d006943b5cee31b001a78c836fea2caf42e0a0d72bbb75a04a02245c3ac\" returns successfully" Aug 13 00:20:43.984158 kubelet[3135]: I0813 00:20:43.984032 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6db5fd67fb-ph246" podStartSLOduration=26.930497294 podStartE2EDuration="32.984010352s" podCreationTimestamp="2025-08-13 00:20:11 +0000 UTC" firstStartedPulling="2025-08-13 00:20:37.014547862 +0000 UTC m=+61.324895730" lastFinishedPulling="2025-08-13 00:20:43.06806092 +0000 UTC m=+67.378408788" observedRunningTime="2025-08-13 00:20:43.910680956 +0000 UTC m=+68.221028884" watchObservedRunningTime="2025-08-13 00:20:43.984010352 +0000 UTC m=+68.294358256" Aug 13 00:20:44.148949 ntpd[1908]: Listen normally on 7 vxlan.calico 192.168.32.128:123 Aug 13 00:20:44.150154 ntpd[1908]: 13 Aug 00:20:44 ntpd[1908]: Listen normally on 7 vxlan.calico 192.168.32.128:123 Aug 13 00:20:44.150154 ntpd[1908]: 13 Aug 00:20:44 ntpd[1908]: Listen normally on 8 calid6db0231eb5 [fe80::ecee:eeff:feee:eeee%4]:123 Aug 13 00:20:44.150154 ntpd[1908]: 13 Aug 00:20:44 ntpd[1908]: Listen normally on 9 vxlan.calico [fe80::6400:3fff:fe4e:7da7%5]:123 Aug 13 00:20:44.150154 ntpd[1908]: 13 Aug 00:20:44 ntpd[1908]: Listen normally on 10 cali70d1429311e [fe80::ecee:eeff:feee:eeee%8]:123 Aug 13 00:20:44.150154 ntpd[1908]: 13 Aug 00:20:44 ntpd[1908]: Listen normally on 11 cali278814795cb [fe80::ecee:eeff:feee:eeee%9]:123 Aug 13 00:20:44.150154 ntpd[1908]: 13 Aug 00:20:44 ntpd[1908]: Listen normally on 12 cali0bed2aee68a [fe80::ecee:eeff:feee:eeee%10]:123 Aug 13 00:20:44.150154 ntpd[1908]: 13 Aug 00:20:44 ntpd[1908]: Listen normally on 13 cali5e4ec14e916 
[fe80::ecee:eeff:feee:eeee%11]:123 Aug 13 00:20:44.150154 ntpd[1908]: 13 Aug 00:20:44 ntpd[1908]: Listen normally on 14 cali3f73ef32eb3 [fe80::ecee:eeff:feee:eeee%12]:123 Aug 13 00:20:44.150154 ntpd[1908]: 13 Aug 00:20:44 ntpd[1908]: Listen normally on 15 calib5b0069624d [fe80::ecee:eeff:feee:eeee%13]:123 Aug 13 00:20:44.150154 ntpd[1908]: 13 Aug 00:20:44 ntpd[1908]: Listen normally on 16 caliad321136fe3 [fe80::ecee:eeff:feee:eeee%14]:123 Aug 13 00:20:44.149072 ntpd[1908]: Listen normally on 8 calid6db0231eb5 [fe80::ecee:eeff:feee:eeee%4]:123 Aug 13 00:20:44.149151 ntpd[1908]: Listen normally on 9 vxlan.calico [fe80::6400:3fff:fe4e:7da7%5]:123 Aug 13 00:20:44.149217 ntpd[1908]: Listen normally on 10 cali70d1429311e [fe80::ecee:eeff:feee:eeee%8]:123 Aug 13 00:20:44.149283 ntpd[1908]: Listen normally on 11 cali278814795cb [fe80::ecee:eeff:feee:eeee%9]:123 Aug 13 00:20:44.149349 ntpd[1908]: Listen normally on 12 cali0bed2aee68a [fe80::ecee:eeff:feee:eeee%10]:123 Aug 13 00:20:44.149415 ntpd[1908]: Listen normally on 13 cali5e4ec14e916 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 13 00:20:44.149479 ntpd[1908]: Listen normally on 14 cali3f73ef32eb3 [fe80::ecee:eeff:feee:eeee%12]:123 Aug 13 00:20:44.149567 ntpd[1908]: Listen normally on 15 calib5b0069624d [fe80::ecee:eeff:feee:eeee%13]:123 Aug 13 00:20:44.149645 ntpd[1908]: Listen normally on 16 caliad321136fe3 [fe80::ecee:eeff:feee:eeee%14]:123 Aug 13 00:20:44.426925 containerd[1941]: time="2025-08-13T00:20:44.425947399Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:44.428240 containerd[1941]: time="2025-08-13T00:20:44.428167735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 13 00:20:44.429674 containerd[1941]: time="2025-08-13T00:20:44.429589831Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:44.433900 containerd[1941]: time="2025-08-13T00:20:44.433832443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:44.436088 containerd[1941]: time="2025-08-13T00:20:44.435537391Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.361615323s" Aug 13 00:20:44.436088 containerd[1941]: time="2025-08-13T00:20:44.435598183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 13 00:20:44.438341 containerd[1941]: time="2025-08-13T00:20:44.438026983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:20:44.441509 containerd[1941]: time="2025-08-13T00:20:44.441447247Z" level=info msg="CreateContainer within sandbox \"4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:20:44.477491 containerd[1941]: time="2025-08-13T00:20:44.476863591Z" level=info msg="CreateContainer within 
sandbox \"4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"60d2c07e055e3f615da4290a7942e17be6b6b37dd28ad5d32ea6193a8c8b3457\"" Aug 13 00:20:44.478299 containerd[1941]: time="2025-08-13T00:20:44.478242223Z" level=info msg="StartContainer for \"60d2c07e055e3f615da4290a7942e17be6b6b37dd28ad5d32ea6193a8c8b3457\"" Aug 13 00:20:44.582086 systemd[1]: Started cri-containerd-60d2c07e055e3f615da4290a7942e17be6b6b37dd28ad5d32ea6193a8c8b3457.scope - libcontainer container 60d2c07e055e3f615da4290a7942e17be6b6b37dd28ad5d32ea6193a8c8b3457. Aug 13 00:20:44.635534 containerd[1941]: time="2025-08-13T00:20:44.635449640Z" level=info msg="StartContainer for \"60d2c07e055e3f615da4290a7942e17be6b6b37dd28ad5d32ea6193a8c8b3457\" returns successfully" Aug 13 00:20:46.490326 systemd[1]: Started sshd@12-172.31.18.251:22-139.178.89.65:60650.service - OpenSSH per-connection server daemon (139.178.89.65:60650). Aug 13 00:20:46.683972 sshd[5955]: Accepted publickey for core from 139.178.89.65 port 60650 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:46.687573 sshd[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:46.696848 systemd-logind[1913]: New session 13 of user core. Aug 13 00:20:46.709050 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:20:46.999965 sshd[5955]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:47.007405 systemd-logind[1913]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:20:47.009217 systemd[1]: sshd@12-172.31.18.251:22-139.178.89.65:60650.service: Deactivated successfully. Aug 13 00:20:47.014180 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:20:47.017885 systemd-logind[1913]: Removed session 13. Aug 13 00:20:48.848412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987941407.mount: Deactivated successfully. 
Aug 13 00:20:49.629896 containerd[1941]: time="2025-08-13T00:20:49.628619964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:49.631183 containerd[1941]: time="2025-08-13T00:20:49.630905604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 13 00:20:49.633522 containerd[1941]: time="2025-08-13T00:20:49.633459624Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:49.639323 containerd[1941]: time="2025-08-13T00:20:49.639132372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:49.641311 containerd[1941]: time="2025-08-13T00:20:49.641062284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 5.202971509s" Aug 13 00:20:49.641311 containerd[1941]: time="2025-08-13T00:20:49.641128296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 13 00:20:49.644750 containerd[1941]: time="2025-08-13T00:20:49.644157612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:20:49.646844 containerd[1941]: time="2025-08-13T00:20:49.646673652Z" level=info msg="CreateContainer within sandbox \"449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:20:49.679106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount631863449.mount: Deactivated successfully. Aug 13 00:20:49.681285 containerd[1941]: time="2025-08-13T00:20:49.679846009Z" level=info msg="CreateContainer within sandbox \"449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ab462823f7377609feb840c405ae24e8298e31129d52706d67b1ac1a27c944a4\"" Aug 13 00:20:49.684804 containerd[1941]: time="2025-08-13T00:20:49.683148673Z" level=info msg="StartContainer for \"ab462823f7377609feb840c405ae24e8298e31129d52706d67b1ac1a27c944a4\"" Aug 13 00:20:49.764079 systemd[1]: Started cri-containerd-ab462823f7377609feb840c405ae24e8298e31129d52706d67b1ac1a27c944a4.scope - libcontainer container ab462823f7377609feb840c405ae24e8298e31129d52706d67b1ac1a27c944a4. Aug 13 00:20:49.841658 containerd[1941]: time="2025-08-13T00:20:49.841586413Z" level=info msg="StartContainer for \"ab462823f7377609feb840c405ae24e8298e31129d52706d67b1ac1a27c944a4\" returns successfully" Aug 13 00:20:51.060154 systemd[1]: run-containerd-runc-k8s.io-ab462823f7377609feb840c405ae24e8298e31129d52706d67b1ac1a27c944a4-runc.pPVnGO.mount: Deactivated successfully. Aug 13 00:20:52.041442 systemd[1]: Started sshd@13-172.31.18.251:22-139.178.89.65:44460.service - OpenSSH per-connection server daemon (139.178.89.65:44460). 
Aug 13 00:20:52.265099 sshd[6116]: Accepted publickey for core from 139.178.89.65 port 44460 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:52.270441 sshd[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:52.283869 systemd-logind[1913]: New session 14 of user core. Aug 13 00:20:52.290065 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:20:52.364447 containerd[1941]: time="2025-08-13T00:20:52.364359170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:52.366053 containerd[1941]: time="2025-08-13T00:20:52.365959238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 13 00:20:52.367060 containerd[1941]: time="2025-08-13T00:20:52.366980582Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:52.371156 containerd[1941]: time="2025-08-13T00:20:52.371008574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:52.373568 containerd[1941]: time="2025-08-13T00:20:52.372647882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.728428782s" Aug 13 00:20:52.373568 containerd[1941]: time="2025-08-13T00:20:52.372708110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:20:52.375407 containerd[1941]: time="2025-08-13T00:20:52.375236570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:20:52.376986 containerd[1941]: time="2025-08-13T00:20:52.376812182Z" level=info msg="CreateContainer within sandbox \"1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:20:52.404481 containerd[1941]: time="2025-08-13T00:20:52.404400842Z" level=info msg="CreateContainer within sandbox \"1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9e1f1bfe993ca596dffa5b7b64adb3c518a4b08ab5e3cdcd388896017e0036a3\"" Aug 13 00:20:52.411273 containerd[1941]: time="2025-08-13T00:20:52.406622594Z" level=info msg="StartContainer for \"9e1f1bfe993ca596dffa5b7b64adb3c518a4b08ab5e3cdcd388896017e0036a3\"" Aug 13 00:20:52.527159 systemd[1]: Started cri-containerd-9e1f1bfe993ca596dffa5b7b64adb3c518a4b08ab5e3cdcd388896017e0036a3.scope - libcontainer container 9e1f1bfe993ca596dffa5b7b64adb3c518a4b08ab5e3cdcd388896017e0036a3. Aug 13 00:20:52.658646 sshd[6116]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:52.674528 systemd[1]: sshd@13-172.31.18.251:22-139.178.89.65:44460.service: Deactivated successfully. 
Aug 13 00:20:52.687863 containerd[1941]: time="2025-08-13T00:20:52.682129888Z" level=info msg="StartContainer for \"9e1f1bfe993ca596dffa5b7b64adb3c518a4b08ab5e3cdcd388896017e0036a3\" returns successfully" Aug 13 00:20:52.688313 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:20:52.694244 systemd-logind[1913]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:20:52.700265 systemd-logind[1913]: Removed session 14. Aug 13 00:20:52.732506 containerd[1941]: time="2025-08-13T00:20:52.732006100Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:52.734566 containerd[1941]: time="2025-08-13T00:20:52.733982992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 00:20:52.742337 containerd[1941]: time="2025-08-13T00:20:52.742188700Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 366.888938ms" Aug 13 00:20:52.742337 containerd[1941]: time="2025-08-13T00:20:52.742277200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:20:52.746754 containerd[1941]: time="2025-08-13T00:20:52.746309788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:20:52.748716 containerd[1941]: time="2025-08-13T00:20:52.748472368Z" level=info msg="CreateContainer within sandbox \"21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:20:52.786262 containerd[1941]: time="2025-08-13T00:20:52.786152224Z" level=info msg="CreateContainer within sandbox \"21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f1c3cfd27b7bea4e4b873797077433ac8432d9b462b9bad97605da7c68e993ea\"" Aug 13 00:20:52.788706 containerd[1941]: time="2025-08-13T00:20:52.788643328Z" level=info msg="StartContainer for \"f1c3cfd27b7bea4e4b873797077433ac8432d9b462b9bad97605da7c68e993ea\"" Aug 13 00:20:52.849182 systemd[1]: Started cri-containerd-f1c3cfd27b7bea4e4b873797077433ac8432d9b462b9bad97605da7c68e993ea.scope - libcontainer container f1c3cfd27b7bea4e4b873797077433ac8432d9b462b9bad97605da7c68e993ea. 
Aug 13 00:20:52.989253 containerd[1941]: time="2025-08-13T00:20:52.989058881Z" level=info msg="StartContainer for \"f1c3cfd27b7bea4e4b873797077433ac8432d9b462b9bad97605da7c68e993ea\" returns successfully" Aug 13 00:20:52.994812 kubelet[3135]: I0813 00:20:52.994661 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-555lv" podStartSLOduration=31.60460851 podStartE2EDuration="41.994613657s" podCreationTimestamp="2025-08-13 00:20:11 +0000 UTC" firstStartedPulling="2025-08-13 00:20:39.253312129 +0000 UTC m=+63.563659997" lastFinishedPulling="2025-08-13 00:20:49.643317276 +0000 UTC m=+73.953665144" observedRunningTime="2025-08-13 00:20:49.943009634 +0000 UTC m=+74.253357538" watchObservedRunningTime="2025-08-13 00:20:52.994613657 +0000 UTC m=+77.304961525" Aug 13 00:20:52.996842 kubelet[3135]: I0813 00:20:52.995721 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65d97cd995-jdgkd" podStartSLOduration=41.209576954 podStartE2EDuration="53.995699849s" podCreationTimestamp="2025-08-13 00:19:59 +0000 UTC" firstStartedPulling="2025-08-13 00:20:39.588155847 +0000 UTC m=+63.898503703" lastFinishedPulling="2025-08-13 00:20:52.37427873 +0000 UTC m=+76.684626598" observedRunningTime="2025-08-13 00:20:52.991668401 +0000 UTC m=+77.302016377" watchObservedRunningTime="2025-08-13 00:20:52.995699849 +0000 UTC m=+77.306047717" Aug 13 00:20:53.999544 kubelet[3135]: I0813 00:20:53.999355 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65d97cd995-7ztd9" podStartSLOduration=42.027003013 podStartE2EDuration="54.99911391s" podCreationTimestamp="2025-08-13 00:19:59 +0000 UTC" firstStartedPulling="2025-08-13 00:20:39.772103175 +0000 UTC m=+64.082451043" lastFinishedPulling="2025-08-13 00:20:52.744214084 +0000 UTC m=+77.054561940" observedRunningTime="2025-08-13 00:20:53.994643694 +0000 UTC m=+78.304991946" watchObservedRunningTime="2025-08-13 00:20:53.99911391 +0000 UTC m=+78.309461982" Aug 13 00:20:54.449269 containerd[1941]: time="2025-08-13T00:20:54.447865708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:54.451985 containerd[1941]: time="2025-08-13T00:20:54.451918252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Aug 13 00:20:54.455786 containerd[1941]: time="2025-08-13T00:20:54.454693612Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:54.464061 containerd[1941]: time="2025-08-13T00:20:54.463998604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:54.467432 containerd[1941]: time="2025-08-13T00:20:54.467197348Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 
1.720820252s" Aug 13 00:20:54.470162 containerd[1941]: time="2025-08-13T00:20:54.470120404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Aug 13 00:20:54.480125 containerd[1941]: time="2025-08-13T00:20:54.479034016Z" level=info msg="CreateContainer within sandbox \"4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 00:20:54.504384 containerd[1941]: time="2025-08-13T00:20:54.503451797Z" level=info msg="CreateContainer within sandbox \"4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"dc2e969be4173db64c9e8c4119b377710d606211535505807ad5f26f7aeac08f\"" Aug 13 00:20:54.507810 containerd[1941]: time="2025-08-13T00:20:54.507021221Z" level=info msg="StartContainer for \"dc2e969be4173db64c9e8c4119b377710d606211535505807ad5f26f7aeac08f\"" Aug 13 00:20:54.611069 systemd[1]: Started cri-containerd-dc2e969be4173db64c9e8c4119b377710d606211535505807ad5f26f7aeac08f.scope - libcontainer container dc2e969be4173db64c9e8c4119b377710d606211535505807ad5f26f7aeac08f. Aug 13 00:20:54.701394 containerd[1941]: time="2025-08-13T00:20:54.700338954Z" level=info msg="StartContainer for \"dc2e969be4173db64c9e8c4119b377710d606211535505807ad5f26f7aeac08f\" returns successfully" Aug 13 00:20:54.988051 kubelet[3135]: I0813 00:20:54.987356 3135 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:20:55.151362 kubelet[3135]: I0813 00:20:55.150881 3135 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 00:20:55.151362 kubelet[3135]: I0813 00:20:55.151071 3135 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 00:20:56.013376 kubelet[3135]: I0813 00:20:56.013274 3135 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wzmtn" podStartSLOduration=28.119531605 podStartE2EDuration="45.013226248s" podCreationTimestamp="2025-08-13 00:20:11 +0000 UTC" firstStartedPulling="2025-08-13 00:20:37.578278777 +0000 UTC m=+61.888626645" lastFinishedPulling="2025-08-13 00:20:54.47197342 +0000 UTC m=+78.782321288" observedRunningTime="2025-08-13 00:20:55.016338747 +0000 UTC m=+79.326686627" watchObservedRunningTime="2025-08-13 00:20:56.013226248 +0000 UTC m=+80.323574128" Aug 13 00:20:57.706324 systemd[1]: Started sshd@14-172.31.18.251:22-139.178.89.65:44476.service - OpenSSH per-connection server daemon (139.178.89.65:44476). Aug 13 00:20:57.956146 sshd[6268]: Accepted publickey for core from 139.178.89.65 port 44476 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:57.960280 sshd[6268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:57.973891 systemd-logind[1913]: New session 15 of user core. Aug 13 00:20:57.982622 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:20:58.283199 sshd[6268]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:58.292921 systemd-logind[1913]: Session 15 logged out. Waiting for processes to exit. 
Aug 13 00:20:58.293482 systemd[1]: sshd@14-172.31.18.251:22-139.178.89.65:44476.service: Deactivated successfully. Aug 13 00:20:58.298477 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:20:58.301142 systemd-logind[1913]: Removed session 15. Aug 13 00:21:03.327312 systemd[1]: Started sshd@15-172.31.18.251:22-139.178.89.65:58712.service - OpenSSH per-connection server daemon (139.178.89.65:58712). Aug 13 00:21:03.538457 sshd[6291]: Accepted publickey for core from 139.178.89.65 port 58712 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:03.544080 sshd[6291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:03.558925 systemd-logind[1913]: New session 16 of user core. Aug 13 00:21:03.564141 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:21:03.938012 sshd[6291]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:03.946100 systemd[1]: sshd@15-172.31.18.251:22-139.178.89.65:58712.service: Deactivated successfully. Aug 13 00:21:03.954069 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:21:03.963078 systemd-logind[1913]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:21:03.983319 systemd[1]: Started sshd@16-172.31.18.251:22-139.178.89.65:58724.service - OpenSSH per-connection server daemon (139.178.89.65:58724). Aug 13 00:21:03.986293 systemd-logind[1913]: Removed session 16. Aug 13 00:21:04.174983 sshd[6304]: Accepted publickey for core from 139.178.89.65 port 58724 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:04.179513 sshd[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:04.192635 systemd-logind[1913]: New session 17 of user core. Aug 13 00:21:04.200263 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 00:21:04.844445 sshd[6304]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:04.852992 systemd-logind[1913]: Session 17 logged out. Waiting for processes to exit. Aug 13 00:21:04.854482 systemd[1]: sshd@16-172.31.18.251:22-139.178.89.65:58724.service: Deactivated successfully. Aug 13 00:21:04.862680 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 00:21:04.873306 systemd-logind[1913]: Removed session 17. Aug 13 00:21:04.882318 systemd[1]: Started sshd@17-172.31.18.251:22-139.178.89.65:58732.service - OpenSSH per-connection server daemon (139.178.89.65:58732). Aug 13 00:21:05.083237 sshd[6314]: Accepted publickey for core from 139.178.89.65 port 58732 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:05.086022 sshd[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:05.095145 systemd-logind[1913]: New session 18 of user core. Aug 13 00:21:05.101126 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 00:21:08.927221 sshd[6314]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:08.938137 systemd[1]: sshd@17-172.31.18.251:22-139.178.89.65:58732.service: Deactivated successfully. Aug 13 00:21:08.948646 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 00:21:08.950001 systemd[1]: session-18.scope: Consumed 1.175s CPU time. Aug 13 00:21:08.957422 systemd-logind[1913]: Session 18 logged out. Waiting for processes to exit. Aug 13 00:21:08.981190 systemd[1]: Started sshd@18-172.31.18.251:22-139.178.89.65:46896.service - OpenSSH per-connection server daemon (139.178.89.65:46896). 
Aug 13 00:21:08.983678 systemd-logind[1913]: Removed session 18. Aug 13 00:21:09.164581 sshd[6333]: Accepted publickey for core from 139.178.89.65 port 46896 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:09.167453 sshd[6333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:09.175355 systemd-logind[1913]: New session 19 of user core. Aug 13 00:21:09.185073 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 00:21:09.721676 sshd[6333]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:09.729195 systemd[1]: sshd@18-172.31.18.251:22-139.178.89.65:46896.service: Deactivated successfully. Aug 13 00:21:09.733161 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 00:21:09.735467 systemd-logind[1913]: Session 19 logged out. Waiting for processes to exit. Aug 13 00:21:09.738529 systemd-logind[1913]: Removed session 19. Aug 13 00:21:09.765299 systemd[1]: Started sshd@19-172.31.18.251:22-139.178.89.65:46906.service - OpenSSH per-connection server daemon (139.178.89.65:46906). Aug 13 00:21:09.940847 sshd[6344]: Accepted publickey for core from 139.178.89.65 port 46906 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:09.944221 sshd[6344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:09.952181 systemd-logind[1913]: New session 20 of user core. Aug 13 00:21:09.961051 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 00:21:10.204178 sshd[6344]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:10.212027 systemd[1]: sshd@19-172.31.18.251:22-139.178.89.65:46906.service: Deactivated successfully. Aug 13 00:21:10.217099 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 00:21:10.219280 systemd-logind[1913]: Session 20 logged out. Waiting for processes to exit. Aug 13 00:21:10.221547 systemd-logind[1913]: Removed session 20. Aug 13 00:21:15.246477 systemd[1]: Started sshd@20-172.31.18.251:22-139.178.89.65:46918.service - OpenSSH per-connection server daemon (139.178.89.65:46918). Aug 13 00:21:15.432843 sshd[6386]: Accepted publickey for core from 139.178.89.65 port 46918 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:15.435683 sshd[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:15.444741 systemd-logind[1913]: New session 21 of user core. Aug 13 00:21:15.454179 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 00:21:15.738104 sshd[6386]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:15.745182 systemd[1]: sshd@20-172.31.18.251:22-139.178.89.65:46918.service: Deactivated successfully. Aug 13 00:21:15.752119 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 00:21:15.756000 systemd-logind[1913]: Session 21 logged out. Waiting for processes to exit. Aug 13 00:21:15.760508 systemd-logind[1913]: Removed session 21. Aug 13 00:21:20.784328 systemd[1]: Started sshd@21-172.31.18.251:22-139.178.89.65:60550.service - OpenSSH per-connection server daemon (139.178.89.65:60550). Aug 13 00:21:21.008415 sshd[6422]: Accepted publickey for core from 139.178.89.65 port 60550 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:21.014380 sshd[6422]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:21.026487 systemd-logind[1913]: New session 22 of user core. 
Aug 13 00:21:21.035143 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 00:21:21.385956 sshd[6422]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:21.396497 systemd[1]: sshd@21-172.31.18.251:22-139.178.89.65:60550.service: Deactivated successfully. Aug 13 00:21:21.406644 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 00:21:21.411804 systemd-logind[1913]: Session 22 logged out. Waiting for processes to exit. Aug 13 00:21:21.414699 systemd-logind[1913]: Removed session 22. Aug 13 00:21:26.428286 systemd[1]: Started sshd@22-172.31.18.251:22-139.178.89.65:60560.service - OpenSSH per-connection server daemon (139.178.89.65:60560). Aug 13 00:21:26.631714 sshd[6480]: Accepted publickey for core from 139.178.89.65 port 60560 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:26.634835 sshd[6480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:26.645270 systemd-logind[1913]: New session 23 of user core. Aug 13 00:21:26.653140 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 00:21:26.946402 sshd[6480]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:26.954747 systemd[1]: sshd@22-172.31.18.251:22-139.178.89.65:60560.service: Deactivated successfully. Aug 13 00:21:26.964206 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 00:21:26.968828 systemd-logind[1913]: Session 23 logged out. Waiting for processes to exit. Aug 13 00:21:26.971086 systemd-logind[1913]: Removed session 23. Aug 13 00:21:32.005004 systemd[1]: Started sshd@23-172.31.18.251:22-139.178.89.65:46064.service - OpenSSH per-connection server daemon (139.178.89.65:46064). Aug 13 00:21:32.178728 sshd[6493]: Accepted publickey for core from 139.178.89.65 port 46064 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:32.182448 sshd[6493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:32.193159 systemd-logind[1913]: New session 24 of user core. Aug 13 00:21:32.201179 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 00:21:32.506694 sshd[6493]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:32.515610 systemd[1]: sshd@23-172.31.18.251:22-139.178.89.65:46064.service: Deactivated successfully. Aug 13 00:21:32.523553 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 00:21:32.528137 systemd-logind[1913]: Session 24 logged out. Waiting for processes to exit. Aug 13 00:21:32.532349 systemd-logind[1913]: Removed session 24. Aug 13 00:21:37.550074 systemd[1]: Started sshd@24-172.31.18.251:22-139.178.89.65:46078.service - OpenSSH per-connection server daemon (139.178.89.65:46078). Aug 13 00:21:37.769924 sshd[6509]: Accepted publickey for core from 139.178.89.65 port 46078 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:37.771122 sshd[6509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:37.784394 systemd-logind[1913]: New session 25 of user core. Aug 13 00:21:37.791224 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 00:21:38.063871 sshd[6509]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:38.069973 systemd-logind[1913]: Session 25 logged out. Waiting for processes to exit. Aug 13 00:21:38.072012 systemd[1]: sshd@24-172.31.18.251:22-139.178.89.65:46078.service: Deactivated successfully. 
Aug 13 00:21:38.078470 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 00:21:38.086034 systemd-logind[1913]: Removed session 25. Aug 13 00:21:38.216548 containerd[1941]: time="2025-08-13T00:21:38.216454594Z" level=info msg="StopPodSandbox for \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\"" Aug 13 00:21:38.373594 containerd[1941]: 2025-08-13 00:21:38.294 [WARNING][6529] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5f0f3a3-68e2-4f84-92cb-c460ed58604c", ResourceVersion:"1202", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad", Pod:"csi-node-driver-wzmtn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0bed2aee68a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:38.373594 containerd[1941]: 2025-08-13 00:21:38.294 [INFO][6529] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Aug 13 00:21:38.373594 containerd[1941]: 2025-08-13 00:21:38.295 [INFO][6529] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring.
ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" iface="eth0" netns="" Aug 13 00:21:38.373594 containerd[1941]: 2025-08-13 00:21:38.295 [INFO][6529] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Aug 13 00:21:38.373594 containerd[1941]: 2025-08-13 00:21:38.295 [INFO][6529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Aug 13 00:21:38.373594 containerd[1941]: 2025-08-13 00:21:38.349 [INFO][6536] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" HandleID="k8s-pod-network.9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Workload="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" Aug 13 00:21:38.373594 containerd[1941]: 2025-08-13 00:21:38.349 [INFO][6536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:38.373594 containerd[1941]: 2025-08-13 00:21:38.349 [INFO][6536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:38.373594 containerd[1941]: 2025-08-13 00:21:38.362 [WARNING][6536] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" HandleID="k8s-pod-network.9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Workload="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" Aug 13 00:21:38.373594 containerd[1941]: 2025-08-13 00:21:38.362 [INFO][6536] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" HandleID="k8s-pod-network.9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Workload="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" Aug 13 00:21:38.373594 containerd[1941]: 2025-08-13 00:21:38.366 [INFO][6536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:38.373594 containerd[1941]: 2025-08-13 00:21:38.369 [INFO][6529] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Aug 13 00:21:38.375722 containerd[1941]: time="2025-08-13T00:21:38.374581955Z" level=info msg="TearDown network for sandbox \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\" successfully" Aug 13 00:21:38.375722 containerd[1941]: time="2025-08-13T00:21:38.374634731Z" level=info msg="StopPodSandbox for \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\" returns successfully" Aug 13 00:21:38.375722 containerd[1941]: time="2025-08-13T00:21:38.375647939Z" level=info msg="RemovePodSandbox for \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\"" Aug 13 00:21:38.376218 containerd[1941]: time="2025-08-13T00:21:38.375695855Z" level=info msg="Forcibly stopping sandbox \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\"" Aug 13 00:21:38.543799 containerd[1941]: 2025-08-13 00:21:38.453 [WARNING][6550] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e5f0f3a3-68e2-4f84-92cb-c460ed58604c", ResourceVersion:"1202", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 11, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"4c825a17c4ff4f470302465d9c6ac397f28d19ba49d6e0959657805b3d2468ad", Pod:"csi-node-driver-wzmtn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0bed2aee68a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:38.543799 containerd[1941]: 2025-08-13 00:21:38.453 [INFO][6550] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Aug 13 00:21:38.543799 containerd[1941]: 2025-08-13 00:21:38.453 [INFO][6550] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" iface="eth0" netns="" Aug 13 00:21:38.543799 containerd[1941]: 2025-08-13 00:21:38.454 [INFO][6550] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Aug 13 00:21:38.543799 containerd[1941]: 2025-08-13 00:21:38.454 [INFO][6550] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Aug 13 00:21:38.543799 containerd[1941]: 2025-08-13 00:21:38.497 [INFO][6558] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" HandleID="k8s-pod-network.9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Workload="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" Aug 13 00:21:38.543799 containerd[1941]: 2025-08-13 00:21:38.497 [INFO][6558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:38.543799 containerd[1941]: 2025-08-13 00:21:38.498 [INFO][6558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:38.543799 containerd[1941]: 2025-08-13 00:21:38.528 [WARNING][6558] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" HandleID="k8s-pod-network.9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Workload="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" Aug 13 00:21:38.543799 containerd[1941]: 2025-08-13 00:21:38.529 [INFO][6558] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" HandleID="k8s-pod-network.9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Workload="ip--172--31--18--251-k8s-csi--node--driver--wzmtn-eth0" Aug 13 00:21:38.543799 containerd[1941]: 2025-08-13 00:21:38.533 [INFO][6558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:38.543799 containerd[1941]: 2025-08-13 00:21:38.538 [INFO][6550] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03" Aug 13 00:21:38.543799 containerd[1941]: time="2025-08-13T00:21:38.543261107Z" level=info msg="TearDown network for sandbox \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\" successfully" Aug 13 00:21:38.552850 containerd[1941]: time="2025-08-13T00:21:38.552218147Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:21:38.552850 containerd[1941]: time="2025-08-13T00:21:38.552751103Z" level=info msg="RemovePodSandbox \"9b39f51893b6f44d08aca138916a470be07306923159d2e63e3304131a95ae03\" returns successfully" Aug 13 00:21:38.555105 containerd[1941]: time="2025-08-13T00:21:38.555058559Z" level=info msg="StopPodSandbox for \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\"" Aug 13 00:21:38.691321 containerd[1941]: 2025-08-13 00:21:38.620 [WARNING][6573] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f1dfa461-4b5f-4ed8-a850-cf604830db07", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff", Pod:"coredns-7c65d6cfc9-p2lsx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali278814795cb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:38.691321 containerd[1941]: 2025-08-13 00:21:38.620 [INFO][6573] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Aug 13 00:21:38.691321 containerd[1941]: 2025-08-13 00:21:38.621 [INFO][6573] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" iface="eth0" netns="" Aug 13 00:21:38.691321 containerd[1941]: 2025-08-13 00:21:38.621 [INFO][6573] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Aug 13 00:21:38.691321 containerd[1941]: 2025-08-13 00:21:38.621 [INFO][6573] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Aug 13 00:21:38.691321 containerd[1941]: 2025-08-13 00:21:38.665 [INFO][6580] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" HandleID="k8s-pod-network.8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" Aug 13 00:21:38.691321 containerd[1941]: 2025-08-13 00:21:38.666 [INFO][6580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:38.691321 containerd[1941]: 2025-08-13 00:21:38.666 [INFO][6580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:21:38.691321 containerd[1941]: 2025-08-13 00:21:38.679 [WARNING][6580] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" HandleID="k8s-pod-network.8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" Aug 13 00:21:38.691321 containerd[1941]: 2025-08-13 00:21:38.679 [INFO][6580] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" HandleID="k8s-pod-network.8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" Aug 13 00:21:38.691321 containerd[1941]: 2025-08-13 00:21:38.682 [INFO][6580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:38.691321 containerd[1941]: 2025-08-13 00:21:38.685 [INFO][6573] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Aug 13 00:21:38.691321 containerd[1941]: time="2025-08-13T00:21:38.688710756Z" level=info msg="TearDown network for sandbox \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\" successfully" Aug 13 00:21:38.691321 containerd[1941]: time="2025-08-13T00:21:38.688749996Z" level=info msg="StopPodSandbox for \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\" returns successfully" Aug 13 00:21:38.691321 containerd[1941]: time="2025-08-13T00:21:38.690507792Z" level=info msg="RemovePodSandbox for \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\"" Aug 13 00:21:38.691321 containerd[1941]: time="2025-08-13T00:21:38.690563016Z" level=info msg="Forcibly stopping sandbox \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\"" Aug 13 00:21:38.850252 containerd[1941]: 2025-08-13 00:21:38.764 [WARNING][6594] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f1dfa461-4b5f-4ed8-a850-cf604830db07", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"f3d4fc3dea1bd7cc020bc5fc36728b6dda9f1a97845a7f80fca984024b5424ff", Pod:"coredns-7c65d6cfc9-p2lsx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali278814795cb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:38.850252 containerd[1941]: 2025-08-13 00:21:38.764 [INFO][6594] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Aug 13 00:21:38.850252 containerd[1941]: 2025-08-13 00:21:38.764 [INFO][6594] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" iface="eth0" netns="" Aug 13 00:21:38.850252 containerd[1941]: 2025-08-13 00:21:38.764 [INFO][6594] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Aug 13 00:21:38.850252 containerd[1941]: 2025-08-13 00:21:38.764 [INFO][6594] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Aug 13 00:21:38.850252 containerd[1941]: 2025-08-13 00:21:38.812 [INFO][6602] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" HandleID="k8s-pod-network.8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" Aug 13 00:21:38.850252 containerd[1941]: 2025-08-13 00:21:38.813 [INFO][6602] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:38.850252 containerd[1941]: 2025-08-13 00:21:38.813 [INFO][6602] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 13 00:21:38.850252 containerd[1941]: 2025-08-13 00:21:38.835 [WARNING][6602] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" HandleID="k8s-pod-network.8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" Aug 13 00:21:38.850252 containerd[1941]: 2025-08-13 00:21:38.835 [INFO][6602] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" HandleID="k8s-pod-network.8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--p2lsx-eth0" Aug 13 00:21:38.850252 containerd[1941]: 2025-08-13 00:21:38.840 [INFO][6602] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:38.850252 containerd[1941]: 2025-08-13 00:21:38.843 [INFO][6594] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b" Aug 13 00:21:38.850252 containerd[1941]: time="2025-08-13T00:21:38.848145457Z" level=info msg="TearDown network for sandbox \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\" successfully" Aug 13 00:21:38.858414 containerd[1941]: time="2025-08-13T00:21:38.858022705Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:21:38.858414 containerd[1941]: time="2025-08-13T00:21:38.858199849Z" level=info msg="RemovePodSandbox \"8a0f1c260396640f753db7b46d87e499733e1dfdf747b38b1560974dec9dd58b\" returns successfully" Aug 13 00:21:38.859914 containerd[1941]: time="2025-08-13T00:21:38.859024081Z" level=info msg="StopPodSandbox for \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\"" Aug 13 00:21:39.085874 containerd[1941]: 2025-08-13 00:21:38.986 [WARNING][6616] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0", GenerateName:"calico-apiserver-65d97cd995-", Namespace:"calico-apiserver", SelfLink:"", UID:"d21d0cba-f46e-4a88-8bd1-db42d9c6b456", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65d97cd995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b", Pod:"calico-apiserver-65d97cd995-jdgkd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5b0069624d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:39.085874 containerd[1941]: 2025-08-13 00:21:38.986 [INFO][6616] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Aug 13 00:21:39.085874 containerd[1941]: 2025-08-13 00:21:38.986 [INFO][6616] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" iface="eth0" netns="" Aug 13 00:21:39.085874 containerd[1941]: 2025-08-13 00:21:38.986 [INFO][6616] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Aug 13 00:21:39.085874 containerd[1941]: 2025-08-13 00:21:38.986 [INFO][6616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Aug 13 00:21:39.085874 containerd[1941]: 2025-08-13 00:21:39.036 [INFO][6623] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" HandleID="k8s-pod-network.f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" Aug 13 00:21:39.085874 containerd[1941]: 2025-08-13 00:21:39.036 [INFO][6623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:39.085874 containerd[1941]: 2025-08-13 00:21:39.036 [INFO][6623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:39.085874 containerd[1941]: 2025-08-13 00:21:39.072 [WARNING][6623] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" HandleID="k8s-pod-network.f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" Aug 13 00:21:39.085874 containerd[1941]: 2025-08-13 00:21:39.072 [INFO][6623] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" HandleID="k8s-pod-network.f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" Aug 13 00:21:39.085874 containerd[1941]: 2025-08-13 00:21:39.075 [INFO][6623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:39.085874 containerd[1941]: 2025-08-13 00:21:39.079 [INFO][6616] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Aug 13 00:21:39.085874 containerd[1941]: time="2025-08-13T00:21:39.084046222Z" level=info msg="TearDown network for sandbox \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\" successfully" Aug 13 00:21:39.085874 containerd[1941]: time="2025-08-13T00:21:39.084087790Z" level=info msg="StopPodSandbox for \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\" returns successfully" Aug 13 00:21:39.089471 containerd[1941]: time="2025-08-13T00:21:39.087422098Z" level=info msg="RemovePodSandbox for \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\"" Aug 13 00:21:39.089471 containerd[1941]: time="2025-08-13T00:21:39.088994398Z" level=info msg="Forcibly stopping sandbox \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\"" Aug 13 00:21:39.304515 containerd[1941]: 2025-08-13 00:21:39.215 [WARNING][6638] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0", GenerateName:"calico-apiserver-65d97cd995-", Namespace:"calico-apiserver", SelfLink:"", UID:"d21d0cba-f46e-4a88-8bd1-db42d9c6b456", ResourceVersion:"1210", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65d97cd995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"1ef3482af57f139559dee4b1ff29fd26f4a6b55deff708d5bb51925b9ec2ff7b", Pod:"calico-apiserver-65d97cd995-jdgkd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib5b0069624d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:39.304515 containerd[1941]: 2025-08-13 00:21:39.215 [INFO][6638] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Aug 13 00:21:39.304515 containerd[1941]: 2025-08-13 00:21:39.215 [INFO][6638] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" iface="eth0" netns="" Aug 13 00:21:39.304515 containerd[1941]: 2025-08-13 00:21:39.215 [INFO][6638] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Aug 13 00:21:39.304515 containerd[1941]: 2025-08-13 00:21:39.215 [INFO][6638] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Aug 13 00:21:39.304515 containerd[1941]: 2025-08-13 00:21:39.282 [INFO][6645] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" HandleID="k8s-pod-network.f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" Aug 13 00:21:39.304515 containerd[1941]: 2025-08-13 00:21:39.282 [INFO][6645] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:39.304515 containerd[1941]: 2025-08-13 00:21:39.282 [INFO][6645] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:39.304515 containerd[1941]: 2025-08-13 00:21:39.295 [WARNING][6645] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" HandleID="k8s-pod-network.f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" Aug 13 00:21:39.304515 containerd[1941]: 2025-08-13 00:21:39.295 [INFO][6645] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" HandleID="k8s-pod-network.f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--jdgkd-eth0" Aug 13 00:21:39.304515 containerd[1941]: 2025-08-13 00:21:39.297 [INFO][6645] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:39.304515 containerd[1941]: 2025-08-13 00:21:39.300 [INFO][6638] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68" Aug 13 00:21:39.306316 containerd[1941]: time="2025-08-13T00:21:39.304567739Z" level=info msg="TearDown network for sandbox \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\" successfully" Aug 13 00:21:39.314475 containerd[1941]: time="2025-08-13T00:21:39.314371955Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:21:39.314612 containerd[1941]: time="2025-08-13T00:21:39.314516183Z" level=info msg="RemovePodSandbox \"f63200f58d3204464ba3f1e301a16c021903ca2459ce40e3e6476f923a0fdd68\" returns successfully" Aug 13 00:21:39.315998 containerd[1941]: time="2025-08-13T00:21:39.315613235Z" level=info msg="StopPodSandbox for \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\"" Aug 13 00:21:39.483043 containerd[1941]: 2025-08-13 00:21:39.387 [WARNING][6659] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0", GenerateName:"calico-apiserver-65d97cd995-", Namespace:"calico-apiserver", SelfLink:"", UID:"73b3a345-ddf0-47d6-a1d3-270371119508", ResourceVersion:"1225", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65d97cd995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70", Pod:"calico-apiserver-65d97cd995-7ztd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad321136fe3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:39.483043 containerd[1941]: 2025-08-13 00:21:39.387 [INFO][6659] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Aug 13 00:21:39.483043 containerd[1941]: 2025-08-13 00:21:39.387 [INFO][6659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" iface="eth0" netns="" Aug 13 00:21:39.483043 containerd[1941]: 2025-08-13 00:21:39.387 [INFO][6659] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Aug 13 00:21:39.483043 containerd[1941]: 2025-08-13 00:21:39.387 [INFO][6659] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Aug 13 00:21:39.483043 containerd[1941]: 2025-08-13 00:21:39.448 [INFO][6666] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" HandleID="k8s-pod-network.2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" Aug 13 00:21:39.483043 containerd[1941]: 2025-08-13 00:21:39.448 [INFO][6666] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:39.483043 containerd[1941]: 2025-08-13 00:21:39.448 [INFO][6666] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:39.483043 containerd[1941]: 2025-08-13 00:21:39.469 [WARNING][6666] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" HandleID="k8s-pod-network.2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" Aug 13 00:21:39.483043 containerd[1941]: 2025-08-13 00:21:39.470 [INFO][6666] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" HandleID="k8s-pod-network.2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" Aug 13 00:21:39.483043 containerd[1941]: 2025-08-13 00:21:39.474 [INFO][6666] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:39.483043 containerd[1941]: 2025-08-13 00:21:39.479 [INFO][6659] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Aug 13 00:21:39.484453 containerd[1941]: time="2025-08-13T00:21:39.483082788Z" level=info msg="TearDown network for sandbox \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\" successfully" Aug 13 00:21:39.484453 containerd[1941]: time="2025-08-13T00:21:39.483120576Z" level=info msg="StopPodSandbox for \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\" returns successfully" Aug 13 00:21:39.484453 containerd[1941]: time="2025-08-13T00:21:39.484055052Z" level=info msg="RemovePodSandbox for \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\"" Aug 13 00:21:39.484453 containerd[1941]: time="2025-08-13T00:21:39.484101564Z" level=info msg="Forcibly stopping sandbox \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\"" Aug 13 00:21:39.635453 containerd[1941]: 2025-08-13 00:21:39.565 [WARNING][6680] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0", GenerateName:"calico-apiserver-65d97cd995-", Namespace:"calico-apiserver", SelfLink:"", UID:"73b3a345-ddf0-47d6-a1d3-270371119508", ResourceVersion:"1225", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65d97cd995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"21e4ea7832ef28270171841d4d142ed74270662a0a0d9ecf37e17611d6019d70", Pod:"calico-apiserver-65d97cd995-7ztd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad321136fe3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:39.635453 containerd[1941]: 2025-08-13 00:21:39.566 [INFO][6680] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Aug 13 00:21:39.635453 containerd[1941]: 2025-08-13 00:21:39.566 [INFO][6680] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" iface="eth0" netns="" Aug 13 00:21:39.635453 containerd[1941]: 2025-08-13 00:21:39.566 [INFO][6680] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Aug 13 00:21:39.635453 containerd[1941]: 2025-08-13 00:21:39.566 [INFO][6680] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Aug 13 00:21:39.635453 containerd[1941]: 2025-08-13 00:21:39.612 [INFO][6687] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" HandleID="k8s-pod-network.2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" Aug 13 00:21:39.635453 containerd[1941]: 2025-08-13 00:21:39.613 [INFO][6687] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:39.635453 containerd[1941]: 2025-08-13 00:21:39.613 [INFO][6687] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:39.635453 containerd[1941]: 2025-08-13 00:21:39.626 [WARNING][6687] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" HandleID="k8s-pod-network.2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" Aug 13 00:21:39.635453 containerd[1941]: 2025-08-13 00:21:39.627 [INFO][6687] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" HandleID="k8s-pod-network.2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Workload="ip--172--31--18--251-k8s-calico--apiserver--65d97cd995--7ztd9-eth0" Aug 13 00:21:39.635453 containerd[1941]: 2025-08-13 00:21:39.629 [INFO][6687] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:39.635453 containerd[1941]: 2025-08-13 00:21:39.632 [INFO][6680] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1" Aug 13 00:21:39.636547 containerd[1941]: time="2025-08-13T00:21:39.636198637Z" level=info msg="TearDown network for sandbox \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\" successfully" Aug 13 00:21:39.669046 containerd[1941]: time="2025-08-13T00:21:39.668966305Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:21:39.669618 containerd[1941]: time="2025-08-13T00:21:39.669091129Z" level=info msg="RemovePodSandbox \"2eb16bb9c9db1c689632cdb2cde28a5654226b36f0be4a437f3796f10c1246c1\" returns successfully" Aug 13 00:21:39.670565 containerd[1941]: time="2025-08-13T00:21:39.670168513Z" level=info msg="StopPodSandbox for \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\"" Aug 13 00:21:39.830913 containerd[1941]: 2025-08-13 00:21:39.750 [WARNING][6701] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0", GenerateName:"calico-kube-controllers-6db5fd67fb-", Namespace:"calico-system", SelfLink:"", UID:"594d665a-07d4-46f9-938a-94309ad04257", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6db5fd67fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606", Pod:"calico-kube-controllers-6db5fd67fb-ph246", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali70d1429311e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:39.830913 containerd[1941]: 2025-08-13 00:21:39.751 [INFO][6701] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Aug 13 00:21:39.830913 containerd[1941]: 2025-08-13 00:21:39.751 [INFO][6701] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" iface="eth0" netns="" Aug 13 00:21:39.830913 containerd[1941]: 2025-08-13 00:21:39.751 [INFO][6701] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Aug 13 00:21:39.830913 containerd[1941]: 2025-08-13 00:21:39.751 [INFO][6701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Aug 13 00:21:39.830913 containerd[1941]: 2025-08-13 00:21:39.801 [INFO][6708] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" HandleID="k8s-pod-network.a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Workload="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" Aug 13 00:21:39.830913 containerd[1941]: 2025-08-13 00:21:39.802 [INFO][6708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:39.830913 containerd[1941]: 2025-08-13 00:21:39.802 [INFO][6708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:39.830913 containerd[1941]: 2025-08-13 00:21:39.818 [WARNING][6708] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" HandleID="k8s-pod-network.a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Workload="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" Aug 13 00:21:39.830913 containerd[1941]: 2025-08-13 00:21:39.818 [INFO][6708] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" HandleID="k8s-pod-network.a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Workload="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" Aug 13 00:21:39.830913 containerd[1941]: 2025-08-13 00:21:39.822 [INFO][6708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:39.830913 containerd[1941]: 2025-08-13 00:21:39.826 [INFO][6701] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Aug 13 00:21:39.830913 containerd[1941]: time="2025-08-13T00:21:39.829456214Z" level=info msg="TearDown network for sandbox \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\" successfully" Aug 13 00:21:39.830913 containerd[1941]: time="2025-08-13T00:21:39.829494506Z" level=info msg="StopPodSandbox for \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\" returns successfully" Aug 13 00:21:39.832566 containerd[1941]: time="2025-08-13T00:21:39.832461758Z" level=info msg="RemovePodSandbox for \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\"" Aug 13 00:21:39.833376 containerd[1941]: time="2025-08-13T00:21:39.832894778Z" level=info msg="Forcibly stopping sandbox \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\"" Aug 13 00:21:39.991309 containerd[1941]: 2025-08-13 00:21:39.904 [WARNING][6723] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0", GenerateName:"calico-kube-controllers-6db5fd67fb-", Namespace:"calico-system", SelfLink:"", UID:"594d665a-07d4-46f9-938a-94309ad04257", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6db5fd67fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"08f75636be172eea21e6e52f3e8c0600e36f0aa48bde389b804e0c7f76f73606", Pod:"calico-kube-controllers-6db5fd67fb-ph246", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali70d1429311e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:39.991309 containerd[1941]: 2025-08-13 00:21:39.904 [INFO][6723] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Aug 13 00:21:39.991309 containerd[1941]: 2025-08-13 00:21:39.904 [INFO][6723] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" iface="eth0" netns="" Aug 13 00:21:39.991309 containerd[1941]: 2025-08-13 00:21:39.904 [INFO][6723] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Aug 13 00:21:39.991309 containerd[1941]: 2025-08-13 00:21:39.904 [INFO][6723] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Aug 13 00:21:39.991309 containerd[1941]: 2025-08-13 00:21:39.960 [INFO][6730] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" HandleID="k8s-pod-network.a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Workload="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" Aug 13 00:21:39.991309 containerd[1941]: 2025-08-13 00:21:39.961 [INFO][6730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:39.991309 containerd[1941]: 2025-08-13 00:21:39.961 [INFO][6730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:39.991309 containerd[1941]: 2025-08-13 00:21:39.979 [WARNING][6730] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" HandleID="k8s-pod-network.a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Workload="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" Aug 13 00:21:39.991309 containerd[1941]: 2025-08-13 00:21:39.979 [INFO][6730] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" HandleID="k8s-pod-network.a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Workload="ip--172--31--18--251-k8s-calico--kube--controllers--6db5fd67fb--ph246-eth0" Aug 13 00:21:39.991309 containerd[1941]: 2025-08-13 00:21:39.983 [INFO][6730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:39.991309 containerd[1941]: 2025-08-13 00:21:39.986 [INFO][6723] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291" Aug 13 00:21:39.992651 containerd[1941]: time="2025-08-13T00:21:39.991970595Z" level=info msg="TearDown network for sandbox \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\" successfully" Aug 13 00:21:40.001474 containerd[1941]: time="2025-08-13T00:21:40.000951167Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:21:40.001474 containerd[1941]: time="2025-08-13T00:21:40.001066607Z" level=info msg="RemovePodSandbox \"a3500b8719c534c02755e786c67f75fd5929fa2de48aa1eca5a7093897d01291\" returns successfully" Aug 13 00:21:40.003035 containerd[1941]: time="2025-08-13T00:21:40.002411975Z" level=info msg="StopPodSandbox for \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\"" Aug 13 00:21:40.166255 containerd[1941]: 2025-08-13 00:21:40.082 [WARNING][6745] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3a2c1391-3856-407e-9f32-3dffc0012695", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb", Pod:"coredns-7c65d6cfc9-6mjjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e4ec14e916", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:40.166255 containerd[1941]: 2025-08-13 00:21:40.083 [INFO][6745] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Aug 13 00:21:40.166255 containerd[1941]: 2025-08-13 00:21:40.083 [INFO][6745] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" iface="eth0" netns="" Aug 13 00:21:40.166255 containerd[1941]: 2025-08-13 00:21:40.084 [INFO][6745] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Aug 13 00:21:40.166255 containerd[1941]: 2025-08-13 00:21:40.084 [INFO][6745] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Aug 13 00:21:40.166255 containerd[1941]: 2025-08-13 00:21:40.139 [INFO][6753] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" HandleID="k8s-pod-network.58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" Aug 13 00:21:40.166255 containerd[1941]: 2025-08-13 00:21:40.140 [INFO][6753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:40.166255 containerd[1941]: 2025-08-13 00:21:40.140 [INFO][6753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:21:40.166255 containerd[1941]: 2025-08-13 00:21:40.153 [WARNING][6753] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" HandleID="k8s-pod-network.58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" Aug 13 00:21:40.166255 containerd[1941]: 2025-08-13 00:21:40.154 [INFO][6753] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" HandleID="k8s-pod-network.58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" Aug 13 00:21:40.166255 containerd[1941]: 2025-08-13 00:21:40.158 [INFO][6753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:40.166255 containerd[1941]: 2025-08-13 00:21:40.161 [INFO][6745] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Aug 13 00:21:40.167178 containerd[1941]: time="2025-08-13T00:21:40.166350935Z" level=info msg="TearDown network for sandbox \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\" successfully" Aug 13 00:21:40.167178 containerd[1941]: time="2025-08-13T00:21:40.166393943Z" level=info msg="StopPodSandbox for \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\" returns successfully" Aug 13 00:21:40.167819 containerd[1941]: time="2025-08-13T00:21:40.167411207Z" level=info msg="RemovePodSandbox for \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\"" Aug 13 00:21:40.167819 containerd[1941]: time="2025-08-13T00:21:40.167508179Z" level=info msg="Forcibly stopping sandbox \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\"" Aug 13 00:21:40.329077 containerd[1941]: 2025-08-13 00:21:40.257 [WARNING][6768] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3a2c1391-3856-407e-9f32-3dffc0012695", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"eb20d88dde4551cc14d00a9a0dcf98d3fa26631ce6aeda5dc483f29cd43c23cb", Pod:"coredns-7c65d6cfc9-6mjjr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5e4ec14e916", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:40.329077 containerd[1941]: 2025-08-13 00:21:40.257 [INFO][6768] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Aug 13 00:21:40.329077 containerd[1941]: 2025-08-13 00:21:40.257 [INFO][6768] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" iface="eth0" netns="" Aug 13 00:21:40.329077 containerd[1941]: 2025-08-13 00:21:40.257 [INFO][6768] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Aug 13 00:21:40.329077 containerd[1941]: 2025-08-13 00:21:40.257 [INFO][6768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Aug 13 00:21:40.329077 containerd[1941]: 2025-08-13 00:21:40.301 [INFO][6775] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" HandleID="k8s-pod-network.58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" Aug 13 00:21:40.329077 containerd[1941]: 2025-08-13 00:21:40.302 [INFO][6775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:40.329077 containerd[1941]: 2025-08-13 00:21:40.302 [INFO][6775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:21:40.329077 containerd[1941]: 2025-08-13 00:21:40.318 [WARNING][6775] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" HandleID="k8s-pod-network.58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" Aug 13 00:21:40.329077 containerd[1941]: 2025-08-13 00:21:40.318 [INFO][6775] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" HandleID="k8s-pod-network.58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Workload="ip--172--31--18--251-k8s-coredns--7c65d6cfc9--6mjjr-eth0" Aug 13 00:21:40.329077 containerd[1941]: 2025-08-13 00:21:40.321 [INFO][6775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:40.329077 containerd[1941]: 2025-08-13 00:21:40.325 [INFO][6768] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b" Aug 13 00:21:40.330936 containerd[1941]: time="2025-08-13T00:21:40.329154420Z" level=info msg="TearDown network for sandbox \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\" successfully" Aug 13 00:21:40.337160 containerd[1941]: time="2025-08-13T00:21:40.336893520Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:21:40.337160 containerd[1941]: time="2025-08-13T00:21:40.337015224Z" level=info msg="RemovePodSandbox \"58648675600837277c1f9fb96b2e6c0a0a03c6c3a0ead4d53eab683c30dace6b\" returns successfully" Aug 13 00:21:40.338138 containerd[1941]: time="2025-08-13T00:21:40.337641648Z" level=info msg="StopPodSandbox for \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\"" Aug 13 00:21:40.516309 containerd[1941]: 2025-08-13 00:21:40.431 [WARNING][6790] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"0399420f-0a76-4a22-be47-c06978fb8813", ResourceVersion:"1355", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131", Pod:"goldmane-58fd7646b9-555lv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3f73ef32eb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:40.516309 containerd[1941]: 2025-08-13 00:21:40.431 [INFO][6790] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Aug 13 00:21:40.516309 containerd[1941]: 2025-08-13 00:21:40.431 [INFO][6790] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" iface="eth0" netns="" Aug 13 00:21:40.516309 containerd[1941]: 2025-08-13 00:21:40.431 [INFO][6790] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Aug 13 00:21:40.516309 containerd[1941]: 2025-08-13 00:21:40.431 [INFO][6790] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Aug 13 00:21:40.516309 containerd[1941]: 2025-08-13 00:21:40.481 [INFO][6798] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" HandleID="k8s-pod-network.ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Workload="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" Aug 13 00:21:40.516309 containerd[1941]: 2025-08-13 00:21:40.482 [INFO][6798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:40.516309 containerd[1941]: 2025-08-13 00:21:40.482 [INFO][6798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:40.516309 containerd[1941]: 2025-08-13 00:21:40.502 [WARNING][6798] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" HandleID="k8s-pod-network.ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Workload="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" Aug 13 00:21:40.516309 containerd[1941]: 2025-08-13 00:21:40.502 [INFO][6798] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" HandleID="k8s-pod-network.ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Workload="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" Aug 13 00:21:40.516309 containerd[1941]: 2025-08-13 00:21:40.505 [INFO][6798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:40.516309 containerd[1941]: 2025-08-13 00:21:40.510 [INFO][6790] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Aug 13 00:21:40.516309 containerd[1941]: time="2025-08-13T00:21:40.516236773Z" level=info msg="TearDown network for sandbox \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\" successfully" Aug 13 00:21:40.516309 containerd[1941]: time="2025-08-13T00:21:40.516276121Z" level=info msg="StopPodSandbox for \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\" returns successfully" Aug 13 00:21:40.519330 containerd[1941]: time="2025-08-13T00:21:40.518497957Z" level=info msg="RemovePodSandbox for \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\"" Aug 13 00:21:40.519330 containerd[1941]: time="2025-08-13T00:21:40.518544673Z" level=info msg="Forcibly stopping sandbox \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\"" Aug 13 00:21:40.667716 containerd[1941]: 2025-08-13 00:21:40.598 [WARNING][6812] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"0399420f-0a76-4a22-be47-c06978fb8813", ResourceVersion:"1355", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-251", ContainerID:"449cbf60fe20e80100f16917fea692462dfc5f60625ab9c3dad5c4691c339131", Pod:"goldmane-58fd7646b9-555lv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3f73ef32eb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:21:40.667716 containerd[1941]: 2025-08-13 00:21:40.599 [INFO][6812] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Aug 13 00:21:40.667716 containerd[1941]: 2025-08-13 00:21:40.599 [INFO][6812] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" iface="eth0" netns="" Aug 13 00:21:40.667716 containerd[1941]: 2025-08-13 00:21:40.599 [INFO][6812] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Aug 13 00:21:40.667716 containerd[1941]: 2025-08-13 00:21:40.599 [INFO][6812] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Aug 13 00:21:40.667716 containerd[1941]: 2025-08-13 00:21:40.645 [INFO][6820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" HandleID="k8s-pod-network.ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Workload="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" Aug 13 00:21:40.667716 containerd[1941]: 2025-08-13 00:21:40.646 [INFO][6820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:21:40.667716 containerd[1941]: 2025-08-13 00:21:40.646 [INFO][6820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:21:40.667716 containerd[1941]: 2025-08-13 00:21:40.659 [WARNING][6820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" HandleID="k8s-pod-network.ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Workload="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" Aug 13 00:21:40.667716 containerd[1941]: 2025-08-13 00:21:40.659 [INFO][6820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" HandleID="k8s-pod-network.ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Workload="ip--172--31--18--251-k8s-goldmane--58fd7646b9--555lv-eth0" Aug 13 00:21:40.667716 containerd[1941]: 2025-08-13 00:21:40.661 [INFO][6820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:21:40.667716 containerd[1941]: 2025-08-13 00:21:40.665 [INFO][6812] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c" Aug 13 00:21:40.669582 containerd[1941]: time="2025-08-13T00:21:40.667910654Z" level=info msg="TearDown network for sandbox \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\" successfully" Aug 13 00:21:40.676604 containerd[1941]: time="2025-08-13T00:21:40.676469846Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:21:40.676794 containerd[1941]: time="2025-08-13T00:21:40.676649162Z" level=info msg="RemovePodSandbox \"ea52fa7cc2462548564fcab9e698b8d7caae524640fedfaaab494cd954176c3c\" returns successfully" Aug 13 00:21:43.106546 systemd[1]: Started sshd@25-172.31.18.251:22-139.178.89.65:59688.service - OpenSSH per-connection server daemon (139.178.89.65:59688). Aug 13 00:21:43.300051 sshd[6872]: Accepted publickey for core from 139.178.89.65 port 59688 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:21:43.305534 sshd[6872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:21:43.322076 systemd-logind[1913]: New session 26 of user core. Aug 13 00:21:43.328461 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 00:21:43.611266 sshd[6872]: pam_unix(sshd:session): session closed for user core Aug 13 00:21:43.618370 systemd[1]: sshd@25-172.31.18.251:22-139.178.89.65:59688.service: Deactivated successfully. Aug 13 00:21:43.624549 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 00:21:43.628175 systemd-logind[1913]: Session 26 logged out. Waiting for processes to exit. Aug 13 00:21:43.634241 systemd-logind[1913]: Removed session 26. Aug 13 00:21:58.574153 systemd[1]: cri-containerd-023f593a9ebc35a6b142dceca38c6618ad6c6b1043b095cb6a364c448f273f86.scope: Deactivated successfully. Aug 13 00:21:58.575993 systemd[1]: cri-containerd-023f593a9ebc35a6b142dceca38c6618ad6c6b1043b095cb6a364c448f273f86.scope: Consumed 27.688s CPU time. Aug 13 00:21:58.624945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-023f593a9ebc35a6b142dceca38c6618ad6c6b1043b095cb6a364c448f273f86-rootfs.mount: Deactivated successfully. 
Aug 13 00:21:58.646002 containerd[1941]: time="2025-08-13T00:21:58.620747203Z" level=info msg="shim disconnected" id=023f593a9ebc35a6b142dceca38c6618ad6c6b1043b095cb6a364c448f273f86 namespace=k8s.io Aug 13 00:21:58.646002 containerd[1941]: time="2025-08-13T00:21:58.645988099Z" level=warning msg="cleaning up after shim disconnected" id=023f593a9ebc35a6b142dceca38c6618ad6c6b1043b095cb6a364c448f273f86 namespace=k8s.io Aug 13 00:21:58.646654 containerd[1941]: time="2025-08-13T00:21:58.646022827Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:58.779600 systemd[1]: cri-containerd-8c068fb99f2dced9a2c61fe87dd7b2867c792b75802ed6b988ce6b646b128fc5.scope: Deactivated successfully. Aug 13 00:21:58.780133 systemd[1]: cri-containerd-8c068fb99f2dced9a2c61fe87dd7b2867c792b75802ed6b988ce6b646b128fc5.scope: Consumed 5.135s CPU time, 19.6M memory peak, 0B memory swap peak. Aug 13 00:21:58.835821 containerd[1941]: time="2025-08-13T00:21:58.832635620Z" level=info msg="shim disconnected" id=8c068fb99f2dced9a2c61fe87dd7b2867c792b75802ed6b988ce6b646b128fc5 namespace=k8s.io Aug 13 00:21:58.835821 containerd[1941]: time="2025-08-13T00:21:58.832717688Z" level=warning msg="cleaning up after shim disconnected" id=8c068fb99f2dced9a2c61fe87dd7b2867c792b75802ed6b988ce6b646b128fc5 namespace=k8s.io Aug 13 00:21:58.835821 containerd[1941]: time="2025-08-13T00:21:58.832739504Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:21:58.839313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c068fb99f2dced9a2c61fe87dd7b2867c792b75802ed6b988ce6b646b128fc5-rootfs.mount: Deactivated successfully. Aug 13 00:21:59.237610 kubelet[3135]: E0813 00:21:59.237238 3135 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-251?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Aug 13 00:21:59.270853 kubelet[3135]: I0813 00:21:59.270741 3135 scope.go:117] "RemoveContainer" containerID="8c068fb99f2dced9a2c61fe87dd7b2867c792b75802ed6b988ce6b646b128fc5" Aug 13 00:21:59.275267 kubelet[3135]: I0813 00:21:59.274286 3135 scope.go:117] "RemoveContainer" containerID="023f593a9ebc35a6b142dceca38c6618ad6c6b1043b095cb6a364c448f273f86" Aug 13 00:21:59.278135 containerd[1941]: time="2025-08-13T00:21:59.278078790Z" level=info msg="CreateContainer within sandbox \"b1ea712b4be93aa5da8e157f11c4675e52601605bd46aaf7b9e92e2c76aa52be\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Aug 13 00:21:59.307975 containerd[1941]: time="2025-08-13T00:21:59.307349478Z" level=info msg="CreateContainer within sandbox \"b1ea712b4be93aa5da8e157f11c4675e52601605bd46aaf7b9e92e2c76aa52be\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d7611240b1ed2720d2558ae005e501813efb80c225a654327a4c5a750e51e4ff\"" Aug 13 00:21:59.310468 containerd[1941]: time="2025-08-13T00:21:59.310214059Z" level=info msg="StartContainer for \"d7611240b1ed2720d2558ae005e501813efb80c225a654327a4c5a750e51e4ff\"" Aug 13 00:21:59.361389 containerd[1941]: time="2025-08-13T00:21:59.361220707Z" level=info msg="CreateContainer within sandbox \"9baf0beff4028447d2109af2eb9e2ef182cd41084d38eee9f1553c16cb386c00\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Aug 13 00:21:59.373074 systemd[1]: Started cri-containerd-d7611240b1ed2720d2558ae005e501813efb80c225a654327a4c5a750e51e4ff.scope - libcontainer container 
d7611240b1ed2720d2558ae005e501813efb80c225a654327a4c5a750e51e4ff. Aug 13 00:21:59.429136 containerd[1941]: time="2025-08-13T00:21:59.428931607Z" level=info msg="CreateContainer within sandbox \"9baf0beff4028447d2109af2eb9e2ef182cd41084d38eee9f1553c16cb386c00\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"09822a47a39a2d487f9d07c9a165668b3af1be68ce1af4cd05cae6369d52c4aa\"" Aug 13 00:21:59.436187 containerd[1941]: time="2025-08-13T00:21:59.435908407Z" level=info msg="StartContainer for \"09822a47a39a2d487f9d07c9a165668b3af1be68ce1af4cd05cae6369d52c4aa\"" Aug 13 00:21:59.451987 containerd[1941]: time="2025-08-13T00:21:59.451098511Z" level=info msg="StartContainer for \"d7611240b1ed2720d2558ae005e501813efb80c225a654327a4c5a750e51e4ff\" returns successfully" Aug 13 00:21:59.495069 systemd[1]: Started cri-containerd-09822a47a39a2d487f9d07c9a165668b3af1be68ce1af4cd05cae6369d52c4aa.scope - libcontainer container 09822a47a39a2d487f9d07c9a165668b3af1be68ce1af4cd05cae6369d52c4aa. Aug 13 00:21:59.575893 containerd[1941]: time="2025-08-13T00:21:59.575824556Z" level=info msg="StartContainer for \"09822a47a39a2d487f9d07c9a165668b3af1be68ce1af4cd05cae6369d52c4aa\" returns successfully" Aug 13 00:22:02.011716 systemd[1]: cri-containerd-87f05eb8047ff116df02dc4d7a702476844f9bf9f5ea7b310c8f7e1c4c444389.scope: Deactivated successfully. Aug 13 00:22:02.013717 systemd[1]: cri-containerd-87f05eb8047ff116df02dc4d7a702476844f9bf9f5ea7b310c8f7e1c4c444389.scope: Consumed 3.647s CPU time, 16.2M memory peak, 0B memory swap peak. Aug 13 00:22:02.064974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87f05eb8047ff116df02dc4d7a702476844f9bf9f5ea7b310c8f7e1c4c444389-rootfs.mount: Deactivated successfully. Aug 13 00:22:02.066263 containerd[1941]: time="2025-08-13T00:22:02.065945468Z" level=info msg="shim disconnected" id=87f05eb8047ff116df02dc4d7a702476844f9bf9f5ea7b310c8f7e1c4c444389 namespace=k8s.io Aug 13 00:22:02.066263 containerd[1941]: time="2025-08-13T00:22:02.066058208Z" level=warning msg="cleaning up after shim disconnected" id=87f05eb8047ff116df02dc4d7a702476844f9bf9f5ea7b310c8f7e1c4c444389 namespace=k8s.io Aug 13 00:22:02.066263 containerd[1941]: time="2025-08-13T00:22:02.066103676Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:22:02.300601 kubelet[3135]: I0813 00:22:02.300343 3135 scope.go:117] "RemoveContainer" containerID="87f05eb8047ff116df02dc4d7a702476844f9bf9f5ea7b310c8f7e1c4c444389" Aug 13 00:22:02.308252 containerd[1941]: time="2025-08-13T00:22:02.307589793Z" level=info msg="CreateContainer within sandbox \"354fed5e6eca83a4168d60f496e8493d4e29ad4721dad1e2d832a4cfa214b02a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Aug 13 00:22:02.336521 containerd[1941]: time="2025-08-13T00:22:02.336408226Z" level=info msg="CreateContainer within sandbox \"354fed5e6eca83a4168d60f496e8493d4e29ad4721dad1e2d832a4cfa214b02a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"67a4c2de916106672b14ab190a89bb31d277539e42403ef0de78ba707b270aa8\"" Aug 13 00:22:02.340547 containerd[1941]: time="2025-08-13T00:22:02.338462806Z" level=info msg="StartContainer for \"67a4c2de916106672b14ab190a89bb31d277539e42403ef0de78ba707b270aa8\"" Aug 13 00:22:02.411853 systemd[1]: Started cri-containerd-67a4c2de916106672b14ab190a89bb31d277539e42403ef0de78ba707b270aa8.scope - libcontainer container 67a4c2de916106672b14ab190a89bb31d277539e42403ef0de78ba707b270aa8. 
Aug 13 00:22:02.492687 containerd[1941]: time="2025-08-13T00:22:02.492612322Z" level=info msg="StartContainer for \"67a4c2de916106672b14ab190a89bb31d277539e42403ef0de78ba707b270aa8\" returns successfully" Aug 13 00:22:09.238016 kubelet[3135]: E0813 00:22:09.237925 3135 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.251:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-251?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Aug 13 00:22:10.922544 systemd[1]: cri-containerd-d7611240b1ed2720d2558ae005e501813efb80c225a654327a4c5a750e51e4ff.scope: Deactivated successfully. Aug 13 00:22:10.964059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7611240b1ed2720d2558ae005e501813efb80c225a654327a4c5a750e51e4ff-rootfs.mount: Deactivated successfully. Aug 13 00:22:10.975942 containerd[1941]: time="2025-08-13T00:22:10.975847220Z" level=info msg="shim disconnected" id=d7611240b1ed2720d2558ae005e501813efb80c225a654327a4c5a750e51e4ff namespace=k8s.io Aug 13 00:22:10.975942 containerd[1941]: time="2025-08-13T00:22:10.975936980Z" level=warning msg="cleaning up after shim disconnected" id=d7611240b1ed2720d2558ae005e501813efb80c225a654327a4c5a750e51e4ff namespace=k8s.io Aug 13 00:22:10.976933 containerd[1941]: time="2025-08-13T00:22:10.975960116Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:22:11.335632 kubelet[3135]: I0813 00:22:11.335480 3135 scope.go:117] "RemoveContainer" containerID="023f593a9ebc35a6b142dceca38c6618ad6c6b1043b095cb6a364c448f273f86" Aug 13 00:22:11.336257 kubelet[3135]: I0813 00:22:11.336008 3135 scope.go:117] "RemoveContainer" containerID="d7611240b1ed2720d2558ae005e501813efb80c225a654327a4c5a750e51e4ff" Aug 13 00:22:11.343078 kubelet[3135]: E0813 00:22:11.342971 3135 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-5bf8dfcb4-v5vss_tigera-operator(7b5fb2ea-e517-41ef-ba7c-98f08d65dd7b)\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-v5vss" podUID="7b5fb2ea-e517-41ef-ba7c-98f08d65dd7b" Aug 13 00:22:11.343364 containerd[1941]: time="2025-08-13T00:22:11.342605910Z" level=info msg="RemoveContainer for \"023f593a9ebc35a6b142dceca38c6618ad6c6b1043b095cb6a364c448f273f86\"" Aug 13 00:22:11.351491 containerd[1941]: time="2025-08-13T00:22:11.351419262Z" level=info msg="RemoveContainer for \"023f593a9ebc35a6b142dceca38c6618ad6c6b1043b095cb6a364c448f273f86\" returns successfully"