Aug 13 00:18:34.236133 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Aug 13 00:18:34.240265 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025
Aug 13 00:18:34.240315 kernel: KASLR disabled due to lack of seed
Aug 13 00:18:34.240335 kernel: efi: EFI v2.7 by EDK II
Aug 13 00:18:34.240352 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7affea98 MEMRESERVE=0x7852ee18
Aug 13 00:18:34.240368 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:18:34.240386 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Aug 13 00:18:34.240402 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Aug 13 00:18:34.240418 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Aug 13 00:18:34.240433 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Aug 13 00:18:34.240454 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Aug 13 00:18:34.240470 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Aug 13 00:18:34.240486 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Aug 13 00:18:34.240502 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Aug 13 00:18:34.240521 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Aug 13 00:18:34.240541 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Aug 13 00:18:34.240559 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Aug 13 00:18:34.240575 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Aug 13 00:18:34.240591 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Aug 13 00:18:34.240608 kernel: printk: bootconsole [uart0] enabled
Aug 13 00:18:34.240624 kernel: NUMA: Failed to initialise from firmware
Aug 13 00:18:34.240641 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Aug 13 00:18:34.240658 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Aug 13 00:18:34.240674 kernel: Zone ranges:
Aug 13 00:18:34.240690 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Aug 13 00:18:34.240707 kernel: DMA32 empty
Aug 13 00:18:34.240727 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Aug 13 00:18:34.240744 kernel: Movable zone start for each node
Aug 13 00:18:34.240760 kernel: Early memory node ranges
Aug 13 00:18:34.240777 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Aug 13 00:18:34.240793 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Aug 13 00:18:34.240809 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Aug 13 00:18:34.240826 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Aug 13 00:18:34.240842 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Aug 13 00:18:34.240859 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Aug 13 00:18:34.240875 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Aug 13 00:18:34.240891 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Aug 13 00:18:34.240908 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Aug 13 00:18:34.240928 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Aug 13 00:18:34.240946 kernel: psci: probing for conduit method from ACPI.
Aug 13 00:18:34.240970 kernel: psci: PSCIv1.0 detected in firmware.
Aug 13 00:18:34.240987 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 13 00:18:34.241005 kernel: psci: Trusted OS migration not required
Aug 13 00:18:34.241026 kernel: psci: SMC Calling Convention v1.1
Aug 13 00:18:34.241045 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Aug 13 00:18:34.241063 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Aug 13 00:18:34.241080 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Aug 13 00:18:34.241098 kernel: pcpu-alloc: [0] 0 [0] 1
Aug 13 00:18:34.241116 kernel: Detected PIPT I-cache on CPU0
Aug 13 00:18:34.241133 kernel: CPU features: detected: GIC system register CPU interface
Aug 13 00:18:34.241151 kernel: CPU features: detected: Spectre-v2
Aug 13 00:18:34.241190 kernel: CPU features: detected: Spectre-v3a
Aug 13 00:18:34.241209 kernel: CPU features: detected: Spectre-BHB
Aug 13 00:18:34.241226 kernel: CPU features: detected: ARM erratum 1742098
Aug 13 00:18:34.241250 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Aug 13 00:18:34.241268 kernel: alternatives: applying boot alternatives
Aug 13 00:18:34.241288 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:18:34.241307 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:18:34.241324 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:18:34.241342 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:18:34.241360 kernel: Fallback order for Node 0: 0
Aug 13 00:18:34.241378 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Aug 13 00:18:34.241395 kernel: Policy zone: Normal
Aug 13 00:18:34.241412 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:18:34.241430 kernel: software IO TLB: area num 2.
Aug 13 00:18:34.241452 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Aug 13 00:18:34.241470 kernel: Memory: 3820088K/4030464K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 210376K reserved, 0K cma-reserved)
Aug 13 00:18:34.241488 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 00:18:34.241505 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:18:34.241524 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:18:34.241542 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 00:18:34.241560 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:18:34.241578 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:18:34.241595 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:18:34.241613 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 00:18:34.241630 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 13 00:18:34.241652 kernel: GICv3: 96 SPIs implemented
Aug 13 00:18:34.241670 kernel: GICv3: 0 Extended SPIs implemented
Aug 13 00:18:34.241688 kernel: Root IRQ handler: gic_handle_irq
Aug 13 00:18:34.241705 kernel: GICv3: GICv3 features: 16 PPIs
Aug 13 00:18:34.241722 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Aug 13 00:18:34.241740 kernel: ITS [mem 0x10080000-0x1009ffff]
Aug 13 00:18:34.241757 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Aug 13 00:18:34.241775 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Aug 13 00:18:34.241793 kernel: GICv3: using LPI property table @0x00000004000d0000
Aug 13 00:18:34.241810 kernel: ITS: Using hypervisor restricted LPI range [128]
Aug 13 00:18:34.241828 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Aug 13 00:18:34.241845 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:18:34.241868 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Aug 13 00:18:34.241885 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Aug 13 00:18:34.241903 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Aug 13 00:18:34.241921 kernel: Console: colour dummy device 80x25
Aug 13 00:18:34.241939 kernel: printk: console [tty1] enabled
Aug 13 00:18:34.241957 kernel: ACPI: Core revision 20230628
Aug 13 00:18:34.241975 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Aug 13 00:18:34.241993 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:18:34.242011 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 00:18:34.242033 kernel: landlock: Up and running.
Aug 13 00:18:34.242051 kernel: SELinux: Initializing.
Aug 13 00:18:34.242069 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:18:34.242087 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:18:34.242105 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:18:34.242123 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 00:18:34.242141 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:18:34.244208 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:18:34.244249 kernel: Platform MSI: ITS@0x10080000 domain created
Aug 13 00:18:34.244276 kernel: PCI/MSI: ITS@0x10080000 domain created
Aug 13 00:18:34.244314 kernel: Remapping and enabling EFI services.
Aug 13 00:18:34.244334 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:18:34.244352 kernel: Detected PIPT I-cache on CPU1
Aug 13 00:18:34.244387 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Aug 13 00:18:34.244408 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Aug 13 00:18:34.244427 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Aug 13 00:18:34.244445 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 00:18:34.244463 kernel: SMP: Total of 2 processors activated.
Aug 13 00:18:34.244480 kernel: CPU features: detected: 32-bit EL0 Support
Aug 13 00:18:34.244505 kernel: CPU features: detected: 32-bit EL1 Support
Aug 13 00:18:34.244523 kernel: CPU features: detected: CRC32 instructions
Aug 13 00:18:34.244554 kernel: CPU: All CPU(s) started at EL1
Aug 13 00:18:34.244577 kernel: alternatives: applying system-wide alternatives
Aug 13 00:18:34.244596 kernel: devtmpfs: initialized
Aug 13 00:18:34.244615 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:18:34.244634 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 00:18:34.244653 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:18:34.244672 kernel: SMBIOS 3.0.0 present.
Aug 13 00:18:34.244695 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Aug 13 00:18:34.244714 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:18:34.244734 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 13 00:18:34.244754 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 13 00:18:34.244773 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 13 00:18:34.244792 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:18:34.244811 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Aug 13 00:18:34.244835 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:18:34.244854 kernel: cpuidle: using governor menu
Aug 13 00:18:34.244873 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 13 00:18:34.244892 kernel: ASID allocator initialised with 65536 entries
Aug 13 00:18:34.244911 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:18:34.244930 kernel: Serial: AMBA PL011 UART driver
Aug 13 00:18:34.244949 kernel: Modules: 17488 pages in range for non-PLT usage
Aug 13 00:18:34.244968 kernel: Modules: 509008 pages in range for PLT usage
Aug 13 00:18:34.244987 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:18:34.245009 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:18:34.245029 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 13 00:18:34.245048 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 13 00:18:34.245067 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:18:34.245085 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:18:34.245104 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 13 00:18:34.245123 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 13 00:18:34.245142 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:18:34.245182 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:18:34.245212 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:18:34.245232 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:18:34.245251 kernel: ACPI: Interpreter enabled
Aug 13 00:18:34.245269 kernel: ACPI: Using GIC for interrupt routing
Aug 13 00:18:34.245288 kernel: ACPI: MCFG table detected, 1 entries
Aug 13 00:18:34.245307 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Aug 13 00:18:34.245626 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:18:34.245849 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 13 00:18:34.246065 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 13 00:18:34.248071 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Aug 13 00:18:34.248557 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Aug 13 00:18:34.248589 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Aug 13 00:18:34.248609 kernel: acpiphp: Slot [1] registered
Aug 13 00:18:34.248628 kernel: acpiphp: Slot [2] registered
Aug 13 00:18:34.248647 kernel: acpiphp: Slot [3] registered
Aug 13 00:18:34.248665 kernel: acpiphp: Slot [4] registered
Aug 13 00:18:34.248694 kernel: acpiphp: Slot [5] registered
Aug 13 00:18:34.248713 kernel: acpiphp: Slot [6] registered
Aug 13 00:18:34.248732 kernel: acpiphp: Slot [7] registered
Aug 13 00:18:34.248751 kernel: acpiphp: Slot [8] registered
Aug 13 00:18:34.248769 kernel: acpiphp: Slot [9] registered
Aug 13 00:18:34.248788 kernel: acpiphp: Slot [10] registered
Aug 13 00:18:34.248806 kernel: acpiphp: Slot [11] registered
Aug 13 00:18:34.248824 kernel: acpiphp: Slot [12] registered
Aug 13 00:18:34.248843 kernel: acpiphp: Slot [13] registered
Aug 13 00:18:34.248862 kernel: acpiphp: Slot [14] registered
Aug 13 00:18:34.248885 kernel: acpiphp: Slot [15] registered
Aug 13 00:18:34.248903 kernel: acpiphp: Slot [16] registered
Aug 13 00:18:34.248922 kernel: acpiphp: Slot [17] registered
Aug 13 00:18:34.248940 kernel: acpiphp: Slot [18] registered
Aug 13 00:18:34.248959 kernel: acpiphp: Slot [19] registered
Aug 13 00:18:34.248977 kernel: acpiphp: Slot [20] registered
Aug 13 00:18:34.248996 kernel: acpiphp: Slot [21] registered
Aug 13 00:18:34.249015 kernel: acpiphp: Slot [22] registered
Aug 13 00:18:34.249033 kernel: acpiphp: Slot [23] registered
Aug 13 00:18:34.249056 kernel: acpiphp: Slot [24] registered
Aug 13 00:18:34.249075 kernel: acpiphp: Slot [25] registered
Aug 13 00:18:34.249094 kernel: acpiphp: Slot [26] registered
Aug 13 00:18:34.249112 kernel: acpiphp: Slot [27] registered
Aug 13 00:18:34.249131 kernel: acpiphp: Slot [28] registered
Aug 13 00:18:34.249150 kernel: acpiphp: Slot [29] registered
Aug 13 00:18:34.249191 kernel: acpiphp: Slot [30] registered
Aug 13 00:18:34.249211 kernel: acpiphp: Slot [31] registered
Aug 13 00:18:34.249230 kernel: PCI host bridge to bus 0000:00
Aug 13 00:18:34.249443 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Aug 13 00:18:34.249638 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 13 00:18:34.249830 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Aug 13 00:18:34.250014 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Aug 13 00:18:34.251353 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Aug 13 00:18:34.251618 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Aug 13 00:18:34.251832 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Aug 13 00:18:34.252068 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Aug 13 00:18:34.253444 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Aug 13 00:18:34.253681 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Aug 13 00:18:34.255374 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Aug 13 00:18:34.255603 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Aug 13 00:18:34.257372 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Aug 13 00:18:34.257634 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Aug 13 00:18:34.257863 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Aug 13 00:18:34.258095 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Aug 13 00:18:34.258436 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Aug 13 00:18:34.258658 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Aug 13 00:18:34.258873 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Aug 13 00:18:34.259097 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Aug 13 00:18:34.259398 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Aug 13 00:18:34.259594 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 13 00:18:34.259787 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Aug 13 00:18:34.259813 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 13 00:18:34.259833 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 13 00:18:34.259852 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 13 00:18:34.259872 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 13 00:18:34.259891 kernel: iommu: Default domain type: Translated
Aug 13 00:18:34.259909 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 13 00:18:34.259934 kernel: efivars: Registered efivars operations
Aug 13 00:18:34.259953 kernel: vgaarb: loaded
Aug 13 00:18:34.259972 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 13 00:18:34.259990 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:18:34.260009 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:18:34.260027 kernel: pnp: PnP ACPI init
Aug 13 00:18:34.261339 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Aug 13 00:18:34.261378 kernel: pnp: PnP ACPI: found 1 devices
Aug 13 00:18:34.261405 kernel: NET: Registered PF_INET protocol family
Aug 13 00:18:34.261426 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:18:34.261445 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:18:34.261464 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:18:34.261483 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:18:34.261502 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:18:34.261521 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:18:34.261540 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:18:34.261559 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:18:34.261582 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:18:34.261601 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:18:34.261619 kernel: kvm [1]: HYP mode not available
Aug 13 00:18:34.261638 kernel: Initialise system trusted keyrings
Aug 13 00:18:34.261657 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:18:34.261675 kernel: Key type asymmetric registered
Aug 13 00:18:34.261694 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:18:34.261712 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:18:34.261732 kernel: io scheduler mq-deadline registered
Aug 13 00:18:34.261755 kernel: io scheduler kyber registered
Aug 13 00:18:34.261773 kernel: io scheduler bfq registered
Aug 13 00:18:34.261991 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Aug 13 00:18:34.262019 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 13 00:18:34.262038 kernel: ACPI: button: Power Button [PWRB]
Aug 13 00:18:34.262057 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Aug 13 00:18:34.262076 kernel: ACPI: button: Sleep Button [SLPB]
Aug 13 00:18:34.262095 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:18:34.262120 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Aug 13 00:18:34.262371 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Aug 13 00:18:34.262398 kernel: printk: console [ttyS0] disabled
Aug 13 00:18:34.262418 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Aug 13 00:18:34.262437 kernel: printk: console [ttyS0] enabled
Aug 13 00:18:34.262456 kernel: printk: bootconsole [uart0] disabled
Aug 13 00:18:34.262474 kernel: thunder_xcv, ver 1.0
Aug 13 00:18:34.262493 kernel: thunder_bgx, ver 1.0
Aug 13 00:18:34.262512 kernel: nicpf, ver 1.0
Aug 13 00:18:34.262537 kernel: nicvf, ver 1.0
Aug 13 00:18:34.262759 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 13 00:18:34.262959 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:18:33 UTC (1755044313)
Aug 13 00:18:34.262985 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 00:18:34.263005 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Aug 13 00:18:34.263023 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 13 00:18:34.263042 kernel: watchdog: Hard watchdog permanently disabled
Aug 13 00:18:34.263061 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:18:34.263084 kernel: Segment Routing with IPv6
Aug 13 00:18:34.263104 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:18:34.263123 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:18:34.263141 kernel: Key type dns_resolver registered
Aug 13 00:18:34.266811 kernel: registered taskstats version 1
Aug 13 00:18:34.266854 kernel: Loading compiled-in X.509 certificates
Aug 13 00:18:34.266875 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6'
Aug 13 00:18:34.266894 kernel: Key type .fscrypt registered
Aug 13 00:18:34.266913 kernel: Key type fscrypt-provisioning registered
Aug 13 00:18:34.266941 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:18:34.266960 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:18:34.266979 kernel: ima: No architecture policies found
Aug 13 00:18:34.266998 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 13 00:18:34.267017 kernel: clk: Disabling unused clocks
Aug 13 00:18:34.267036 kernel: Freeing unused kernel memory: 39424K
Aug 13 00:18:34.267055 kernel: Run /init as init process
Aug 13 00:18:34.267073 kernel: with arguments:
Aug 13 00:18:34.267091 kernel: /init
Aug 13 00:18:34.267110 kernel: with environment:
Aug 13 00:18:34.267133 kernel: HOME=/
Aug 13 00:18:34.267152 kernel: TERM=linux
Aug 13 00:18:34.267215 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:18:34.267241 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 00:18:34.267265 systemd[1]: Detected virtualization amazon.
Aug 13 00:18:34.267286 systemd[1]: Detected architecture arm64.
Aug 13 00:18:34.267306 systemd[1]: Running in initrd.
Aug 13 00:18:34.267332 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:18:34.267352 systemd[1]: Hostname set to <localhost>.
Aug 13 00:18:34.267373 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:18:34.267393 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:18:34.267413 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:18:34.267433 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:18:34.267454 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:18:34.267475 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:18:34.267500 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:18:34.267521 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:18:34.267545 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:18:34.267565 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:18:34.267586 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:18:34.267606 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:18:34.267626 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:18:34.267651 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:18:34.267671 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:18:34.267691 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:18:34.267712 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:18:34.267732 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:18:34.267752 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:18:34.267772 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 00:18:34.267793 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:18:34.267813 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:18:34.267838 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:18:34.267858 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:18:34.267878 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:18:34.267899 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:18:34.267919 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:18:34.267939 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:18:34.267958 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:18:34.267979 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:18:34.268003 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:18:34.268024 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:18:34.268098 systemd-journald[251]: Collecting audit messages is disabled.
Aug 13 00:18:34.268144 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:18:34.268210 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:18:34.268234 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:18:34.268255 systemd-journald[251]: Journal started
Aug 13 00:18:34.268315 systemd-journald[251]: Runtime Journal (/run/log/journal/ec20400600ddaa941787134241882349) is 8.0M, max 75.3M, 67.3M free.
Aug 13 00:18:34.250892 systemd-modules-load[252]: Inserted module 'overlay'
Aug 13 00:18:34.285202 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:18:34.292620 kernel: Bridge firewalling registered
Aug 13 00:18:34.292685 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:18:34.293478 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:18:34.296951 systemd-modules-load[252]: Inserted module 'br_netfilter'
Aug 13 00:18:34.300795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:18:34.305890 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:18:34.320663 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:18:34.328898 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:18:34.338501 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:18:34.344438 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:18:34.370582 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:18:34.384524 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:18:34.401542 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:18:34.413532 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:18:34.425104 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:18:34.442537 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:18:34.473910 dracut-cmdline[290]: dracut-dracut-053
Aug 13 00:18:34.487892 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:18:34.493614 systemd-resolved[283]: Positive Trust Anchors:
Aug 13 00:18:34.493636 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:18:34.493698 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:18:34.688184 kernel: SCSI subsystem initialized
Aug 13 00:18:34.695198 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:18:34.708201 kernel: iscsi: registered transport (tcp)
Aug 13 00:18:34.731325 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:18:34.731400 kernel: QLogic iSCSI HBA Driver
Aug 13 00:18:34.783196 kernel: random: crng init done
Aug 13 00:18:34.783984 systemd-resolved[283]: Defaulting to hostname 'linux'.
Aug 13 00:18:34.786329 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:18:34.793809 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:18:34.818931 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:18:34.836417 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:18:34.864998 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:18:34.865081 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:18:34.865109 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 00:18:34.934213 kernel: raid6: neonx8 gen() 6698 MB/s
Aug 13 00:18:34.951197 kernel: raid6: neonx4 gen() 6492 MB/s
Aug 13 00:18:34.968197 kernel: raid6: neonx2 gen() 5406 MB/s
Aug 13 00:18:34.985203 kernel: raid6: neonx1 gen() 3934 MB/s
Aug 13 00:18:35.003195 kernel: raid6: int64x8 gen() 3792 MB/s
Aug 13 00:18:35.020195 kernel: raid6: int64x4 gen() 3692 MB/s
Aug 13 00:18:35.038195 kernel: raid6: int64x2 gen() 3546 MB/s
Aug 13 00:18:35.056426 kernel: raid6: int64x1 gen() 2767 MB/s
Aug 13 00:18:35.056465 kernel: raid6: using algorithm neonx8 gen() 6698 MB/s
Aug 13 00:18:35.075199 kernel: raid6: .... xor() 4876 MB/s, rmw enabled
Aug 13 00:18:35.075237 kernel: raid6: using neon recovery algorithm
Aug 13 00:18:35.083199 kernel: xor: measuring software checksum speed
Aug 13 00:18:35.085488 kernel: 8regs : 10203 MB/sec
Aug 13 00:18:35.085523 kernel: 32regs : 11894 MB/sec
Aug 13 00:18:35.086794 kernel: arm64_neon : 9493 MB/sec
Aug 13 00:18:35.086828 kernel: xor: using function: 32regs (11894 MB/sec)
Aug 13 00:18:35.172212 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:18:35.193254 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:18:35.211502 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:18:35.255286 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Aug 13 00:18:35.263471 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:18:35.277961 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:18:35.323730 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation
Aug 13 00:18:35.382120 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:18:35.394614 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:18:35.518234 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:18:35.528539 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:18:35.573180 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:18:35.579338 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:18:35.585116 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:18:35.588169 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:18:35.602467 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:18:35.652147 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:18:35.729543 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 13 00:18:35.729627 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Aug 13 00:18:35.735117 kernel: ena 0000:00:05.0: ENA device version: 0.10
Aug 13 00:18:35.737213 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Aug 13 00:18:35.738423 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:18:35.740815 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:18:35.748849 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:18:35.751565 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:18:35.752401 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:18:35.760646 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:18:35.775199 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:25:44:d3:36:3b
Aug 13 00:18:35.776639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:18:35.790095 (udev-worker)[521]: Network interface NamePolicy= disabled on kernel command line.
Aug 13 00:18:35.800187 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Aug 13 00:18:35.804425 kernel: nvme nvme0: pci function 0000:00:04.0
Aug 13 00:18:35.816226 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Aug 13 00:18:35.823235 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:18:35.833356 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:18:35.833404 kernel: GPT:9289727 != 16777215
Aug 13 00:18:35.833441 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:18:35.833467 kernel: GPT:9289727 != 16777215
Aug 13 00:18:35.835273 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:18:35.836215 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:18:35.838458 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:18:35.884137 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:18:35.933222 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/nvme0n1p3 scanned by (udev-worker) (529)
Aug 13 00:18:35.967230 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (537)
Aug 13 00:18:36.003486 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Aug 13 00:18:36.076419 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Aug 13 00:18:36.094656 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 13 00:18:36.108922 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Aug 13 00:18:36.114947 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Aug 13 00:18:36.127534 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:18:36.149204 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:18:36.150423 disk-uuid[661]: Primary Header is updated.
Aug 13 00:18:36.150423 disk-uuid[661]: Secondary Entries is updated.
Aug 13 00:18:36.150423 disk-uuid[661]: Secondary Header is updated.
Aug 13 00:18:36.178266 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:18:37.193251 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 13 00:18:37.195310 disk-uuid[663]: The operation has completed successfully.
Aug 13 00:18:37.380898 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:18:37.382235 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:18:37.441503 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:18:37.452687 sh[1006]: Success
Aug 13 00:18:37.473273 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 13 00:18:37.594666 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:18:37.609390 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:18:37.618558 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:18:37.653003 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982
Aug 13 00:18:37.653074 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:18:37.653114 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 00:18:37.654968 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 00:18:37.656420 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 00:18:37.698202 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 13 00:18:37.713531 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:18:37.714621 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:18:37.729607 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:18:37.736245 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:18:37.776232 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:18:37.776325 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:18:37.778507 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 00:18:37.785227 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 00:18:37.802658 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 00:18:37.806567 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:18:37.818230 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:18:37.829539 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:18:37.977742 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:18:37.991108 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:18:38.061449 systemd-networkd[1200]: lo: Link UP
Aug 13 00:18:38.061471 systemd-networkd[1200]: lo: Gained carrier
Aug 13 00:18:38.067500 systemd-networkd[1200]: Enumeration completed
Aug 13 00:18:38.067690 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:18:38.072975 systemd[1]: Reached target network.target - Network.
Aug 13 00:18:38.076638 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:18:38.076645 systemd-networkd[1200]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:18:38.088300 ignition[1115]: Ignition 2.19.0
Aug 13 00:18:38.088355 ignition[1115]: Stage: fetch-offline
Aug 13 00:18:38.095507 systemd-networkd[1200]: eth0: Link UP
Aug 13 00:18:38.095528 systemd-networkd[1200]: eth0: Gained carrier
Aug 13 00:18:38.098737 ignition[1115]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:38.095546 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:18:38.098763 ignition[1115]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:38.099675 ignition[1115]: Ignition finished successfully
Aug 13 00:18:38.110292 systemd-networkd[1200]: eth0: DHCPv4 address 172.31.18.147/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 13 00:18:38.110818 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:18:38.129544 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 00:18:38.163927 ignition[1208]: Ignition 2.19.0
Aug 13 00:18:38.164727 ignition[1208]: Stage: fetch
Aug 13 00:18:38.165457 ignition[1208]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:38.165483 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:38.165632 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:38.182692 ignition[1208]: PUT result: OK
Aug 13 00:18:38.186260 ignition[1208]: parsed url from cmdline: ""
Aug 13 00:18:38.186322 ignition[1208]: no config URL provided
Aug 13 00:18:38.186338 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:18:38.186364 ignition[1208]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:18:38.186408 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:38.188274 ignition[1208]: PUT result: OK
Aug 13 00:18:38.188346 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Aug 13 00:18:38.191046 ignition[1208]: GET result: OK
Aug 13 00:18:38.191231 ignition[1208]: parsing config with SHA512: 9c4c6607a32bac47106114206f32cf925baa135e752e179e1fcb27f26f5b8562f4b95872c2cf38a4f1123ca52de93698590d067e29ef76e8fc8db86664000e28
Aug 13 00:18:38.204206 unknown[1208]: fetched base config from "system"
Aug 13 00:18:38.204957 ignition[1208]: fetch: fetch complete
Aug 13 00:18:38.204223 unknown[1208]: fetched base config from "system"
Aug 13 00:18:38.205001 ignition[1208]: fetch: fetch passed
Aug 13 00:18:38.204236 unknown[1208]: fetched user config from "aws"
Aug 13 00:18:38.205094 ignition[1208]: Ignition finished successfully
Aug 13 00:18:38.219208 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 00:18:38.236617 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:18:38.262305 ignition[1214]: Ignition 2.19.0
Aug 13 00:18:38.262327 ignition[1214]: Stage: kargs
Aug 13 00:18:38.262961 ignition[1214]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:38.262985 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:38.263138 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:38.265573 ignition[1214]: PUT result: OK
Aug 13 00:18:38.274387 ignition[1214]: kargs: kargs passed
Aug 13 00:18:38.280126 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:18:38.274485 ignition[1214]: Ignition finished successfully
Aug 13 00:18:38.292640 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:18:38.320845 ignition[1220]: Ignition 2.19.0
Aug 13 00:18:38.320875 ignition[1220]: Stage: disks
Aug 13 00:18:38.322766 ignition[1220]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:38.322826 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:38.324122 ignition[1220]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:38.329021 ignition[1220]: PUT result: OK
Aug 13 00:18:38.340523 ignition[1220]: disks: disks passed
Aug 13 00:18:38.340634 ignition[1220]: Ignition finished successfully
Aug 13 00:18:38.347226 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:18:38.350316 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:18:38.357845 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:18:38.360794 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:18:38.363032 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:18:38.365645 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:18:38.381428 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:18:38.432150 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 00:18:38.435914 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:18:38.451338 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:18:38.537753 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none.
Aug 13 00:18:38.535563 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:18:38.538579 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:18:38.562353 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:18:38.571362 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:18:38.579215 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 00:18:38.579332 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:18:38.579390 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:18:38.610188 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1247)
Aug 13 00:18:38.611593 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:18:38.622626 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:18:38.622664 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:18:38.622690 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 00:18:38.627529 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:18:38.636264 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 00:18:38.640422 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:18:38.736083 initrd-setup-root[1271]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:18:38.746466 initrd-setup-root[1278]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:18:38.755084 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:18:38.764095 initrd-setup-root[1292]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:18:38.918416 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:18:38.932940 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:18:38.940776 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:18:38.960048 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:18:38.963345 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:18:39.011766 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:18:39.015188 ignition[1361]: INFO : Ignition 2.19.0
Aug 13 00:18:39.015188 ignition[1361]: INFO : Stage: mount
Aug 13 00:18:39.015188 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:39.015188 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:39.015188 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:39.029783 ignition[1361]: INFO : PUT result: OK
Aug 13 00:18:39.034985 ignition[1361]: INFO : mount: mount passed
Aug 13 00:18:39.039481 ignition[1361]: INFO : Ignition finished successfully
Aug 13 00:18:39.039534 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:18:39.053531 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:18:39.080031 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:18:39.102202 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1372)
Aug 13 00:18:39.107389 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:18:39.107438 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:18:39.107467 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 13 00:18:39.114200 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 13 00:18:39.117505 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:18:39.159360 ignition[1389]: INFO : Ignition 2.19.0
Aug 13 00:18:39.161495 ignition[1389]: INFO : Stage: files
Aug 13 00:18:39.161495 ignition[1389]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:18:39.161495 ignition[1389]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 13 00:18:39.161495 ignition[1389]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 13 00:18:39.170892 ignition[1389]: INFO : PUT result: OK
Aug 13 00:18:39.175695 ignition[1389]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:18:39.179311 ignition[1389]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:18:39.179311 ignition[1389]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:18:39.187981 ignition[1389]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:18:39.192615 ignition[1389]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:18:39.196991 unknown[1389]: wrote ssh authorized keys file for user: core
Aug 13 00:18:39.200490 ignition[1389]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:18:39.203365 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Aug 13 00:18:39.203365 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Aug 13 00:18:39.288267 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 00:18:39.327342 systemd-networkd[1200]: eth0: Gained IPv6LL
Aug 13 00:18:39.438748 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Aug 13 00:18:39.443154 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:18:39.443154 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:18:39.443154 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:18:39.443154 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:18:39.443154 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:18:39.443154 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:18:39.443154 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:18:39.443154 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:18:39.443154 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:18:39.443154 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:18:39.443154 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Aug 13 00:18:39.443154 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Aug 13 00:18:39.443154 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Aug 13 00:18:39.443154 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Aug 13 00:18:39.784136 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 00:18:40.152339 ignition[1389]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Aug 13 00:18:40.152339 ignition[1389]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 00:18:40.162498 ignition[1389]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:18:40.162498 ignition[1389]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:18:40.162498 ignition[1389]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 00:18:40.162498 ignition[1389]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:18:40.162498 ignition[1389]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:18:40.162498 ignition[1389]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:18:40.162498 ignition[1389]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:18:40.162498 ignition[1389]: INFO : files: files passed
Aug 13 00:18:40.162498 ignition[1389]: INFO : Ignition finished successfully
Aug 13 00:18:40.160209 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:18:40.202743 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:18:40.214145 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:18:40.221474 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:18:40.221669 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:18:40.244567 initrd-setup-root-after-ignition[1417]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:18:40.244567 initrd-setup-root-after-ignition[1417]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:18:40.255639 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:18:40.262844 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:18:40.263388 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:18:40.280482 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:18:40.345591 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 00:18:40.345802 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 00:18:40.349481 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 00:18:40.353679 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 00:18:40.363779 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 00:18:40.373534 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 00:18:40.411235 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:18:40.422548 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 00:18:40.448442 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:18:40.451382 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:18:40.458938 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 00:18:40.463154 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 00:18:40.463426 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 00:18:40.466572 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 00:18:40.469697 systemd[1]: Stopped target basic.target - Basic System. Aug 13 00:18:40.479296 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 00:18:40.486605 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:18:40.491792 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:18:40.494762 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:18:40.497450 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:18:40.500537 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:18:40.503229 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:18:40.508246 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:18:40.512445 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:18:40.512699 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:18:40.520297 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:18:40.525369 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:18:40.529691 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 00:18:40.531191 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:18:40.534117 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:18:40.534753 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:18:40.539194 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:18:40.539748 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:18:40.557493 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:18:40.558034 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:18:40.577331 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:18:40.585763 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Aug 13 00:18:40.591101 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:18:40.591429 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:18:40.599332 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:18:40.599589 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:18:40.631030 ignition[1441]: INFO : Ignition 2.19.0 Aug 13 00:18:40.644445 ignition[1441]: INFO : Stage: umount Aug 13 00:18:40.644445 ignition[1441]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:18:40.644445 ignition[1441]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 13 00:18:40.644445 ignition[1441]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 13 00:18:40.644445 ignition[1441]: INFO : PUT result: OK Aug 13 00:18:40.644445 ignition[1441]: INFO : umount: umount passed Aug 13 00:18:40.644445 ignition[1441]: INFO : Ignition finished successfully Aug 13 00:18:40.642051 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:18:40.644271 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:18:40.654426 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:18:40.659519 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 00:18:40.680424 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:18:40.680706 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:18:40.684502 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:18:40.684609 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:18:40.687013 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 00:18:40.687100 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 00:18:40.690338 systemd[1]: Stopped target network.target - Network. Aug 13 00:18:40.692528 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:18:40.692698 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:18:40.695416 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:18:40.698039 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:18:40.702318 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:18:40.702445 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:18:40.702853 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:18:40.703648 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:18:40.703724 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:18:40.704010 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:18:40.704076 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:18:40.714015 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:18:40.714103 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:18:40.715765 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:18:40.715841 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:18:40.716853 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:18:40.734442 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Aug 13 00:18:40.745487 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:18:40.750250 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:18:40.750503 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:18:40.752677 systemd-networkd[1200]: eth0: DHCPv6 lease lost Aug 13 00:18:40.764049 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:18:40.769350 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:18:40.784817 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:18:40.785185 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:18:40.795077 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:18:40.795304 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:18:40.804181 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:18:40.804314 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:18:40.807289 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:18:40.807399 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:18:40.834434 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:18:40.850144 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:18:40.850287 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:18:40.853292 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:18:40.853381 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:18:40.855903 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:18:40.856003 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 00:18:40.859134 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:18:40.902399 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:18:40.902936 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:18:40.911057 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:18:40.911184 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:18:40.914508 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:18:40.914583 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:18:40.916369 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:18:40.916459 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:18:40.923832 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:18:40.923923 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:18:40.930821 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:18:40.930912 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:18:40.957402 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:18:40.964421 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:18:40.964532 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Aug 13 00:18:40.970934 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 00:18:40.971040 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:18:40.983026 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:18:40.983136 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:18:40.996013 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:18:40.996109 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:18:40.999335 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:18:41.001205 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:18:41.011226 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:18:41.011429 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:18:41.022601 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:18:41.032488 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:18:41.051955 systemd[1]: Switching root. Aug 13 00:18:41.091906 systemd-journald[251]: Journal stopped Aug 13 00:18:42.969907 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Aug 13 00:18:42.970058 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:18:42.970104 kernel: SELinux: policy capability open_perms=1 Aug 13 00:18:42.970136 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:18:42.970200 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:18:42.970236 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:18:42.970268 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:18:42.970303 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:18:42.970341 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:18:42.970373 kernel: audit: type=1403 audit(1755044321.374:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:18:42.970417 systemd[1]: Successfully loaded SELinux policy in 51.722ms. Aug 13 00:18:42.970465 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.193ms. Aug 13 00:18:42.970500 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:18:42.970530 systemd[1]: Detected virtualization amazon. Aug 13 00:18:42.970562 systemd[1]: Detected architecture arm64. Aug 13 00:18:42.970591 systemd[1]: Detected first boot. Aug 13 00:18:42.970624 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:18:42.970660 zram_generator::config[1483]: No configuration found. Aug 13 00:18:42.970695 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:18:42.970726 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 00:18:42.970758 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 00:18:42.970789 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 00:18:42.970826 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
Aug 13 00:18:42.970856 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:18:42.970899 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:18:42.970934 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:18:42.970968 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:18:42.970998 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:18:42.971033 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:18:42.971067 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:18:42.971098 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:18:42.971130 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:18:42.973263 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:18:42.973322 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:18:42.973364 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:18:42.973401 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 00:18:42.973431 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 00:18:42.973463 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:18:42.973495 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 00:18:42.973525 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 00:18:42.975985 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 00:18:42.976032 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:18:42.976063 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:18:42.976095 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:18:42.976128 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:18:42.976208 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:18:42.976262 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:18:42.976294 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:18:42.976327 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:18:42.976359 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:18:42.976392 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:18:42.976428 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:18:42.976460 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:18:42.976494 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:18:42.976542 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:18:42.976572 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:18:42.976604 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 00:18:42.976640 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Aug 13 00:18:42.976673 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:18:42.976709 systemd[1]: Reached target machines.target - Containers. Aug 13 00:18:42.976739 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:18:42.976769 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:18:42.976801 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:18:42.976832 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:18:42.976862 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:18:42.976894 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:18:42.976926 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:18:42.976955 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:18:42.976991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:18:42.977025 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:18:42.977054 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 00:18:42.977084 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 00:18:42.977115 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 00:18:42.977147 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 00:18:42.977242 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:18:42.977276 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:18:42.977305 kernel: fuse: init (API version 7.39) Aug 13 00:18:42.977343 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:18:42.977373 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:18:42.977402 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:18:42.977433 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 00:18:42.977463 systemd[1]: Stopped verity-setup.service. Aug 13 00:18:42.977493 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:18:42.977524 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:18:42.977553 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:18:42.977587 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:18:42.977618 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:18:42.977650 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:18:42.977683 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:18:42.977713 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:18:42.977748 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:18:42.977779 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:18:42.977809 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Aug 13 00:18:42.977838 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:18:42.977868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:18:42.977901 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:18:42.977982 systemd-journald[1565]: Collecting audit messages is disabled. Aug 13 00:18:42.978051 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:18:42.978087 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:18:42.978125 kernel: loop: module loaded Aug 13 00:18:42.978154 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:18:42.978216 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:18:42.978250 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:18:42.978286 systemd-journald[1565]: Journal started Aug 13 00:18:42.978335 systemd-journald[1565]: Runtime Journal (/run/log/journal/ec20400600ddaa941787134241882349) is 8.0M, max 75.3M, 67.3M free. Aug 13 00:18:42.368629 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:18:42.395027 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Aug 13 00:18:42.395841 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 00:18:42.994229 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:18:43.018204 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 00:18:43.018308 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:18:43.018350 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:18:43.029859 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 00:18:43.040040 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:18:43.056321 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:18:43.056416 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:18:43.073193 kernel: ACPI: bus type drm_connector registered Aug 13 00:18:43.073280 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:18:43.082221 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:18:43.105921 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:18:43.120365 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:18:43.141329 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:18:43.148998 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:18:43.160190 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:18:43.171884 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:18:43.175270 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:18:43.177097 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Aug 13 00:18:43.180412 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:18:43.180806 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:18:43.183644 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:18:43.186483 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:18:43.191356 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:18:43.226273 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:18:43.245656 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:18:43.263305 kernel: loop0: detected capacity change from 0 to 114432 Aug 13 00:18:43.260145 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:18:43.276500 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 00:18:43.282442 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:18:43.324307 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:18:43.318581 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:18:43.338746 systemd-journald[1565]: Time spent on flushing to /var/log/journal/ec20400600ddaa941787134241882349 is 75.388ms for 913 entries. Aug 13 00:18:43.338746 systemd-journald[1565]: System Journal (/var/log/journal/ec20400600ddaa941787134241882349) is 8.0M, max 195.6M, 187.6M free. Aug 13 00:18:43.429707 systemd-journald[1565]: Received client request to flush runtime journal. Aug 13 00:18:43.429815 kernel: loop1: detected capacity change from 0 to 52536 Aug 13 00:18:43.365319 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. Aug 13 00:18:43.365727 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. Aug 13 00:18:43.368838 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:18:43.375775 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 00:18:43.403711 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:18:43.417525 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:18:43.439511 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:18:43.457736 kernel: loop2: detected capacity change from 0 to 211168 Aug 13 00:18:43.547979 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:18:43.561027 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 00:18:43.593270 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:18:43.610424 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:18:43.625494 kernel: loop3: detected capacity change from 0 to 114328 Aug 13 00:18:43.627800 udevadm[1634]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 00:18:43.700246 kernel: loop4: detected capacity change from 0 to 114432 Aug 13 00:18:43.702197 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. Aug 13 00:18:43.702238 systemd-tmpfiles[1637]: ACLs are not supported, ignoring. 
Aug 13 00:18:43.719436 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:18:43.732626 kernel: loop5: detected capacity change from 0 to 52536 Aug 13 00:18:43.754435 kernel: loop6: detected capacity change from 0 to 211168 Aug 13 00:18:43.798292 kernel: loop7: detected capacity change from 0 to 114328 Aug 13 00:18:43.828667 (sd-merge)[1640]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Aug 13 00:18:43.829721 (sd-merge)[1640]: Merged extensions into '/usr'. Aug 13 00:18:43.843714 systemd[1]: Reloading requested from client PID 1594 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:18:43.843750 systemd[1]: Reloading... Aug 13 00:18:44.007466 zram_generator::config[1665]: No configuration found. Aug 13 00:18:44.156313 ldconfig[1590]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:18:44.374306 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:18:44.494082 systemd[1]: Reloading finished in 648 ms. Aug 13 00:18:44.532296 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:18:44.541647 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:18:44.552148 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:18:44.562503 systemd[1]: Starting ensure-sysext.service... Aug 13 00:18:44.569789 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:18:44.580686 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:18:44.596416 systemd[1]: Reloading requested from client PID 1721 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:18:44.596453 systemd[1]: Reloading... Aug 13 00:18:44.643433 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 00:18:44.644070 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:18:44.654176 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:18:44.655327 systemd-tmpfiles[1722]: ACLs are not supported, ignoring. Aug 13 00:18:44.655470 systemd-tmpfiles[1722]: ACLs are not supported, ignoring. Aug 13 00:18:44.668349 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:18:44.668371 systemd-tmpfiles[1722]: Skipping /boot Aug 13 00:18:44.698933 systemd-udevd[1723]: Using default interface naming scheme 'v255'. Aug 13 00:18:44.709037 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:18:44.709067 systemd-tmpfiles[1722]: Skipping /boot Aug 13 00:18:44.793206 zram_generator::config[1749]: No configuration found. Aug 13 00:18:44.973719 (udev-worker)[1789]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:18:45.200684 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Aug 13 00:18:45.266330 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1765) Aug 13 00:18:45.382888 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 00:18:45.383757 systemd[1]: Reloading finished in 786 ms. Aug 13 00:18:45.418132 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:18:45.431251 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:18:45.515748 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:18:45.525727 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:18:45.533021 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:18:45.541777 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:18:45.552549 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:18:45.561360 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:18:45.569531 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:18:45.593122 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:18:45.606723 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:18:45.617787 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:18:45.631673 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:18:45.652769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:18:45.663282 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 00:18:45.672032 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:18:45.672385 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:18:45.677688 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:18:45.681440 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:18:45.690420 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:18:45.692282 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:18:45.752294 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:18:45.764329 augenrules[1946]: No rules Aug 13 00:18:45.767571 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:18:45.782767 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:18:45.801126 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:18:45.814559 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 13 00:18:45.819944 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:18:45.828561 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 00:18:45.836068 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Aug 13 00:18:45.845477 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:18:45.862530 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:18:45.870484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:18:45.871910 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:18:45.879389 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:18:45.879810 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:18:45.884597 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:18:45.887146 lvm[1955]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:18:45.896628 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:18:45.900262 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:18:45.903292 systemd[1]: Finished ensure-sysext.service. Aug 13 00:18:45.907500 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:18:45.907827 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:18:45.937711 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:18:45.938129 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:18:45.953604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:18:45.957442 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:18:45.960819 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:18:45.981037 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:18:45.981702 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:18:45.984664 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:18:45.989811 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:18:46.005232 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:18:46.008946 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 00:18:46.013117 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:18:46.026561 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 00:18:46.029967 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:18:46.057406 lvm[1974]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:18:46.063120 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:18:46.107017 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Aug 13 00:18:46.187003 systemd-networkd[1928]: lo: Link UP Aug 13 00:18:46.187019 systemd-networkd[1928]: lo: Gained carrier Aug 13 00:18:46.190857 systemd-networkd[1928]: Enumeration completed Aug 13 00:18:46.191083 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:18:46.196948 systemd-networkd[1928]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:18:46.196972 systemd-networkd[1928]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:18:46.204511 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:18:46.206616 systemd-resolved[1929]: Positive Trust Anchors: Aug 13 00:18:46.206989 systemd-resolved[1929]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:18:46.207103 systemd-resolved[1929]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:18:46.209251 systemd-networkd[1928]: eth0: Link UP Aug 13 00:18:46.209560 systemd-networkd[1928]: eth0: Gained carrier Aug 13 00:18:46.209609 systemd-networkd[1928]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:18:46.219870 systemd-networkd[1928]: eth0: DHCPv4 address 172.31.18.147/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 13 00:18:46.224107 systemd-resolved[1929]: Defaulting to hostname 'linux'. Aug 13 00:18:46.227781 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:18:46.230402 systemd[1]: Reached target network.target - Network. Aug 13 00:18:46.232407 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:18:46.235042 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:18:46.237610 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:18:46.240454 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:18:46.243669 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:18:46.246424 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:18:46.249239 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:18:46.252032 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:18:46.252086 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:18:46.254103 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:18:46.257558 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:18:46.262930 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:18:46.272511 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Aug 13 00:18:46.275924 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:18:46.278674 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:18:46.281018 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:18:46.283369 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:18:46.283435 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:18:46.293359 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:18:46.299622 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 00:18:46.310287 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 00:18:46.327310 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:18:46.342493 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:18:46.345747 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:18:46.352506 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:18:46.361649 systemd[1]: Started ntpd.service - Network Time Service. Aug 13 00:18:46.376085 jq[1990]: false Aug 13 00:18:46.383425 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:18:46.389421 systemd[1]: Starting setup-oem.service - Setup OEM... Aug 13 00:18:46.404568 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:18:46.414403 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:18:46.425341 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:18:46.430545 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:18:46.433633 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:18:46.435065 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:18:46.450718 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:18:46.483987 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:18:46.484998 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
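coreos-metadata.service, started above as the Flatcar Metadata Agent, is about to fetch instance data from the EC2 instance metadata service; the lines that follow show the IMDSv2 pattern it uses: one PUT to /latest/api/token for a session token, then token-authenticated GETs against the dated meta-data paths. A minimal sketch of the same request pattern with Python's standard library (the TTL value and the queried keys are assumptions):

```python
# Sketch of the IMDSv2 request pattern seen in the coreos-metadata log lines:
# PUT /latest/api/token first, then GET meta-data paths with the token header.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 21600) -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    # coreos-metadata queries the dated 2021-01-03 API version; reused here
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    for key in ("instance-id", "instance-type", "local-ipv4"):
        print(key, "=", imds_get(key, token))
```

A path that does not exist for the instance — like the ipv6 fetch below — comes back as 404, which urlopen surfaces as an HTTPError; coreos-metadata logs it as "Fetch failed with 404" and continues.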
Aug 13 00:18:46.510671 coreos-metadata[1988]: Aug 13 00:18:46.505 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Aug 13 00:18:46.510671 coreos-metadata[1988]: Aug 13 00:18:46.509 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Aug 13 00:18:46.510671 coreos-metadata[1988]: Aug 13 00:18:46.510 INFO Fetch successful
Aug 13 00:18:46.510671 coreos-metadata[1988]: Aug 13 00:18:46.510 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Aug 13 00:18:46.525263 coreos-metadata[1988]: Aug 13 00:18:46.512 INFO Fetch successful
Aug 13 00:18:46.525263 coreos-metadata[1988]: Aug 13 00:18:46.512 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Aug 13 00:18:46.525263 coreos-metadata[1988]: Aug 13 00:18:46.515 INFO Fetch successful
Aug 13 00:18:46.525263 coreos-metadata[1988]: Aug 13 00:18:46.515 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Aug 13 00:18:46.525263 coreos-metadata[1988]: Aug 13 00:18:46.516 INFO Fetch successful
Aug 13 00:18:46.525263 coreos-metadata[1988]: Aug 13 00:18:46.516 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Aug 13 00:18:46.525263 coreos-metadata[1988]: Aug 13 00:18:46.518 INFO Fetch failed with 404: resource not found
Aug 13 00:18:46.525263 coreos-metadata[1988]: Aug 13 00:18:46.520 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Aug 13 00:18:46.525263 coreos-metadata[1988]: Aug 13 00:18:46.522 INFO Fetch successful
Aug 13 00:18:46.525263 coreos-metadata[1988]: Aug 13 00:18:46.522 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Aug 13 00:18:46.525263 coreos-metadata[1988]: Aug 13 00:18:46.524 INFO Fetch successful
Aug 13 00:18:46.525263 coreos-metadata[1988]: Aug 13 00:18:46.524 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Aug 13 00:18:46.545344 coreos-metadata[1988]: Aug 13 00:18:46.526 INFO Fetch successful
Aug 13 00:18:46.545344 coreos-metadata[1988]: Aug 13 00:18:46.528 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Aug 13 00:18:46.545344 coreos-metadata[1988]: Aug 13 00:18:46.530 INFO Fetch successful
Aug 13 00:18:46.545344 coreos-metadata[1988]: Aug 13 00:18:46.530 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Aug 13 00:18:46.545344 coreos-metadata[1988]: Aug 13 00:18:46.531 INFO Fetch successful
Aug 13 00:18:46.531869 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 00:18:46.533302 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 00:18:46.538712 ntpd[1995]: ntpd 4.2.8p17@1.4004-o Tue Aug 12 21:30:33 UTC 2025 (1): Starting
Aug 13 00:18:46.538768 ntpd[1995]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 13 00:18:46.538789 ntpd[1995]: ----------------------------------------------------
Aug 13 00:18:46.538809 ntpd[1995]: ntp-4 is maintained by Network Time Foundation,
Aug 13 00:18:46.538827 ntpd[1995]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 13 00:18:46.538846 ntpd[1995]: corporation. Support and training for ntp-4 are
Aug 13 00:18:46.538865 ntpd[1995]: available at https://www.nwtime.org/support
Aug 13 00:18:46.538884 ntpd[1995]: ----------------------------------------------------
Aug 13 00:18:46.552901 ntpd[1995]: proto: precision = 0.108 usec (-23)
Aug 13 00:18:46.559587 dbus-daemon[1989]: [system] SELinux support is enabled
Aug 13 00:18:46.562565 ntpd[1995]: basedate set to 2025-07-31
Aug 13 00:18:46.562603 ntpd[1995]: gps base set to 2025-08-03 (week 2378)
Aug 13 00:18:46.566504 update_engine[2002]: I20250813 00:18:46.564724 2002 main.cc:92] Flatcar Update Engine starting
Aug 13 00:18:46.570740 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 00:18:46.584679 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 00:18:46.584732 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 00:18:46.587636 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 00:18:46.587672 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 00:18:46.592486 jq[2003]: true
Aug 13 00:18:46.595319 ntpd[1995]: Listen and drop on 0 v6wildcard [::]:123
Aug 13 00:18:46.595398 ntpd[1995]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 13 00:18:46.598457 ntpd[1995]: Listen normally on 2 lo 127.0.0.1:123
Aug 13 00:18:46.600358 ntpd[1995]: Listen normally on 3 eth0 172.31.18.147:123
Aug 13 00:18:46.600433 ntpd[1995]: Listen normally on 4 lo [::1]:123
Aug 13 00:18:46.600523 ntpd[1995]: bind(21) AF_INET6 fe80::425:44ff:fed3:363b%2#123 flags 0x11 failed: Cannot assign requested address
Aug 13 00:18:46.600567 ntpd[1995]: unable to create socket on eth0 (5) for fe80::425:44ff:fed3:363b%2#123
Aug 13 00:18:46.600598 ntpd[1995]: failed to init interface for address fe80::425:44ff:fed3:363b%2
Aug 13 00:18:46.600663 ntpd[1995]: Listening on routing socket on fd #21 for interface updates
Aug 13 00:18:46.606678 dbus-daemon[1989]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1928 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 13 00:18:46.607539 (ntainerd)[2016]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 00:18:46.613486 dbus-daemon[1989]: [system] Successfully activated service 'org.freedesktop.systemd1'
Aug 13 00:18:46.619272 ntpd[1995]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 13 00:18:46.619555 ntpd[1995]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 13 00:18:46.654036 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 13 00:18:46.656988 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 00:18:46.659628 update_engine[2002]: I20250813 00:18:46.655460 2002 update_check_scheduler.cc:74] Next update check in 4m5s
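ntpd's "bind(21) AF_INET6 fe80::425:44ff:fed3:363b%2#123 ... Cannot assign requested address" lines above are a common startup race: the IPv6 link-local address on eth0 is not yet usable (for example, still tentative under duplicate address detection) when ntpd tries to bind it, so ntpd watches the routing socket and retries; the bind(24) attempt a second later fails the same way. A small sketch of the same bind call from Python — the interface name and port are placeholders, and a link-local bind also needs the interface scope id:

```python
# Sketch: why binding an fe80::... address can fail with EADDRNOTAVAIL,
# and what a correct IPv6 link-local bind looks like.
import socket

addr = "fe80::425:44ff:fed3:363b"          # the address from the log
scope = socket.if_nametoindex("eth0")       # link-local needs an interface scope

s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
try:
    # bind takes (host, port, flowinfo, scope_id); port 12123 avoids needing
    # root for the privileged NTP port 123
    s.bind((addr, 12123, 0, scope))
except OSError as e:
    # EADDRNOTAVAIL ("Cannot assign requested address") while the address is
    # tentative or not yet assigned -- the same error ntpd logged above
    print("bind failed:", e)
else:
    print("bound on", s.getsockname())
finally:
    s.close()
```

The "Listening on routing socket on fd #21 for interface updates" line is the retry mechanism: once the address becomes usable, ntpd can bind it without a restart.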
Aug 13 00:18:46.665888 tar[2006]: linux-arm64/LICENSE Aug 13 00:18:46.666606 tar[2006]: linux-arm64/helm Aug 13 00:18:46.686455 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:18:46.751765 extend-filesystems[1991]: Found loop4 Aug 13 00:18:46.755118 extend-filesystems[1991]: Found loop5 Aug 13 00:18:46.755118 extend-filesystems[1991]: Found loop6 Aug 13 00:18:46.755118 extend-filesystems[1991]: Found loop7 Aug 13 00:18:46.755118 extend-filesystems[1991]: Found nvme0n1 Aug 13 00:18:46.755118 extend-filesystems[1991]: Found nvme0n1p1 Aug 13 00:18:46.755118 extend-filesystems[1991]: Found nvme0n1p2 Aug 13 00:18:46.755118 extend-filesystems[1991]: Found nvme0n1p3 Aug 13 00:18:46.755118 extend-filesystems[1991]: Found usr Aug 13 00:18:46.755118 extend-filesystems[1991]: Found nvme0n1p4 Aug 13 00:18:46.755118 extend-filesystems[1991]: Found nvme0n1p6 Aug 13 00:18:46.755118 extend-filesystems[1991]: Found nvme0n1p7 Aug 13 00:18:46.755118 extend-filesystems[1991]: Found nvme0n1p9 Aug 13 00:18:46.755118 extend-filesystems[1991]: Checking size of /dev/nvme0n1p9 Aug 13 00:18:46.796209 jq[2020]: true Aug 13 00:18:46.826205 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:18:46.826559 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:18:46.849273 systemd[1]: Finished setup-oem.service - Setup OEM. Aug 13 00:18:46.862022 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 00:18:46.864917 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:18:46.872004 extend-filesystems[1991]: Resized partition /dev/nvme0n1p9 Aug 13 00:18:46.879787 extend-filesystems[2050]: resize2fs 1.47.1 (20-May-2024) Aug 13 00:18:46.897739 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Aug 13 00:18:46.957143 systemd-logind[2000]: Watching system buttons on /dev/input/event0 (Power Button) Aug 13 00:18:46.960845 systemd-logind[2000]: Watching system buttons on /dev/input/event1 (Sleep Button) Aug 13 00:18:46.964440 systemd-logind[2000]: New seat seat0. Aug 13 00:18:46.974146 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:18:47.031216 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Aug 13 00:18:47.052730 extend-filesystems[2050]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Aug 13 00:18:47.052730 extend-filesystems[2050]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:18:47.052730 extend-filesystems[2050]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Aug 13 00:18:47.062410 extend-filesystems[1991]: Resized filesystem in /dev/nvme0n1p9 Aug 13 00:18:47.065124 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:18:47.067252 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:18:47.083899 dbus-daemon[1989]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 00:18:47.084446 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 00:18:47.087324 dbus-daemon[1989]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2026 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 00:18:47.109118 bash[2078]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:18:47.133903 systemd[1]: Starting polkit.service - Authorization Manager... 
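The extend-filesystems run above ends with an online grow of the root filesystem: the EXT4-fs kernel lines report /dev/nvme0n1p9 resized from 553472 to 1489915 4k blocks while mounted at /. The same operation can be driven manually with resize2fs, which grows a mounted ext4 filesystem to fill its partition when called without an explicit size. A hedged sketch (device path taken from the log; requires root):

```python
# Sketch: online-grow a mounted ext4 filesystem to fill its partition,
# as extend-filesystems.service did for /dev/nvme0n1p9 in the log above.
import subprocess

DEVICE = "/dev/nvme0n1p9"  # from the log; adjust for other disk layouts

# resize2fs with no explicit size grows the filesystem to the partition size;
# for a mounted ext4 filesystem the kernel performs the resize online.
subprocess.run(["resize2fs", DEVICE], check=True)

# Verify the new geometry ("Block count:" and "Block size:" in tune2fs -l).
out = subprocess.run(["tune2fs", "-l", DEVICE], check=True,
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if line.startswith(("Block count", "Block size")):
        print(line)
```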
Aug 13 00:18:47.141298 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:18:47.161296 systemd[1]: Starting sshkeys.service... Aug 13 00:18:47.185956 polkitd[2080]: Started polkitd version 121 Aug 13 00:18:47.200215 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (1787) Aug 13 00:18:47.208870 polkitd[2080]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 00:18:47.209005 polkitd[2080]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 00:18:47.220431 polkitd[2080]: Finished loading, compiling and executing 2 rules Aug 13 00:18:47.221769 dbus-daemon[1989]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 00:18:47.222140 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 00:18:47.234061 polkitd[2080]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 00:18:47.300528 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 00:18:47.302817 systemd-hostnamed[2026]: Hostname set to (transient) Aug 13 00:18:47.305541 systemd-resolved[1929]: System hostname changed to 'ip-172-31-18-147'. Aug 13 00:18:47.344878 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 00:18:47.394648 locksmithd[2029]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:18:47.436564 containerd[2016]: time="2025-08-13T00:18:47.436423653Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 00:18:47.539198 containerd[2016]: time="2025-08-13T00:18:47.535821525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:47.539998 ntpd[1995]: bind(24) AF_INET6 fe80::425:44ff:fed3:363b%2#123 flags 0x11 failed: Cannot assign requested address Aug 13 00:18:47.540065 ntpd[1995]: unable to create socket on eth0 (6) for fe80::425:44ff:fed3:363b%2#123 Aug 13 00:18:47.540094 ntpd[1995]: failed to init interface for address fe80::425:44ff:fed3:363b%2 Aug 13 00:18:47.544642 containerd[2016]: time="2025-08-13T00:18:47.544572549Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:18:47.544836 containerd[2016]: time="2025-08-13T00:18:47.544805817Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:18:47.544947 containerd[2016]: time="2025-08-13T00:18:47.544918953Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:18:47.545375 containerd[2016]: time="2025-08-13T00:18:47.545341485Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Aug 13 00:18:47.548200 containerd[2016]: time="2025-08-13T00:18:47.547220829Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:47.548200 containerd[2016]: time="2025-08-13T00:18:47.547461117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:18:47.548200 containerd[2016]: time="2025-08-13T00:18:47.547495125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:47.548200 containerd[2016]: time="2025-08-13T00:18:47.547822485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:18:47.548200 containerd[2016]: time="2025-08-13T00:18:47.547855845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:47.548200 containerd[2016]: time="2025-08-13T00:18:47.547886961Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:18:47.548200 containerd[2016]: time="2025-08-13T00:18:47.547912413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:47.548200 containerd[2016]: time="2025-08-13T00:18:47.548072157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:47.551207 containerd[2016]: time="2025-08-13T00:18:47.550552425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:18:47.551207 containerd[2016]: time="2025-08-13T00:18:47.550825785Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:18:47.551207 containerd[2016]: time="2025-08-13T00:18:47.550858653Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:18:47.551207 containerd[2016]: time="2025-08-13T00:18:47.551027193Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:18:47.551207 containerd[2016]: time="2025-08-13T00:18:47.551123505Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:18:47.562067 containerd[2016]: time="2025-08-13T00:18:47.561125973Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:18:47.562067 containerd[2016]: time="2025-08-13T00:18:47.561411849Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:18:47.562067 containerd[2016]: time="2025-08-13T00:18:47.561549141Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 00:18:47.562067 containerd[2016]: time="2025-08-13T00:18:47.561590253Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Aug 13 00:18:47.562067 containerd[2016]: time="2025-08-13T00:18:47.561653193Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:18:47.562577 containerd[2016]: time="2025-08-13T00:18:47.562448265Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:18:47.564922 containerd[2016]: time="2025-08-13T00:18:47.564396405Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:18:47.564922 containerd[2016]: time="2025-08-13T00:18:47.564671205Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 00:18:47.564922 containerd[2016]: time="2025-08-13T00:18:47.564706701Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 00:18:47.564922 containerd[2016]: time="2025-08-13T00:18:47.564740841Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 00:18:47.564922 containerd[2016]: time="2025-08-13T00:18:47.564773613Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:18:47.564922 containerd[2016]: time="2025-08-13T00:18:47.564803889Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:18:47.564922 containerd[2016]: time="2025-08-13T00:18:47.564833433Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:18:47.564922 containerd[2016]: time="2025-08-13T00:18:47.564865077Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:18:47.568194 containerd[2016]: time="2025-08-13T00:18:47.567240225Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 00:18:47.568194 containerd[2016]: time="2025-08-13T00:18:47.567295257Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:18:47.568194 containerd[2016]: time="2025-08-13T00:18:47.567334077Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:18:47.568194 containerd[2016]: time="2025-08-13T00:18:47.567366393Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:18:47.568194 containerd[2016]: time="2025-08-13T00:18:47.567410133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:18:47.568194 containerd[2016]: time="2025-08-13T00:18:47.567443349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:18:47.568194 containerd[2016]: time="2025-08-13T00:18:47.567490965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:18:47.568194 containerd[2016]: time="2025-08-13T00:18:47.567523089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:18:47.568194 containerd[2016]: time="2025-08-13T00:18:47.567553893Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Aug 13 00:18:47.568194 containerd[2016]: time="2025-08-13T00:18:47.567586821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:18:47.568194 containerd[2016]: time="2025-08-13T00:18:47.567618645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:18:47.568194 containerd[2016]: time="2025-08-13T00:18:47.567651201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:18:47.568194 containerd[2016]: time="2025-08-13T00:18:47.567682917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 00:18:47.568194 containerd[2016]: time="2025-08-13T00:18:47.567727137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 00:18:47.570559 containerd[2016]: time="2025-08-13T00:18:47.567761661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:18:47.570559 containerd[2016]: time="2025-08-13T00:18:47.567792381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 00:18:47.570559 containerd[2016]: time="2025-08-13T00:18:47.567831273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:18:47.570559 containerd[2016]: time="2025-08-13T00:18:47.567873969Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 00:18:47.570559 containerd[2016]: time="2025-08-13T00:18:47.567920997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 00:18:47.570559 containerd[2016]: time="2025-08-13T00:18:47.567957633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 00:18:47.570559 containerd[2016]: time="2025-08-13T00:18:47.567985545Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:18:47.574223 containerd[2016]: time="2025-08-13T00:18:47.570904161Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:18:47.574223 containerd[2016]: time="2025-08-13T00:18:47.571282149Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 00:18:47.574223 containerd[2016]: time="2025-08-13T00:18:47.571316565Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:18:47.574223 containerd[2016]: time="2025-08-13T00:18:47.571348605Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 00:18:47.574223 containerd[2016]: time="2025-08-13T00:18:47.571375257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:18:47.574223 containerd[2016]: time="2025-08-13T00:18:47.571409049Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 00:18:47.574223 containerd[2016]: time="2025-08-13T00:18:47.571435293Z" level=info msg="NRI interface is disabled by configuration." 
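In the plugin walk above, containerd skips every snapshotter whose backing store is unavailable (no aufs module, /var/lib/containerd sits on ext4 rather than btrfs or zfs, devmapper unconfigured) and falls back to overlayfs. A rough sketch of that kind of filesystem probe, done by longest-prefix match over /proc/self/mounts (illustrative only, not containerd's actual code):

    def fstype(path: str = "/var/lib/containerd") -> str:
        """Filesystem type backing `path`, via the mount table."""
        best_type, best_len = "unknown", -1
        with open("/proc/self/mounts") as mounts:
            for line in mounts:
                _dev, mountpoint, fs, *_rest = line.split()
                if path.startswith(mountpoint) and len(mountpoint) > best_len:
                    best_type, best_len = fs, len(mountpoint)
        return best_type

    # Here fstype() would return "ext4", so the btrfs and zfs snapshotters
    # are skipped and overlayfs is used.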
Aug 13 00:18:47.574223 containerd[2016]: time="2025-08-13T00:18:47.571462545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 00:18:47.574754 containerd[2016]: time="2025-08-13T00:18:47.572143449Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:18:47.574754 containerd[2016]: time="2025-08-13T00:18:47.572322849Z" level=info msg="Connect containerd service" Aug 13 00:18:47.574754 containerd[2016]: time="2025-08-13T00:18:47.572392905Z" level=info msg="using legacy CRI server" Aug 13 00:18:47.574754 containerd[2016]: time="2025-08-13T00:18:47.572411865Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:18:47.574754 containerd[2016]: time="2025-08-13T00:18:47.572570373Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:18:47.581184 containerd[2016]: time="2025-08-13T00:18:47.579920901Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:18:47.581184 containerd[2016]: time="2025-08-13T00:18:47.580308753Z" level=info msg="Start subscribing containerd event" Aug 13 00:18:47.581184 containerd[2016]: time="2025-08-13T00:18:47.580397361Z" level=info msg="Start recovering state" Aug 13 00:18:47.581184 containerd[2016]: time="2025-08-13T00:18:47.580523901Z" level=info msg="Start event monitor" Aug 13 00:18:47.581184 containerd[2016]: time="2025-08-13T00:18:47.580547241Z" level=info msg="Start snapshots syncer" Aug 13 00:18:47.581184 containerd[2016]: time="2025-08-13T00:18:47.580568229Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:18:47.581184 containerd[2016]: time="2025-08-13T00:18:47.580586289Z" level=info msg="Start streaming server" Aug 13 00:18:47.585404 containerd[2016]: time="2025-08-13T00:18:47.584398041Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:18:47.585404 containerd[2016]: time="2025-08-13T00:18:47.584522397Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:18:47.585404 containerd[2016]: time="2025-08-13T00:18:47.584650233Z" level=info msg="containerd successfully booted in 0.152403s" Aug 13 00:18:47.584782 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:18:47.650778 coreos-metadata[2108]: Aug 13 00:18:47.650 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 13 00:18:47.657356 coreos-metadata[2108]: Aug 13 00:18:47.657 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Aug 13 00:18:47.659628 coreos-metadata[2108]: Aug 13 00:18:47.659 INFO Fetch successful Aug 13 00:18:47.659628 coreos-metadata[2108]: Aug 13 00:18:47.659 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 13 00:18:47.661009 coreos-metadata[2108]: Aug 13 00:18:47.660 INFO Fetch successful Aug 13 00:18:47.667800 unknown[2108]: wrote ssh authorized keys file for user: core Aug 13 00:18:47.721387 update-ssh-keys[2179]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:18:47.728243 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 00:18:47.740640 systemd[1]: Finished sshkeys.service. Aug 13 00:18:47.777264 systemd-networkd[1928]: eth0: Gained IPv6LL Aug 13 00:18:47.792738 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:18:47.797914 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 00:18:47.811680 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Aug 13 00:18:47.828480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:18:47.839317 sshd_keygen[2039]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:18:47.836099 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:18:47.989543 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:18:48.006516 amazon-ssm-agent[2184]: Initializing new seelog logger Aug 13 00:18:48.010701 amazon-ssm-agent[2184]: New Seelog Logger Creation Complete Aug 13 00:18:48.010701 amazon-ssm-agent[2184]: 2025/08/13 00:18:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:48.010701 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Aug 13 00:18:48.010701 amazon-ssm-agent[2184]: 2025/08/13 00:18:48 processing appconfig overrides Aug 13 00:18:48.011224 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:18:48.017266 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:18:48.022886 amazon-ssm-agent[2184]: 2025/08/13 00:18:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:48.023631 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:48.023631 amazon-ssm-agent[2184]: 2025-08-13 00:18:48 INFO Proxy environment variables: Aug 13 00:18:48.023779 amazon-ssm-agent[2184]: 2025/08/13 00:18:48 processing appconfig overrides Aug 13 00:18:48.025217 amazon-ssm-agent[2184]: 2025/08/13 00:18:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:48.025217 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:48.027841 amazon-ssm-agent[2184]: 2025/08/13 00:18:48 processing appconfig overrides Aug 13 00:18:48.035048 amazon-ssm-agent[2184]: 2025/08/13 00:18:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:48.035048 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 13 00:18:48.035687 amazon-ssm-agent[2184]: 2025/08/13 00:18:48 processing appconfig overrides Aug 13 00:18:48.045919 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:18:48.046910 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:18:48.062605 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:18:48.110491 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:18:48.124980 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:18:48.143800 amazon-ssm-agent[2184]: 2025-08-13 00:18:48 INFO https_proxy: Aug 13 00:18:48.142833 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 00:18:48.145724 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:18:48.238063 amazon-ssm-agent[2184]: 2025-08-13 00:18:48 INFO http_proxy: Aug 13 00:18:48.336039 amazon-ssm-agent[2184]: 2025-08-13 00:18:48 INFO no_proxy: Aug 13 00:18:48.434912 amazon-ssm-agent[2184]: 2025-08-13 00:18:48 INFO Checking if agent identity type OnPrem can be assumed Aug 13 00:18:48.533053 amazon-ssm-agent[2184]: 2025-08-13 00:18:48 INFO Checking if agent identity type EC2 can be assumed Aug 13 00:18:48.601772 tar[2006]: linux-arm64/README.md Aug 13 00:18:48.632044 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:18:48.636622 amazon-ssm-agent[2184]: 2025-08-13 00:18:48 INFO Agent will take identity from EC2 Aug 13 00:18:48.735386 amazon-ssm-agent[2184]: 2025-08-13 00:18:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 00:18:48.834732 amazon-ssm-agent[2184]: 2025-08-13 00:18:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 00:18:48.934034 amazon-ssm-agent[2184]: 2025-08-13 00:18:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 13 00:18:48.949086 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:18:48.961397 systemd[1]: Started sshd@0-172.31.18.147:22-139.178.89.65:34460.service - OpenSSH per-connection server daemon (139.178.89.65:34460). 
Aug 13 00:18:49.035325 amazon-ssm-agent[2184]: 2025-08-13 00:18:48 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Aug 13 00:18:49.135258 amazon-ssm-agent[2184]: 2025-08-13 00:18:48 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Aug 13 00:18:49.181313 sshd[2233]: Accepted publickey for core from 139.178.89.65 port 34460 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:49.184879 sshd[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:49.189716 amazon-ssm-agent[2184]: 2025-08-13 00:18:48 INFO [amazon-ssm-agent] Starting Core Agent Aug 13 00:18:49.189716 amazon-ssm-agent[2184]: 2025-08-13 00:18:48 INFO [amazon-ssm-agent] registrar detected. Attempting registration Aug 13 00:18:49.189716 amazon-ssm-agent[2184]: 2025-08-13 00:18:48 INFO [Registrar] Starting registrar module Aug 13 00:18:49.189716 amazon-ssm-agent[2184]: 2025-08-13 00:18:48 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Aug 13 00:18:49.190568 amazon-ssm-agent[2184]: 2025-08-13 00:18:49 INFO [EC2Identity] EC2 registration was successful. Aug 13 00:18:49.190568 amazon-ssm-agent[2184]: 2025-08-13 00:18:49 INFO [CredentialRefresher] credentialRefresher has started Aug 13 00:18:49.190568 amazon-ssm-agent[2184]: 2025-08-13 00:18:49 INFO [CredentialRefresher] Starting credentials refresher loop Aug 13 00:18:49.190568 amazon-ssm-agent[2184]: 2025-08-13 00:18:49 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 13 00:18:49.209182 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:18:49.218726 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:18:49.230017 systemd-logind[2000]: New session 1 of user core. Aug 13 00:18:49.235681 amazon-ssm-agent[2184]: 2025-08-13 00:18:49 INFO [CredentialRefresher] Next credential rotation will be in 30.6749893195 minutes Aug 13 00:18:49.256299 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 00:18:49.271824 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:18:49.284466 (systemd)[2237]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:18:49.517820 systemd[2237]: Queued start job for default target default.target. Aug 13 00:18:49.527509 systemd[2237]: Created slice app.slice - User Application Slice. Aug 13 00:18:49.527727 systemd[2237]: Reached target paths.target - Paths. Aug 13 00:18:49.527766 systemd[2237]: Reached target timers.target - Timers. Aug 13 00:18:49.532492 systemd[2237]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:18:49.566618 systemd[2237]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:18:49.566874 systemd[2237]: Reached target sockets.target - Sockets. Aug 13 00:18:49.566907 systemd[2237]: Reached target basic.target - Basic System. Aug 13 00:18:49.567004 systemd[2237]: Reached target default.target - Main User Target. Aug 13 00:18:49.567068 systemd[2237]: Startup finished in 270ms. Aug 13 00:18:49.567290 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:18:49.578463 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:18:49.744747 systemd[1]: Started sshd@1-172.31.18.147:22-139.178.89.65:43242.service - OpenSSH per-connection server daemon (139.178.89.65:43242). 
Aug 13 00:18:49.935826 sshd[2248]: Accepted publickey for core from 139.178.89.65 port 43242 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:49.939070 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:49.948495 systemd-logind[2000]: New session 2 of user core. Aug 13 00:18:49.955507 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:18:50.092557 sshd[2248]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:50.100947 systemd[1]: sshd@1-172.31.18.147:22-139.178.89.65:43242.service: Deactivated successfully. Aug 13 00:18:50.105152 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:18:50.106819 systemd-logind[2000]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:18:50.109546 systemd-logind[2000]: Removed session 2. Aug 13 00:18:50.130742 systemd[1]: Started sshd@2-172.31.18.147:22-139.178.89.65:43254.service - OpenSSH per-connection server daemon (139.178.89.65:43254). Aug 13 00:18:50.211530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:18:50.213380 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:18:50.215102 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:18:50.222328 systemd[1]: Startup finished in 1.186s (kernel) + 7.559s (initrd) + 8.899s (userspace) = 17.646s. Aug 13 00:18:50.251701 amazon-ssm-agent[2184]: 2025-08-13 00:18:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Aug 13 00:18:50.337444 sshd[2255]: Accepted publickey for core from 139.178.89.65 port 43254 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:18:50.340758 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:18:50.354060 amazon-ssm-agent[2184]: 2025-08-13 00:18:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2266) started Aug 13 00:18:50.356274 systemd-logind[2000]: New session 3 of user core. Aug 13 00:18:50.363490 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:18:50.452625 amazon-ssm-agent[2184]: 2025-08-13 00:18:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Aug 13 00:18:50.493735 sshd[2255]: pam_unix(sshd:session): session closed for user core Aug 13 00:18:50.504807 systemd-logind[2000]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:18:50.506286 systemd[1]: sshd@2-172.31.18.147:22-139.178.89.65:43254.service: Deactivated successfully. Aug 13 00:18:50.509626 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:18:50.514196 systemd-logind[2000]: Removed session 3. 
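The "SHA256:5ZP49ylZ..." value in each Accepted publickey line above is the standard OpenSSH fingerprint: unpadded base64 of the SHA-256 digest of the raw key blob (the base64 field of an authorized_keys line). A small helper that reproduces it:

    import base64
    import hashlib

    def openssh_fingerprint(key_b64: str) -> str:
        """SHA256 fingerprint in sshd's format, from the base64 key field
        of an authorized_keys / id_*.pub line."""
        digest = hashlib.sha256(base64.b64decode(key_b64)).digest()
        return "SHA256:" + base64.b64encode(digest).rstrip(b"=").decode()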
Aug 13 00:18:50.539859 ntpd[1995]: Listen normally on 7 eth0 [fe80::425:44ff:fed3:363b%2]:123 Aug 13 00:18:51.474411 kubelet[2263]: E0813 00:18:51.474315 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:18:51.478244 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:18:51.478554 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:18:51.479620 systemd[1]: kubelet.service: Consumed 1.390s CPU time. Aug 13 00:18:53.171420 systemd-resolved[1929]: Clock change detected. Flushing caches. Aug 13 00:19:00.168093 systemd[1]: Started sshd@3-172.31.18.147:22-139.178.89.65:41462.service - OpenSSH per-connection server daemon (139.178.89.65:41462). Aug 13 00:19:00.332923 sshd[2289]: Accepted publickey for core from 139.178.89.65 port 41462 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:19:00.335842 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:19:00.345205 systemd-logind[2000]: New session 4 of user core. Aug 13 00:19:00.355021 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:19:00.484580 sshd[2289]: pam_unix(sshd:session): session closed for user core Aug 13 00:19:00.490943 systemd[1]: sshd@3-172.31.18.147:22-139.178.89.65:41462.service: Deactivated successfully. Aug 13 00:19:00.495082 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:19:00.496615 systemd-logind[2000]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:19:00.498352 systemd-logind[2000]: Removed session 4. Aug 13 00:19:00.524182 systemd[1]: Started sshd@4-172.31.18.147:22-139.178.89.65:41478.service - OpenSSH per-connection server daemon (139.178.89.65:41478). Aug 13 00:19:00.711118 sshd[2296]: Accepted publickey for core from 139.178.89.65 port 41478 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:19:00.713781 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:19:00.723113 systemd-logind[2000]: New session 5 of user core. Aug 13 00:19:00.733000 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:19:00.854567 sshd[2296]: pam_unix(sshd:session): session closed for user core Aug 13 00:19:00.861339 systemd[1]: sshd@4-172.31.18.147:22-139.178.89.65:41478.service: Deactivated successfully. Aug 13 00:19:00.865071 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:19:00.866781 systemd-logind[2000]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:19:00.868939 systemd-logind[2000]: Removed session 5. Aug 13 00:19:00.896135 systemd[1]: Started sshd@5-172.31.18.147:22-139.178.89.65:41490.service - OpenSSH per-connection server daemon (139.178.89.65:41490). Aug 13 00:19:01.060509 sshd[2303]: Accepted publickey for core from 139.178.89.65 port 41490 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:19:01.063301 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:19:01.072764 systemd-logind[2000]: New session 6 of user core. 
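The kubelet failure above, and its repeats further down, are expected on a node that has not been joined to a cluster yet: /var/lib/kubelet/config.yaml is normally written by kubeadm (or other provisioning) during init/join, and until it exists kubelet exits at startup and systemd keeps rescheduling it, which is the rising restart counter in the log. A sketch of the check it is failing:

    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")

    # Until provisioning writes this file, kubelet aborts here; systemd then
    # records status=1/FAILURE and schedules the next restart attempt.
    if not CONFIG.is_file():
        raise SystemExit(f"failed to load kubelet config file, path: {CONFIG}")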
Aug 13 00:19:01.082991 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:19:01.190532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:19:01.204024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:19:01.217183 sshd[2303]: pam_unix(sshd:session): session closed for user core Aug 13 00:19:01.225627 systemd[1]: sshd@5-172.31.18.147:22-139.178.89.65:41490.service: Deactivated successfully. Aug 13 00:19:01.233004 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:19:01.236105 systemd-logind[2000]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:19:01.259408 systemd[1]: Started sshd@6-172.31.18.147:22-139.178.89.65:41492.service - OpenSSH per-connection server daemon (139.178.89.65:41492). Aug 13 00:19:01.263930 systemd-logind[2000]: Removed session 6. Aug 13 00:19:01.452078 sshd[2313]: Accepted publickey for core from 139.178.89.65 port 41492 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:19:01.457077 sshd[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:19:01.465844 systemd-logind[2000]: New session 7 of user core. Aug 13 00:19:01.473966 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:19:01.561750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:01.577326 (kubelet)[2321]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:19:01.607220 sudo[2322]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:19:01.608092 sudo[2322]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:19:01.629570 sudo[2322]: pam_unix(sudo:session): session closed for user root Aug 13 00:19:01.654868 sshd[2313]: pam_unix(sshd:session): session closed for user core Aug 13 00:19:01.666574 systemd[1]: sshd@6-172.31.18.147:22-139.178.89.65:41492.service: Deactivated successfully. Aug 13 00:19:01.669488 kubelet[2321]: E0813 00:19:01.669410 2321 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:19:01.672880 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:19:01.677142 systemd-logind[2000]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:19:01.686552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:19:01.686922 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:19:01.693309 systemd[1]: Started sshd@7-172.31.18.147:22-139.178.89.65:41502.service - OpenSSH per-connection server daemon (139.178.89.65:41502). Aug 13 00:19:01.695187 systemd-logind[2000]: Removed session 7. Aug 13 00:19:01.879681 sshd[2333]: Accepted publickey for core from 139.178.89.65 port 41502 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:19:01.882550 sshd[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:19:01.892890 systemd-logind[2000]: New session 8 of user core. Aug 13 00:19:01.900025 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 13 00:19:02.004424 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:19:02.005108 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:19:02.011451 sudo[2337]: pam_unix(sudo:session): session closed for user root Aug 13 00:19:02.022847 sudo[2336]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 00:19:02.024210 sudo[2336]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:19:02.046307 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 00:19:02.061771 auditctl[2340]: No rules Aug 13 00:19:02.062623 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 00:19:02.063094 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 00:19:02.071414 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:19:02.125535 augenrules[2358]: No rules Aug 13 00:19:02.129771 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:19:02.132901 sudo[2336]: pam_unix(sudo:session): session closed for user root Aug 13 00:19:02.156501 sshd[2333]: pam_unix(sshd:session): session closed for user core Aug 13 00:19:02.162142 systemd-logind[2000]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:19:02.162419 systemd[1]: sshd@7-172.31.18.147:22-139.178.89.65:41502.service: Deactivated successfully. Aug 13 00:19:02.165402 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 00:19:02.169081 systemd-logind[2000]: Removed session 8. Aug 13 00:19:02.198166 systemd[1]: Started sshd@8-172.31.18.147:22-139.178.89.65:41510.service - OpenSSH per-connection server daemon (139.178.89.65:41510). Aug 13 00:19:02.378502 sshd[2366]: Accepted publickey for core from 139.178.89.65 port 41510 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:19:02.381109 sshd[2366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:19:02.388898 systemd-logind[2000]: New session 9 of user core. Aug 13 00:19:02.398905 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:19:02.503529 sudo[2369]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:19:02.504214 sudo[2369]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:19:03.023170 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:19:03.035312 (dockerd)[2385]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:19:03.470699 dockerd[2385]: time="2025-08-13T00:19:03.470443867Z" level=info msg="Starting up" Aug 13 00:19:03.620554 dockerd[2385]: time="2025-08-13T00:19:03.620491975Z" level=info msg="Loading containers: start." Aug 13 00:19:03.794695 kernel: Initializing XFRM netlink socket Aug 13 00:19:03.830111 (udev-worker)[2408]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:19:03.921429 systemd-networkd[1928]: docker0: Link UP Aug 13 00:19:03.952566 dockerd[2385]: time="2025-08-13T00:19:03.952497537Z" level=info msg="Loading containers: done." Aug 13 00:19:03.979325 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2718661332-merged.mount: Deactivated successfully. 
Aug 13 00:19:03.987373 dockerd[2385]: time="2025-08-13T00:19:03.987300477Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:19:03.987552 dockerd[2385]: time="2025-08-13T00:19:03.987457593Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 00:19:03.987795 dockerd[2385]: time="2025-08-13T00:19:03.987737973Z" level=info msg="Daemon has completed initialization" Aug 13 00:19:04.061590 dockerd[2385]: time="2025-08-13T00:19:04.060380610Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:19:04.060855 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:19:05.286989 containerd[2016]: time="2025-08-13T00:19:05.286287776Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 00:19:05.949933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3008585221.mount: Deactivated successfully. Aug 13 00:19:07.507078 containerd[2016]: time="2025-08-13T00:19:07.507016031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:07.510112 containerd[2016]: time="2025-08-13T00:19:07.510051443Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=27352094" Aug 13 00:19:07.512543 containerd[2016]: time="2025-08-13T00:19:07.512478491Z" level=info msg="ImageCreate event name:\"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:07.518995 containerd[2016]: time="2025-08-13T00:19:07.518918519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:07.521768 containerd[2016]: time="2025-08-13T00:19:07.521688359Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"27348894\" in 2.235327551s" Aug 13 00:19:07.521768 containerd[2016]: time="2025-08-13T00:19:07.521762687Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:c0425f3fe3fbf33c17a14d49c43d4fd0b60b2254511902d5b2c29e53ca684fc9\"" Aug 13 00:19:07.524869 containerd[2016]: time="2025-08-13T00:19:07.524806883Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 00:19:09.105361 containerd[2016]: time="2025-08-13T00:19:09.105298031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:09.108032 containerd[2016]: time="2025-08-13T00:19:09.107965523Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=23537846" Aug 13 00:19:09.109526 containerd[2016]: time="2025-08-13T00:19:09.109425227Z" level=info msg="ImageCreate event name:\"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:09.116269 containerd[2016]: time="2025-08-13T00:19:09.116164571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:09.118894 containerd[2016]: time="2025-08-13T00:19:09.118829111Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"25092764\" in 1.593775724s" Aug 13 00:19:09.119259 containerd[2016]: time="2025-08-13T00:19:09.119095643Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:ef439b94d49d41d1b377c316fb053adb88bf6b26ec7e63aaf3deba953b7c766f\"" Aug 13 00:19:09.120630 containerd[2016]: time="2025-08-13T00:19:09.120315095Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Aug 13 00:19:10.573482 containerd[2016]: time="2025-08-13T00:19:10.573407282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:10.575742 containerd[2016]: time="2025-08-13T00:19:10.575614982Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=18293524" Aug 13 00:19:10.576700 containerd[2016]: time="2025-08-13T00:19:10.576325022Z" level=info msg="ImageCreate event name:\"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:10.582707 containerd[2016]: time="2025-08-13T00:19:10.582566030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:10.585334 containerd[2016]: time="2025-08-13T00:19:10.585266978Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"19848460\" in 1.464847471s" Aug 13 00:19:10.585705 containerd[2016]: time="2025-08-13T00:19:10.585517586Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:c03972dff86ba78247043f2b6171ce436ab9323da7833b18924c3d8e29ea37a5\"" Aug 13 00:19:10.586303 containerd[2016]: time="2025-08-13T00:19:10.586228790Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 00:19:11.719967 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 00:19:11.731860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:19:12.118887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4168691946.mount: Deactivated successfully. Aug 13 00:19:12.140080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 00:19:12.153513 (kubelet)[2600]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:19:12.250258 kubelet[2600]: E0813 00:19:12.249552 2600 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:19:12.258450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:19:12.258893 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:19:12.810270 containerd[2016]: time="2025-08-13T00:19:12.810193337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:12.816465 containerd[2016]: time="2025-08-13T00:19:12.816112541Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=28199472" Aug 13 00:19:12.821297 containerd[2016]: time="2025-08-13T00:19:12.821206157Z" level=info msg="ImageCreate event name:\"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:12.827577 containerd[2016]: time="2025-08-13T00:19:12.827258297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:12.829091 containerd[2016]: time="2025-08-13T00:19:12.828842537Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"28198491\" in 2.242522223s" Aug 13 00:19:12.829091 containerd[2016]: time="2025-08-13T00:19:12.828915497Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:738e99dbd7325e2cdd650d83d59a79c7ecb005ab0d5bf029fc15c54ee9359306\"" Aug 13 00:19:12.831295 containerd[2016]: time="2025-08-13T00:19:12.831238001Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 00:19:13.644009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2730064512.mount: Deactivated successfully. 
Aug 13 00:19:14.938736 containerd[2016]: time="2025-08-13T00:19:14.938359400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:14.941099 containerd[2016]: time="2025-08-13T00:19:14.941000444Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Aug 13 00:19:14.943470 containerd[2016]: time="2025-08-13T00:19:14.943352276Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:14.950254 containerd[2016]: time="2025-08-13T00:19:14.950160536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:14.952826 containerd[2016]: time="2025-08-13T00:19:14.952590212Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 2.121284915s" Aug 13 00:19:14.952826 containerd[2016]: time="2025-08-13T00:19:14.952673300Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Aug 13 00:19:14.954554 containerd[2016]: time="2025-08-13T00:19:14.954272828Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:19:15.487406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4031927666.mount: Deactivated successfully. 
Aug 13 00:19:15.500795 containerd[2016]: time="2025-08-13T00:19:15.500319702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:15.502334 containerd[2016]: time="2025-08-13T00:19:15.502261638Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Aug 13 00:19:15.504813 containerd[2016]: time="2025-08-13T00:19:15.504737718Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:15.514674 containerd[2016]: time="2025-08-13T00:19:15.514576206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:15.517257 containerd[2016]: time="2025-08-13T00:19:15.517185270Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 562.853114ms" Aug 13 00:19:15.517257 containerd[2016]: time="2025-08-13T00:19:15.517252158Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 00:19:15.518530 containerd[2016]: time="2025-08-13T00:19:15.518462250Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 00:19:16.094751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4252309982.mount: Deactivated successfully. Aug 13 00:19:16.967799 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Aug 13 00:19:18.344322 containerd[2016]: time="2025-08-13T00:19:18.344263029Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:18.347240 containerd[2016]: time="2025-08-13T00:19:18.347176305Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334599" Aug 13 00:19:18.349132 containerd[2016]: time="2025-08-13T00:19:18.349050297Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:18.355753 containerd[2016]: time="2025-08-13T00:19:18.355618881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:18.358279 containerd[2016]: time="2025-08-13T00:19:18.358223145Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.839494914s" Aug 13 00:19:18.358565 containerd[2016]: time="2025-08-13T00:19:18.358422861Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Aug 13 00:19:22.469565 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 00:19:22.476072 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:19:22.809025 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:22.819095 (kubelet)[2755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:19:22.889198 kubelet[2755]: E0813 00:19:22.889127 2755 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:19:22.894019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:19:22.894339 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:19:27.886522 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:27.894169 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:19:27.951680 systemd[1]: Reloading requested from client PID 2769 ('systemctl') (unit session-9.scope)... Aug 13 00:19:27.951713 systemd[1]: Reloading... Aug 13 00:19:28.173695 zram_generator::config[2810]: No configuration found. Aug 13 00:19:28.444789 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:19:28.618765 systemd[1]: Reloading finished in 666 ms. Aug 13 00:19:28.702946 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:19:28.703137 systemd[1]: kubelet.service: Failed with result 'signal'. 
Aug 13 00:19:28.705779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:28.719611 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:19:29.819309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:29.834185 (kubelet)[2871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:19:29.903067 kubelet[2871]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:19:29.903067 kubelet[2871]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:19:29.903067 kubelet[2871]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:19:29.903067 kubelet[2871]: I0813 00:19:29.902534 2871 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:19:31.223164 update_engine[2002]: I20250813 00:19:31.222179 2002 update_attempter.cc:509] Updating boot flags... Aug 13 00:19:31.313693 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 41 scanned by (udev-worker) (2891) Aug 13 00:19:32.176697 kubelet[2871]: I0813 00:19:32.176576 2871 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:19:32.177941 kubelet[2871]: I0813 00:19:32.176631 2871 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:19:32.178200 kubelet[2871]: I0813 00:19:32.178169 2871 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:19:32.218995 kubelet[2871]: E0813 00:19:32.218945 2871 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.18.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.147:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 00:19:32.221045 kubelet[2871]: I0813 00:19:32.220988 2871 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:19:32.234282 kubelet[2871]: E0813 00:19:32.233727 2871 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:19:32.234282 kubelet[2871]: I0813 00:19:32.233791 2871 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:19:32.238713 kubelet[2871]: I0813 00:19:32.238677 2871 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:19:32.239711 kubelet[2871]: I0813 00:19:32.239403 2871 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:19:32.239711 kubelet[2871]: I0813 00:19:32.239446 2871 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-147","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:19:32.239958 kubelet[2871]: I0813 00:19:32.239868 2871 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:19:32.239958 kubelet[2871]: I0813 00:19:32.239887 2871 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:19:32.240262 kubelet[2871]: I0813 00:19:32.240220 2871 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:19:32.246380 kubelet[2871]: I0813 00:19:32.246320 2871 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:19:32.246380 kubelet[2871]: I0813 00:19:32.246369 2871 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:19:32.246554 kubelet[2871]: I0813 00:19:32.246415 2871 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:19:32.248851 kubelet[2871]: I0813 00:19:32.248662 2871 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:19:32.256025 kubelet[2871]: E0813 00:19:32.255961 2871 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.18.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:19:32.256189 kubelet[2871]: E0813 00:19:32.256149 2871 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.18.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-147&limit=500&resourceVersion=0\": dial tcp 172.31.18.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Node" Aug 13 00:19:32.257675 kubelet[2871]: I0813 00:19:32.256837 2871 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:19:32.258110 kubelet[2871]: I0813 00:19:32.258067 2871 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:19:32.258335 kubelet[2871]: W0813 00:19:32.258301 2871 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:19:32.265917 kubelet[2871]: I0813 00:19:32.265864 2871 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:19:32.266029 kubelet[2871]: I0813 00:19:32.265933 2871 server.go:1289] "Started kubelet" Aug 13 00:19:32.272389 kubelet[2871]: I0813 00:19:32.272312 2871 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:19:32.273702 kubelet[2871]: I0813 00:19:32.272710 2871 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:19:32.273702 kubelet[2871]: I0813 00:19:32.273249 2871 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:19:32.274553 kubelet[2871]: I0813 00:19:32.274519 2871 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:19:32.284618 kubelet[2871]: I0813 00:19:32.284582 2871 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:19:32.286394 kubelet[2871]: E0813 00:19:32.284071 2871 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.147:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.147:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-147.185b2b8ed5f50b56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-147,UID:ip-172-31-18-147,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-147,},FirstTimestamp:2025-08-13 00:19:32.26589679 +0000 UTC m=+2.424904309,LastTimestamp:2025-08-13 00:19:32.26589679 +0000 UTC m=+2.424904309,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-147,}" Aug 13 00:19:32.289455 kubelet[2871]: I0813 00:19:32.289400 2871 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:19:32.296985 kubelet[2871]: I0813 00:19:32.296951 2871 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:19:32.297583 kubelet[2871]: E0813 00:19:32.297544 2871 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-147\" not found" Aug 13 00:19:32.298330 kubelet[2871]: I0813 00:19:32.298295 2871 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:19:32.299272 kubelet[2871]: E0813 00:19:32.299195 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-147?timeout=10s\": dial tcp 172.31.18.147:6443: connect: connection refused" interval="200ms" Aug 13 00:19:32.301835 kubelet[2871]: E0813 00:19:32.300932 2871 
kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:19:32.301835 kubelet[2871]: E0813 00:19:32.301149 2871 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:19:32.301835 kubelet[2871]: I0813 00:19:32.301459 2871 factory.go:223] Registration of the systemd container factory successfully Aug 13 00:19:32.301835 kubelet[2871]: I0813 00:19:32.301593 2871 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:19:32.303630 kubelet[2871]: I0813 00:19:32.303579 2871 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:19:32.309696 kubelet[2871]: I0813 00:19:32.309599 2871 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:19:32.328991 kubelet[2871]: I0813 00:19:32.328749 2871 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 00:19:32.331162 kubelet[2871]: I0813 00:19:32.330774 2871 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 00:19:32.331162 kubelet[2871]: I0813 00:19:32.330814 2871 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:19:32.331162 kubelet[2871]: I0813 00:19:32.330854 2871 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 00:19:32.331162 kubelet[2871]: I0813 00:19:32.330869 2871 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:19:32.331162 kubelet[2871]: E0813 00:19:32.330933 2871 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:19:32.338692 kubelet[2871]: E0813 00:19:32.338318 2871 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.18.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:19:32.353908 kubelet[2871]: I0813 00:19:32.353816 2871 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:19:32.353908 kubelet[2871]: I0813 00:19:32.353850 2871 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:19:32.353908 kubelet[2871]: I0813 00:19:32.353882 2871 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:19:32.361262 kubelet[2871]: I0813 00:19:32.361208 2871 policy_none.go:49] "None policy: Start" Aug 13 00:19:32.361262 kubelet[2871]: I0813 00:19:32.361253 2871 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:19:32.361458 kubelet[2871]: I0813 00:19:32.361278 2871 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:19:32.374535 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 00:19:32.395554 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Aug 13 00:19:32.398162 kubelet[2871]: E0813 00:19:32.398104 2871 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-147\" not found" Aug 13 00:19:32.402830 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 00:19:32.417682 kubelet[2871]: E0813 00:19:32.417616 2871 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:19:32.417961 kubelet[2871]: I0813 00:19:32.417924 2871 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:19:32.418030 kubelet[2871]: I0813 00:19:32.417957 2871 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:19:32.419477 kubelet[2871]: I0813 00:19:32.418729 2871 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:19:32.420559 kubelet[2871]: E0813 00:19:32.420390 2871 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 00:19:32.420559 kubelet[2871]: E0813 00:19:32.420467 2871 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-147\" not found" Aug 13 00:19:32.453492 systemd[1]: Created slice kubepods-burstable-pode3bc7c97e6a8c4768f13ac7a317557fa.slice - libcontainer container kubepods-burstable-pode3bc7c97e6a8c4768f13ac7a317557fa.slice. Aug 13 00:19:32.467174 kubelet[2871]: E0813 00:19:32.467112 2871 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-147\" not found" node="ip-172-31-18-147" Aug 13 00:19:32.473850 systemd[1]: Created slice kubepods-burstable-podfb735a7943a13986cb574638e108d48a.slice - libcontainer container kubepods-burstable-podfb735a7943a13986cb574638e108d48a.slice. Aug 13 00:19:32.488303 kubelet[2871]: E0813 00:19:32.488239 2871 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-147\" not found" node="ip-172-31-18-147" Aug 13 00:19:32.494585 systemd[1]: Created slice kubepods-burstable-pod02e7c9a2364abe358d14a5e38a75bea9.slice - libcontainer container kubepods-burstable-pod02e7c9a2364abe358d14a5e38a75bea9.slice. 
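
The eviction-manager errors here are transient: stats are not initialized yet ("invalid capacity 0 on image filesystem"), so the control loop has nothing to evaluate against the HardEvictionThresholds dumped in the Node Config above (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A sketch of how a single threshold of either kind is checked, using those observed values (the function is illustrative, not kubelet code):

from typing import Optional

def below_threshold(available: int, capacity: int,
                    quantity: Optional[int], percentage: Optional[float]) -> bool:
    """True if a signal crosses a hard-eviction threshold expressed
    either as an absolute quantity or as a fraction of capacity."""
    if quantity is not None:
        return available < quantity
    assert percentage is not None
    return available < capacity * percentage

GiB = 1024 ** 3
print(below_threshold(2 * GiB, 4 * GiB, 100 * 1024 ** 2, None))  # memory.available < 100Mi? False
print(below_threshold(3 * GiB, 40 * GiB, None, 0.10))            # nodefs.available < 10%? True
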
Aug 13 00:19:32.498592 kubelet[2871]: E0813 00:19:32.498542 2871 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-147\" not found" node="ip-172-31-18-147" Aug 13 00:19:32.500667 kubelet[2871]: E0813 00:19:32.500594 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-147?timeout=10s\": dial tcp 172.31.18.147:6443: connect: connection refused" interval="400ms" Aug 13 00:19:32.511816 kubelet[2871]: I0813 00:19:32.511760 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb735a7943a13986cb574638e108d48a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"fb735a7943a13986cb574638e108d48a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Aug 13 00:19:32.511936 kubelet[2871]: I0813 00:19:32.511822 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02e7c9a2364abe358d14a5e38a75bea9-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-147\" (UID: \"02e7c9a2364abe358d14a5e38a75bea9\") " pod="kube-system/kube-scheduler-ip-172-31-18-147" Aug 13 00:19:32.511936 kubelet[2871]: I0813 00:19:32.511859 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3bc7c97e6a8c4768f13ac7a317557fa-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-147\" (UID: \"e3bc7c97e6a8c4768f13ac7a317557fa\") " pod="kube-system/kube-apiserver-ip-172-31-18-147" Aug 13 00:19:32.511936 kubelet[2871]: I0813 00:19:32.511923 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3bc7c97e6a8c4768f13ac7a317557fa-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-147\" (UID: \"e3bc7c97e6a8c4768f13ac7a317557fa\") " pod="kube-system/kube-apiserver-ip-172-31-18-147" Aug 13 00:19:32.512089 kubelet[2871]: I0813 00:19:32.511969 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fb735a7943a13986cb574638e108d48a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"fb735a7943a13986cb574638e108d48a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Aug 13 00:19:32.512089 kubelet[2871]: I0813 00:19:32.512008 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb735a7943a13986cb574638e108d48a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"fb735a7943a13986cb574638e108d48a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Aug 13 00:19:32.512089 kubelet[2871]: I0813 00:19:32.512045 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb735a7943a13986cb574638e108d48a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"fb735a7943a13986cb574638e108d48a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Aug 13 00:19:32.512089 kubelet[2871]: I0813 00:19:32.512082 2871 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3bc7c97e6a8c4768f13ac7a317557fa-ca-certs\") pod \"kube-apiserver-ip-172-31-18-147\" (UID: \"e3bc7c97e6a8c4768f13ac7a317557fa\") " pod="kube-system/kube-apiserver-ip-172-31-18-147" Aug 13 00:19:32.512289 kubelet[2871]: I0813 00:19:32.512115 2871 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb735a7943a13986cb574638e108d48a-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"fb735a7943a13986cb574638e108d48a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Aug 13 00:19:32.521000 kubelet[2871]: I0813 00:19:32.520937 2871 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-147" Aug 13 00:19:32.521695 kubelet[2871]: E0813 00:19:32.521616 2871 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.147:6443/api/v1/nodes\": dial tcp 172.31.18.147:6443: connect: connection refused" node="ip-172-31-18-147" Aug 13 00:19:32.724194 kubelet[2871]: I0813 00:19:32.724100 2871 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-147" Aug 13 00:19:32.724715 kubelet[2871]: E0813 00:19:32.724602 2871 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.147:6443/api/v1/nodes\": dial tcp 172.31.18.147:6443: connect: connection refused" node="ip-172-31-18-147" Aug 13 00:19:32.769239 containerd[2016]: time="2025-08-13T00:19:32.769182732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-147,Uid:e3bc7c97e6a8c4768f13ac7a317557fa,Namespace:kube-system,Attempt:0,}" Aug 13 00:19:32.790197 containerd[2016]: time="2025-08-13T00:19:32.790102560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-147,Uid:fb735a7943a13986cb574638e108d48a,Namespace:kube-system,Attempt:0,}" Aug 13 00:19:32.801358 containerd[2016]: time="2025-08-13T00:19:32.801009696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-147,Uid:02e7c9a2364abe358d14a5e38a75bea9,Namespace:kube-system,Attempt:0,}" Aug 13 00:19:32.902070 kubelet[2871]: E0813 00:19:32.902019 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-147?timeout=10s\": dial tcp 172.31.18.147:6443: connect: connection refused" interval="800ms" Aug 13 00:19:33.127514 kubelet[2871]: I0813 00:19:33.127372 2871 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-147" Aug 13 00:19:33.127930 kubelet[2871]: E0813 00:19:33.127867 2871 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.147:6443/api/v1/nodes\": dial tcp 172.31.18.147:6443: connect: connection refused" node="ip-172-31-18-147" Aug 13 00:19:33.166873 kubelet[2871]: E0813 00:19:33.166814 2871 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 00:19:33.272528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2853646184.mount: 
Deactivated successfully. Aug 13 00:19:33.289711 containerd[2016]: time="2025-08-13T00:19:33.288948527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:19:33.291280 containerd[2016]: time="2025-08-13T00:19:33.291209039Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:19:33.293182 containerd[2016]: time="2025-08-13T00:19:33.293079059Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Aug 13 00:19:33.295068 containerd[2016]: time="2025-08-13T00:19:33.295016747Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:19:33.297174 containerd[2016]: time="2025-08-13T00:19:33.297121871Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:19:33.300158 containerd[2016]: time="2025-08-13T00:19:33.300043703Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:19:33.301413 containerd[2016]: time="2025-08-13T00:19:33.301310015Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:19:33.306113 containerd[2016]: time="2025-08-13T00:19:33.306024263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:19:33.310684 containerd[2016]: time="2025-08-13T00:19:33.309980951Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 519.766167ms" Aug 13 00:19:33.314216 containerd[2016]: time="2025-08-13T00:19:33.314115323Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 544.780419ms" Aug 13 00:19:33.321680 containerd[2016]: time="2025-08-13T00:19:33.321509447Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 520.397895ms" Aug 13 00:19:33.349553 kubelet[2871]: E0813 00:19:33.349380 2871 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.18.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-147&limit=500&resourceVersion=0\": dial tcp 172.31.18.147:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 00:19:33.406290 kubelet[2871]: E0813 00:19:33.404989 2871 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.18.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 00:19:33.511094 containerd[2016]: time="2025-08-13T00:19:33.510608580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:33.511094 containerd[2016]: time="2025-08-13T00:19:33.510779532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:33.511094 containerd[2016]: time="2025-08-13T00:19:33.510806892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:33.511094 containerd[2016]: time="2025-08-13T00:19:33.510957996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:33.518246 containerd[2016]: time="2025-08-13T00:19:33.518056464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:33.518497 containerd[2016]: time="2025-08-13T00:19:33.518188860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:33.518710 containerd[2016]: time="2025-08-13T00:19:33.518458740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:33.519311 containerd[2016]: time="2025-08-13T00:19:33.519199824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:33.525569 containerd[2016]: time="2025-08-13T00:19:33.525268092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:33.525569 containerd[2016]: time="2025-08-13T00:19:33.525393180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:33.525569 containerd[2016]: time="2025-08-13T00:19:33.525431688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:33.527261 containerd[2016]: time="2025-08-13T00:19:33.526966080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:33.564997 systemd[1]: Started cri-containerd-4e60e27934ce4e466e5f56646a5a9388c59a9ceb79ca5ca41c861d8b26cb08bb.scope - libcontainer container 4e60e27934ce4e466e5f56646a5a9388c59a9ceb79ca5ca41c861d8b26cb08bb. Aug 13 00:19:33.579317 systemd[1]: Started cri-containerd-77e989ebe5db930fb1e048ee9220aae585838b435901cb1d878c949a049c6db5.scope - libcontainer container 77e989ebe5db930fb1e048ee9220aae585838b435901cb1d878c949a049c6db5. 
Aug 13 00:19:33.593390 systemd[1]: Started cri-containerd-57c306d57dabe1d1c85b08e722acd9d6d8973ae13080fc071e81cd3465ef04fd.scope - libcontainer container 57c306d57dabe1d1c85b08e722acd9d6d8973ae13080fc071e81cd3465ef04fd. Aug 13 00:19:33.695080 kubelet[2871]: E0813 00:19:33.694878 2871 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.18.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.147:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 00:19:33.698567 containerd[2016]: time="2025-08-13T00:19:33.698482249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-147,Uid:fb735a7943a13986cb574638e108d48a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e60e27934ce4e466e5f56646a5a9388c59a9ceb79ca5ca41c861d8b26cb08bb\"" Aug 13 00:19:33.703121 kubelet[2871]: E0813 00:19:33.702905 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-147?timeout=10s\": dial tcp 172.31.18.147:6443: connect: connection refused" interval="1.6s" Aug 13 00:19:33.711709 containerd[2016]: time="2025-08-13T00:19:33.711386257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-147,Uid:02e7c9a2364abe358d14a5e38a75bea9,Namespace:kube-system,Attempt:0,} returns sandbox id \"77e989ebe5db930fb1e048ee9220aae585838b435901cb1d878c949a049c6db5\"" Aug 13 00:19:33.715085 containerd[2016]: time="2025-08-13T00:19:33.714912733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-147,Uid:e3bc7c97e6a8c4768f13ac7a317557fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"57c306d57dabe1d1c85b08e722acd9d6d8973ae13080fc071e81cd3465ef04fd\"" Aug 13 00:19:33.722328 containerd[2016]: time="2025-08-13T00:19:33.721893193Z" level=info msg="CreateContainer within sandbox \"4e60e27934ce4e466e5f56646a5a9388c59a9ceb79ca5ca41c861d8b26cb08bb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:19:33.729252 containerd[2016]: time="2025-08-13T00:19:33.729184837Z" level=info msg="CreateContainer within sandbox \"77e989ebe5db930fb1e048ee9220aae585838b435901cb1d878c949a049c6db5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:19:33.734361 containerd[2016]: time="2025-08-13T00:19:33.734289121Z" level=info msg="CreateContainer within sandbox \"57c306d57dabe1d1c85b08e722acd9d6d8973ae13080fc071e81cd3465ef04fd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:19:33.783863 containerd[2016]: time="2025-08-13T00:19:33.783391849Z" level=info msg="CreateContainer within sandbox \"77e989ebe5db930fb1e048ee9220aae585838b435901cb1d878c949a049c6db5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"571d5fd75098040ce032bd2c474c4d26415381d881f0b9773d678ce8a79a6915\"" Aug 13 00:19:33.786246 containerd[2016]: time="2025-08-13T00:19:33.786142597Z" level=info msg="CreateContainer within sandbox \"4e60e27934ce4e466e5f56646a5a9388c59a9ceb79ca5ca41c861d8b26cb08bb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"27475d5f3af118bd12df2523de05c231105f6e6dda697c7281f6a13734af73d2\"" Aug 13 00:19:33.786864 containerd[2016]: time="2025-08-13T00:19:33.786801853Z" level=info msg="StartContainer for 
\"571d5fd75098040ce032bd2c474c4d26415381d881f0b9773d678ce8a79a6915\"" Aug 13 00:19:33.791696 containerd[2016]: time="2025-08-13T00:19:33.790227289Z" level=info msg="CreateContainer within sandbox \"57c306d57dabe1d1c85b08e722acd9d6d8973ae13080fc071e81cd3465ef04fd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6307ff7268de60a38a33ed5b0f972a25bbb06ad55935d385113b3b7a2f1fa622\"" Aug 13 00:19:33.791696 containerd[2016]: time="2025-08-13T00:19:33.790527613Z" level=info msg="StartContainer for \"27475d5f3af118bd12df2523de05c231105f6e6dda697c7281f6a13734af73d2\"" Aug 13 00:19:33.805082 containerd[2016]: time="2025-08-13T00:19:33.805022233Z" level=info msg="StartContainer for \"6307ff7268de60a38a33ed5b0f972a25bbb06ad55935d385113b3b7a2f1fa622\"" Aug 13 00:19:33.849006 systemd[1]: Started cri-containerd-571d5fd75098040ce032bd2c474c4d26415381d881f0b9773d678ce8a79a6915.scope - libcontainer container 571d5fd75098040ce032bd2c474c4d26415381d881f0b9773d678ce8a79a6915. Aug 13 00:19:33.878031 systemd[1]: Started cri-containerd-27475d5f3af118bd12df2523de05c231105f6e6dda697c7281f6a13734af73d2.scope - libcontainer container 27475d5f3af118bd12df2523de05c231105f6e6dda697c7281f6a13734af73d2. Aug 13 00:19:33.907200 systemd[1]: Started cri-containerd-6307ff7268de60a38a33ed5b0f972a25bbb06ad55935d385113b3b7a2f1fa622.scope - libcontainer container 6307ff7268de60a38a33ed5b0f972a25bbb06ad55935d385113b3b7a2f1fa622. Aug 13 00:19:33.933319 kubelet[2871]: I0813 00:19:33.932736 2871 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-147" Aug 13 00:19:33.933476 kubelet[2871]: E0813 00:19:33.933404 2871 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.147:6443/api/v1/nodes\": dial tcp 172.31.18.147:6443: connect: connection refused" node="ip-172-31-18-147" Aug 13 00:19:33.979897 containerd[2016]: time="2025-08-13T00:19:33.979521734Z" level=info msg="StartContainer for \"571d5fd75098040ce032bd2c474c4d26415381d881f0b9773d678ce8a79a6915\" returns successfully" Aug 13 00:19:34.023814 containerd[2016]: time="2025-08-13T00:19:34.020873974Z" level=info msg="StartContainer for \"27475d5f3af118bd12df2523de05c231105f6e6dda697c7281f6a13734af73d2\" returns successfully" Aug 13 00:19:34.047860 containerd[2016]: time="2025-08-13T00:19:34.047783795Z" level=info msg="StartContainer for \"6307ff7268de60a38a33ed5b0f972a25bbb06ad55935d385113b3b7a2f1fa622\" returns successfully" Aug 13 00:19:34.389466 kubelet[2871]: E0813 00:19:34.389346 2871 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-147\" not found" node="ip-172-31-18-147" Aug 13 00:19:34.399872 kubelet[2871]: E0813 00:19:34.397942 2871 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-147\" not found" node="ip-172-31-18-147" Aug 13 00:19:34.399872 kubelet[2871]: E0813 00:19:34.399088 2871 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-147\" not found" node="ip-172-31-18-147" Aug 13 00:19:35.404762 kubelet[2871]: E0813 00:19:35.402599 2871 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-147\" not found" node="ip-172-31-18-147" Aug 13 00:19:35.405542 kubelet[2871]: E0813 00:19:35.402017 2871 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"ip-172-31-18-147\" not found" node="ip-172-31-18-147" Aug 13 00:19:35.536187 kubelet[2871]: I0813 00:19:35.536154 2871 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-147" Aug 13 00:19:35.954606 kubelet[2871]: E0813 00:19:35.954311 2871 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-147\" not found" node="ip-172-31-18-147" Aug 13 00:19:36.404832 kubelet[2871]: E0813 00:19:36.404249 2871 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-147\" not found" node="ip-172-31-18-147" Aug 13 00:19:36.405967 kubelet[2871]: E0813 00:19:36.405702 2871 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-147\" not found" node="ip-172-31-18-147" Aug 13 00:19:38.259659 kubelet[2871]: I0813 00:19:38.259121 2871 apiserver.go:52] "Watching apiserver" Aug 13 00:19:38.399294 kubelet[2871]: I0813 00:19:38.399225 2871 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:19:38.450029 kubelet[2871]: E0813 00:19:38.449944 2871 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-147\" not found" node="ip-172-31-18-147" Aug 13 00:19:38.497283 kubelet[2871]: I0813 00:19:38.496942 2871 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-147" Aug 13 00:19:38.498250 kubelet[2871]: I0813 00:19:38.498213 2871 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-147" Aug 13 00:19:38.673588 kubelet[2871]: E0813 00:19:38.673436 2871 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-147\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-147" Aug 13 00:19:38.673588 kubelet[2871]: I0813 00:19:38.673485 2871 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-147" Aug 13 00:19:38.685459 kubelet[2871]: E0813 00:19:38.685329 2871 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-147\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-147" Aug 13 00:19:38.685459 kubelet[2871]: I0813 00:19:38.685397 2871 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-147" Aug 13 00:19:38.703514 kubelet[2871]: E0813 00:19:38.703442 2871 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-147\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-147" Aug 13 00:19:42.523352 systemd[1]: Reloading requested from client PID 3253 ('systemctl') (unit session-9.scope)... Aug 13 00:19:42.523384 systemd[1]: Reloading... Aug 13 00:19:42.721722 zram_generator::config[3297]: No configuration found. Aug 13 00:19:42.957857 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:19:43.163971 systemd[1]: Reloading finished in 639 ms. Aug 13 00:19:43.250137 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 13 00:19:43.270797 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:19:43.272705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:43.272916 systemd[1]: kubelet.service: Consumed 3.092s CPU time, 127.7M memory peak, 0B memory swap peak. Aug 13 00:19:43.286380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:19:43.610774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:19:43.633818 (kubelet)[3353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:19:43.735273 kubelet[3353]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:19:43.735273 kubelet[3353]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 00:19:43.735273 kubelet[3353]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:19:43.735273 kubelet[3353]: I0813 00:19:43.735442 3353 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:19:43.749678 kubelet[3353]: I0813 00:19:43.749413 3353 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 00:19:43.749678 kubelet[3353]: I0813 00:19:43.749457 3353 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:19:43.750184 kubelet[3353]: I0813 00:19:43.750153 3353 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 00:19:43.753504 kubelet[3353]: I0813 00:19:43.753221 3353 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 00:19:43.765594 kubelet[3353]: I0813 00:19:43.765542 3353 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:19:43.776099 kubelet[3353]: E0813 00:19:43.776015 3353 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:19:43.776775 kubelet[3353]: I0813 00:19:43.776366 3353 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:19:43.781972 kubelet[3353]: I0813 00:19:43.781910 3353 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:19:43.782947 kubelet[3353]: I0813 00:19:43.782900 3353 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:19:43.783357 kubelet[3353]: I0813 00:19:43.783081 3353 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-147","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 00:19:43.783579 kubelet[3353]: I0813 00:19:43.783556 3353 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:19:43.784670 kubelet[3353]: I0813 00:19:43.783685 3353 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 00:19:43.784670 kubelet[3353]: I0813 00:19:43.783770 3353 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:19:43.784670 kubelet[3353]: I0813 00:19:43.784083 3353 kubelet.go:480] "Attempting to sync node with API server" Aug 13 00:19:43.784670 kubelet[3353]: I0813 00:19:43.784111 3353 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:19:43.784670 kubelet[3353]: I0813 00:19:43.784148 3353 kubelet.go:386] "Adding apiserver pod source" Aug 13 00:19:43.784670 kubelet[3353]: I0813 00:19:43.784188 3353 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:19:43.794626 kubelet[3353]: I0813 00:19:43.792817 3353 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:19:43.797853 kubelet[3353]: I0813 00:19:43.797221 3353 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 00:19:43.819961 kubelet[3353]: I0813 00:19:43.819926 3353 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 00:19:43.820154 kubelet[3353]: I0813 00:19:43.820136 3353 server.go:1289] "Started kubelet" Aug 13 00:19:43.825963 kubelet[3353]: I0813 00:19:43.825928 3353 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:19:43.829831 kubelet[3353]: I0813 
00:19:43.829733 3353 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:19:43.831893 kubelet[3353]: I0813 00:19:43.831817 3353 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:19:43.849902 kubelet[3353]: I0813 00:19:43.848423 3353 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:19:43.850351 kubelet[3353]: I0813 00:19:43.842729 3353 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:19:43.852810 kubelet[3353]: I0813 00:19:43.832299 3353 server.go:317] "Adding debug handlers to kubelet server" Aug 13 00:19:43.858629 kubelet[3353]: I0813 00:19:43.846390 3353 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 00:19:43.858629 kubelet[3353]: E0813 00:19:43.846879 3353 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-147\" not found" Aug 13 00:19:43.858629 kubelet[3353]: I0813 00:19:43.846370 3353 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 00:19:43.862425 kubelet[3353]: I0813 00:19:43.862309 3353 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:19:43.869039 kubelet[3353]: I0813 00:19:43.869005 3353 factory.go:223] Registration of the systemd container factory successfully Aug 13 00:19:43.869357 kubelet[3353]: I0813 00:19:43.869326 3353 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:19:43.892814 kubelet[3353]: I0813 00:19:43.892767 3353 factory.go:223] Registration of the containerd container factory successfully Aug 13 00:19:43.902028 kubelet[3353]: E0813 00:19:43.901966 3353 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:19:43.928735 kubelet[3353]: I0813 00:19:43.928682 3353 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 00:19:43.980417 kubelet[3353]: I0813 00:19:43.980376 3353 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 00:19:43.980617 kubelet[3353]: I0813 00:19:43.980597 3353 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 00:19:43.980762 kubelet[3353]: I0813 00:19:43.980742 3353 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
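
cAdvisor registers one container "factory" per runtime it can reach, which is why the crio registration fails (nothing listens at /var/run/crio/crio.sock on this host) while the systemd and containerd factories register successfully. A sketch of the underlying probe, assuming only the socket paths involved (/run/containerd/containerd.sock is containerd's conventional socket, not shown in this log):

import os
import stat

def socket_present(path: str) -> bool:
    """True if `path` exists and is a unix socket."""
    try:
        return stat.S_ISSOCK(os.stat(path).st_mode)
    except FileNotFoundError:
        return False

for sock in ("/var/run/crio/crio.sock", "/run/containerd/containerd.sock"):
    print(sock, "->", "register" if socket_present(sock) else "skip")
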
Aug 13 00:19:43.981852 kubelet[3353]: I0813 00:19:43.981826 3353 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 00:19:43.982114 kubelet[3353]: E0813 00:19:43.982026 3353 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:19:44.033587 kubelet[3353]: I0813 00:19:44.033556 3353 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 00:19:44.035716 kubelet[3353]: I0813 00:19:44.033793 3353 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 00:19:44.035716 kubelet[3353]: I0813 00:19:44.033833 3353 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:19:44.035716 kubelet[3353]: I0813 00:19:44.035682 3353 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:19:44.035716 kubelet[3353]: I0813 00:19:44.035709 3353 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:19:44.035993 kubelet[3353]: I0813 00:19:44.035741 3353 policy_none.go:49] "None policy: Start" Aug 13 00:19:44.035993 kubelet[3353]: I0813 00:19:44.035760 3353 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 00:19:44.035993 kubelet[3353]: I0813 00:19:44.035782 3353 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:19:44.035993 kubelet[3353]: I0813 00:19:44.035955 3353 state_mem.go:75] "Updated machine memory state" Aug 13 00:19:44.044384 kubelet[3353]: E0813 00:19:44.044321 3353 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 00:19:44.046524 kubelet[3353]: I0813 00:19:44.044603 3353 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:19:44.046524 kubelet[3353]: I0813 00:19:44.045926 3353 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:19:44.046524 kubelet[3353]: I0813 00:19:44.046401 3353 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:19:44.055010 kubelet[3353]: E0813 00:19:44.053492 3353 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 00:19:44.083216 kubelet[3353]: I0813 00:19:44.083144 3353 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-147" Aug 13 00:19:44.087311 kubelet[3353]: I0813 00:19:44.084897 3353 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-147" Aug 13 00:19:44.087311 kubelet[3353]: I0813 00:19:44.086280 3353 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-147" Aug 13 00:19:44.165233 kubelet[3353]: I0813 00:19:44.164919 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fb735a7943a13986cb574638e108d48a-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"fb735a7943a13986cb574638e108d48a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Aug 13 00:19:44.165233 kubelet[3353]: I0813 00:19:44.164996 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fb735a7943a13986cb574638e108d48a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"fb735a7943a13986cb574638e108d48a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Aug 13 00:19:44.165233 kubelet[3353]: I0813 00:19:44.165036 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb735a7943a13986cb574638e108d48a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"fb735a7943a13986cb574638e108d48a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Aug 13 00:19:44.165631 kubelet[3353]: I0813 00:19:44.165579 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02e7c9a2364abe358d14a5e38a75bea9-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-147\" (UID: \"02e7c9a2364abe358d14a5e38a75bea9\") " pod="kube-system/kube-scheduler-ip-172-31-18-147" Aug 13 00:19:44.165631 kubelet[3353]: I0813 00:19:44.165680 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e3bc7c97e6a8c4768f13ac7a317557fa-ca-certs\") pod \"kube-apiserver-ip-172-31-18-147\" (UID: \"e3bc7c97e6a8c4768f13ac7a317557fa\") " pod="kube-system/kube-apiserver-ip-172-31-18-147" Aug 13 00:19:44.166176 kubelet[3353]: I0813 00:19:44.165840 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e3bc7c97e6a8c4768f13ac7a317557fa-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-147\" (UID: \"e3bc7c97e6a8c4768f13ac7a317557fa\") " pod="kube-system/kube-apiserver-ip-172-31-18-147" Aug 13 00:19:44.166176 kubelet[3353]: I0813 00:19:44.165887 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fb735a7943a13986cb574638e108d48a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"fb735a7943a13986cb574638e108d48a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Aug 13 00:19:44.166176 kubelet[3353]: I0813 00:19:44.165953 3353 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fb735a7943a13986cb574638e108d48a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-147\" (UID: \"fb735a7943a13986cb574638e108d48a\") " pod="kube-system/kube-controller-manager-ip-172-31-18-147" Aug 13 00:19:44.166656 kubelet[3353]: I0813 00:19:44.166574 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e3bc7c97e6a8c4768f13ac7a317557fa-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-147\" (UID: \"e3bc7c97e6a8c4768f13ac7a317557fa\") " pod="kube-system/kube-apiserver-ip-172-31-18-147" Aug 13 00:19:44.171766 kubelet[3353]: I0813 00:19:44.170827 3353 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-147" Aug 13 00:19:44.184006 kubelet[3353]: I0813 00:19:44.183495 3353 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-18-147" Aug 13 00:19:44.184006 kubelet[3353]: I0813 00:19:44.183610 3353 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-147" Aug 13 00:19:44.787383 kubelet[3353]: I0813 00:19:44.787013 3353 apiserver.go:52] "Watching apiserver" Aug 13 00:19:44.859615 kubelet[3353]: I0813 00:19:44.859558 3353 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 00:19:45.073287 kubelet[3353]: I0813 00:19:45.072825 3353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-147" podStartSLOduration=1.072772533 podStartE2EDuration="1.072772533s" podCreationTimestamp="2025-08-13 00:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:19:45.027936465 +0000 UTC m=+1.383126284" watchObservedRunningTime="2025-08-13 00:19:45.072772533 +0000 UTC m=+1.427962328" Aug 13 00:19:45.137403 kubelet[3353]: I0813 00:19:45.136491 3353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-147" podStartSLOduration=1.136468678 podStartE2EDuration="1.136468678s" podCreationTimestamp="2025-08-13 00:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:19:45.076137141 +0000 UTC m=+1.431326936" watchObservedRunningTime="2025-08-13 00:19:45.136468678 +0000 UTC m=+1.491658473" Aug 13 00:19:45.189752 kubelet[3353]: I0813 00:19:45.189597 3353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-147" podStartSLOduration=1.189577594 podStartE2EDuration="1.189577594s" podCreationTimestamp="2025-08-13 00:19:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:19:45.140494342 +0000 UTC m=+1.495684137" watchObservedRunningTime="2025-08-13 00:19:45.189577594 +0000 UTC m=+1.544767401" Aug 13 00:19:45.796357 kubelet[3353]: I0813 00:19:45.796293 3353 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:19:45.797809 containerd[2016]: time="2025-08-13T00:19:45.797596981Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 00:19:45.798335 kubelet[3353]: I0813 00:19:45.797911 3353 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:19:46.861095 systemd[1]: Created slice kubepods-besteffort-pod791b5a8d_c3bc_4b19_b6c6_5570be7064fd.slice - libcontainer container kubepods-besteffort-pod791b5a8d_c3bc_4b19_b6c6_5570be7064fd.slice. Aug 13 00:19:46.886129 kubelet[3353]: I0813 00:19:46.885924 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/791b5a8d-c3bc-4b19-b6c6-5570be7064fd-xtables-lock\") pod \"kube-proxy-r5nb9\" (UID: \"791b5a8d-c3bc-4b19-b6c6-5570be7064fd\") " pod="kube-system/kube-proxy-r5nb9" Aug 13 00:19:46.889347 kubelet[3353]: I0813 00:19:46.887395 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/791b5a8d-c3bc-4b19-b6c6-5570be7064fd-kube-proxy\") pod \"kube-proxy-r5nb9\" (UID: \"791b5a8d-c3bc-4b19-b6c6-5570be7064fd\") " pod="kube-system/kube-proxy-r5nb9" Aug 13 00:19:46.889347 kubelet[3353]: I0813 00:19:46.888805 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/791b5a8d-c3bc-4b19-b6c6-5570be7064fd-lib-modules\") pod \"kube-proxy-r5nb9\" (UID: \"791b5a8d-c3bc-4b19-b6c6-5570be7064fd\") " pod="kube-system/kube-proxy-r5nb9" Aug 13 00:19:46.889686 kubelet[3353]: I0813 00:19:46.888852 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kggm\" (UniqueName: \"kubernetes.io/projected/791b5a8d-c3bc-4b19-b6c6-5570be7064fd-kube-api-access-7kggm\") pod \"kube-proxy-r5nb9\" (UID: \"791b5a8d-c3bc-4b19-b6c6-5570be7064fd\") " pod="kube-system/kube-proxy-r5nb9" Aug 13 00:19:47.112517 systemd[1]: Created slice kubepods-besteffort-podc723ebbb_e05b_4f11_8f31_60a3c17ea0e4.slice - libcontainer container kubepods-besteffort-podc723ebbb_e05b_4f11_8f31_60a3c17ea0e4.slice. Aug 13 00:19:47.175825 containerd[2016]: time="2025-08-13T00:19:47.175601652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r5nb9,Uid:791b5a8d-c3bc-4b19-b6c6-5570be7064fd,Namespace:kube-system,Attempt:0,}" Aug 13 00:19:47.191716 kubelet[3353]: I0813 00:19:47.191614 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w56s2\" (UniqueName: \"kubernetes.io/projected/c723ebbb-e05b-4f11-8f31-60a3c17ea0e4-kube-api-access-w56s2\") pod \"tigera-operator-747864d56d-7rvck\" (UID: \"c723ebbb-e05b-4f11-8f31-60a3c17ea0e4\") " pod="tigera-operator/tigera-operator-747864d56d-7rvck" Aug 13 00:19:47.191860 kubelet[3353]: I0813 00:19:47.191722 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c723ebbb-e05b-4f11-8f31-60a3c17ea0e4-var-lib-calico\") pod \"tigera-operator-747864d56d-7rvck\" (UID: \"c723ebbb-e05b-4f11-8f31-60a3c17ea0e4\") " pod="tigera-operator/tigera-operator-747864d56d-7rvck" Aug 13 00:19:47.218790 containerd[2016]: time="2025-08-13T00:19:47.218536584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:47.218790 containerd[2016]: time="2025-08-13T00:19:47.218712600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:47.219377 containerd[2016]: time="2025-08-13T00:19:47.218771064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:47.219377 containerd[2016]: time="2025-08-13T00:19:47.218949132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:47.255023 systemd[1]: Started cri-containerd-a8ef31fa7f55c7ed16d31e36f56b012f4e44deba85d1b5b2467849fd973960a4.scope - libcontainer container a8ef31fa7f55c7ed16d31e36f56b012f4e44deba85d1b5b2467849fd973960a4. Aug 13 00:19:47.302100 containerd[2016]: time="2025-08-13T00:19:47.301951176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r5nb9,Uid:791b5a8d-c3bc-4b19-b6c6-5570be7064fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8ef31fa7f55c7ed16d31e36f56b012f4e44deba85d1b5b2467849fd973960a4\"" Aug 13 00:19:47.322711 containerd[2016]: time="2025-08-13T00:19:47.322042116Z" level=info msg="CreateContainer within sandbox \"a8ef31fa7f55c7ed16d31e36f56b012f4e44deba85d1b5b2467849fd973960a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:19:47.351918 containerd[2016]: time="2025-08-13T00:19:47.351839725Z" level=info msg="CreateContainer within sandbox \"a8ef31fa7f55c7ed16d31e36f56b012f4e44deba85d1b5b2467849fd973960a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1620cc442b5d9137e6439ecb6b08dffab92db2f0c4a8cb211d963038475c77f2\"" Aug 13 00:19:47.353041 containerd[2016]: time="2025-08-13T00:19:47.352975237Z" level=info msg="StartContainer for \"1620cc442b5d9137e6439ecb6b08dffab92db2f0c4a8cb211d963038475c77f2\"" Aug 13 00:19:47.414165 systemd[1]: Started cri-containerd-1620cc442b5d9137e6439ecb6b08dffab92db2f0c4a8cb211d963038475c77f2.scope - libcontainer container 1620cc442b5d9137e6439ecb6b08dffab92db2f0c4a8cb211d963038475c77f2. Aug 13 00:19:47.421520 containerd[2016]: time="2025-08-13T00:19:47.421187089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-7rvck,Uid:c723ebbb-e05b-4f11-8f31-60a3c17ea0e4,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:19:47.481936 containerd[2016]: time="2025-08-13T00:19:47.480978721Z" level=info msg="StartContainer for \"1620cc442b5d9137e6439ecb6b08dffab92db2f0c4a8cb211d963038475c77f2\" returns successfully" Aug 13 00:19:47.494675 containerd[2016]: time="2025-08-13T00:19:47.493714957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:19:47.494675 containerd[2016]: time="2025-08-13T00:19:47.493823137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:19:47.494675 containerd[2016]: time="2025-08-13T00:19:47.493860337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:47.495974 containerd[2016]: time="2025-08-13T00:19:47.495749137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:19:47.532732 systemd[1]: Started cri-containerd-0b13b5a84a29606db9932909b8d40e708dd0e3a9d379bfb06dd341e8f2bd2134.scope - libcontainer container 0b13b5a84a29606db9932909b8d40e708dd0e3a9d379bfb06dd341e8f2bd2134. 
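
Two naming conventions are visible in the systemd records above: kubelet creates one .slice per pod, built from the QoS class and the pod UID with dashes mapped to underscores, and containerd starts each container in a transient cri-containerd-<container-id>.scope unit. A small sketch reproducing the names, inferred from the records themselves rather than from any published API:

package main

import (
	"fmt"
	"strings"
)

// podSlice rebuilds the pod cgroup slice name seen in the
// "Created slice" record for the kube-proxy pod.
func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

// containerScope rebuilds the transient unit name from the
// "Started cri-containerd-..." record.
func containerScope(containerID string) string {
	return "cri-containerd-" + containerID + ".scope"
}

func main() {
	fmt.Println(podSlice("besteffort", "791b5a8d-c3bc-4b19-b6c6-5570be7064fd"))
	fmt.Println(containerScope("a8ef31fa7f55c7ed16d31e36f56b012f4e44deba85d1b5b2467849fd973960a4"))
}
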
Aug 13 00:19:47.615319 containerd[2016]: time="2025-08-13T00:19:47.614882198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-7rvck,Uid:c723ebbb-e05b-4f11-8f31-60a3c17ea0e4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0b13b5a84a29606db9932909b8d40e708dd0e3a9d379bfb06dd341e8f2bd2134\"" Aug 13 00:19:47.621403 containerd[2016]: time="2025-08-13T00:19:47.619202318Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:19:48.886572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1783379832.mount: Deactivated successfully. Aug 13 00:19:51.117707 containerd[2016]: time="2025-08-13T00:19:51.117365787Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:51.120850 containerd[2016]: time="2025-08-13T00:19:51.120752307Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 13 00:19:51.126680 containerd[2016]: time="2025-08-13T00:19:51.125729019Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:51.131977 containerd[2016]: time="2025-08-13T00:19:51.131918823Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:19:51.133613 containerd[2016]: time="2025-08-13T00:19:51.133564143Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 3.514269545s" Aug 13 00:19:51.133870 containerd[2016]: time="2025-08-13T00:19:51.133836471Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 13 00:19:51.144049 containerd[2016]: time="2025-08-13T00:19:51.143996919Z" level=info msg="CreateContainer within sandbox \"0b13b5a84a29606db9932909b8d40e708dd0e3a9d379bfb06dd341e8f2bd2134\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:19:51.168471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2270302307.mount: Deactivated successfully. Aug 13 00:19:51.175478 containerd[2016]: time="2025-08-13T00:19:51.175399444Z" level=info msg="CreateContainer within sandbox \"0b13b5a84a29606db9932909b8d40e708dd0e3a9d379bfb06dd341e8f2bd2134\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"61866350d4565583b758baf6816e4c5d20b1712f9588187456521cc1c1a0f7d5\"" Aug 13 00:19:51.178054 containerd[2016]: time="2025-08-13T00:19:51.177795700Z" level=info msg="StartContainer for \"61866350d4565583b758baf6816e4c5d20b1712f9588187456521cc1c1a0f7d5\"" Aug 13 00:19:51.227320 systemd[1]: run-containerd-runc-k8s.io-61866350d4565583b758baf6816e4c5d20b1712f9588187456521cc1c1a0f7d5-runc.4BjteS.mount: Deactivated successfully. Aug 13 00:19:51.238972 systemd[1]: Started cri-containerd-61866350d4565583b758baf6816e4c5d20b1712f9588187456521cc1c1a0f7d5.scope - libcontainer container 61866350d4565583b758baf6816e4c5d20b1712f9588187456521cc1c1a0f7d5. 
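
A quick sanity check on the pull above: containerd reports 22,150,610 bytes read for the tigera operator image in 3.514269545 s, roughly 6 MiB/s; the slightly smaller "size" field (22,146,605) is presumably the content size recorded in the image metadata rather than the bytes transferred.

package main

import "fmt"

func main() {
	const bytesRead = 22150610  // "bytes read" from the record above
	const seconds = 3.514269545 // reported duration of the pull
	fmt.Printf("%.2f MiB/s\n", bytesRead/seconds/(1<<20)) // ≈ 6.01 MiB/s
}
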
Aug 13 00:19:51.284894 containerd[2016]: time="2025-08-13T00:19:51.284749276Z" level=info msg="StartContainer for \"61866350d4565583b758baf6816e4c5d20b1712f9588187456521cc1c1a0f7d5\" returns successfully" Aug 13 00:19:51.516813 kubelet[3353]: I0813 00:19:51.516695 3353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r5nb9" podStartSLOduration=5.516671513 podStartE2EDuration="5.516671513s" podCreationTimestamp="2025-08-13 00:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:19:48.055839144 +0000 UTC m=+4.411028939" watchObservedRunningTime="2025-08-13 00:19:51.516671513 +0000 UTC m=+7.871861320" Aug 13 00:19:59.918031 sudo[2369]: pam_unix(sudo:session): session closed for user root Aug 13 00:19:59.944012 sshd[2366]: pam_unix(sshd:session): session closed for user core Aug 13 00:19:59.954914 systemd[1]: sshd@8-172.31.18.147:22-139.178.89.65:41510.service: Deactivated successfully. Aug 13 00:19:59.971857 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:19:59.973754 systemd[1]: session-9.scope: Consumed 13.193s CPU time, 150.2M memory peak, 0B memory swap peak. Aug 13 00:19:59.974955 systemd-logind[2000]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:19:59.978080 systemd-logind[2000]: Removed session 9. Aug 13 00:20:11.998187 kubelet[3353]: I0813 00:20:11.998030 3353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-7rvck" podStartSLOduration=21.480553282 podStartE2EDuration="24.998005815s" podCreationTimestamp="2025-08-13 00:19:47 +0000 UTC" firstStartedPulling="2025-08-13 00:19:47.618122402 +0000 UTC m=+3.973312197" lastFinishedPulling="2025-08-13 00:19:51.135574935 +0000 UTC m=+7.490764730" observedRunningTime="2025-08-13 00:19:52.108809656 +0000 UTC m=+8.463999451" watchObservedRunningTime="2025-08-13 00:20:11.998005815 +0000 UTC m=+28.353195670" Aug 13 00:20:12.023576 systemd[1]: Created slice kubepods-besteffort-pod7ceca7fb_7209_40f8_b58f_ae2e5d054906.slice - libcontainer container kubepods-besteffort-pod7ceca7fb_7209_40f8_b58f_ae2e5d054906.slice. 
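
In the tigera-operator record above, podStartSLOduration (21.480553282s) is for once shorter than podStartE2EDuration (24.998005815s); the gap is exactly the image-pull window bounded by firstStartedPulling and lastFinishedPulling, consistent with the SLO metric excluding pull time. Checking with the record's own values:

package main

import (
	"fmt"
	"time"
)

func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	first, _ := time.Parse(layout, "2025-08-13 00:19:47.618122402 +0000 UTC")
	last, _ := time.Parse(layout, "2025-08-13 00:19:51.135574935 +0000 UTC")
	e2e := 24998005815 * time.Nanosecond // podStartE2EDuration=24.998005815s
	fmt.Println(e2e - last.Sub(first))   // 21.480553282s = podStartSLOduration
}
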
Aug 13 00:20:12.059447 kubelet[3353]: I0813 00:20:12.059238 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ceca7fb-7209-40f8-b58f-ae2e5d054906-tigera-ca-bundle\") pod \"calico-typha-85dd7cc86d-d4xcr\" (UID: \"7ceca7fb-7209-40f8-b58f-ae2e5d054906\") " pod="calico-system/calico-typha-85dd7cc86d-d4xcr" Aug 13 00:20:12.059447 kubelet[3353]: I0813 00:20:12.059302 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7ceca7fb-7209-40f8-b58f-ae2e5d054906-typha-certs\") pod \"calico-typha-85dd7cc86d-d4xcr\" (UID: \"7ceca7fb-7209-40f8-b58f-ae2e5d054906\") " pod="calico-system/calico-typha-85dd7cc86d-d4xcr" Aug 13 00:20:12.059447 kubelet[3353]: I0813 00:20:12.059350 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfq7w\" (UniqueName: \"kubernetes.io/projected/7ceca7fb-7209-40f8-b58f-ae2e5d054906-kube-api-access-cfq7w\") pod \"calico-typha-85dd7cc86d-d4xcr\" (UID: \"7ceca7fb-7209-40f8-b58f-ae2e5d054906\") " pod="calico-system/calico-typha-85dd7cc86d-d4xcr" Aug 13 00:20:12.278128 systemd[1]: Created slice kubepods-besteffort-podbabccb91_89c0_4ef6_a59a_9d12ba327638.slice - libcontainer container kubepods-besteffort-podbabccb91_89c0_4ef6_a59a_9d12ba327638.slice. Aug 13 00:20:12.334065 containerd[2016]: time="2025-08-13T00:20:12.334000333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85dd7cc86d-d4xcr,Uid:7ceca7fb-7209-40f8-b58f-ae2e5d054906,Namespace:calico-system,Attempt:0,}" Aug 13 00:20:12.361975 kubelet[3353]: I0813 00:20:12.361861 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/babccb91-89c0-4ef6-a59a-9d12ba327638-node-certs\") pod \"calico-node-8hwtq\" (UID: \"babccb91-89c0-4ef6-a59a-9d12ba327638\") " pod="calico-system/calico-node-8hwtq" Aug 13 00:20:12.361975 kubelet[3353]: I0813 00:20:12.361939 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/babccb91-89c0-4ef6-a59a-9d12ba327638-xtables-lock\") pod \"calico-node-8hwtq\" (UID: \"babccb91-89c0-4ef6-a59a-9d12ba327638\") " pod="calico-system/calico-node-8hwtq" Aug 13 00:20:12.362283 kubelet[3353]: I0813 00:20:12.361985 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/babccb91-89c0-4ef6-a59a-9d12ba327638-tigera-ca-bundle\") pod \"calico-node-8hwtq\" (UID: \"babccb91-89c0-4ef6-a59a-9d12ba327638\") " pod="calico-system/calico-node-8hwtq" Aug 13 00:20:12.362283 kubelet[3353]: I0813 00:20:12.362024 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/babccb91-89c0-4ef6-a59a-9d12ba327638-var-lib-calico\") pod \"calico-node-8hwtq\" (UID: \"babccb91-89c0-4ef6-a59a-9d12ba327638\") " pod="calico-system/calico-node-8hwtq" Aug 13 00:20:12.362283 kubelet[3353]: I0813 00:20:12.362067 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/babccb91-89c0-4ef6-a59a-9d12ba327638-cni-bin-dir\") pod \"calico-node-8hwtq\" (UID: 
\"babccb91-89c0-4ef6-a59a-9d12ba327638\") " pod="calico-system/calico-node-8hwtq" Aug 13 00:20:12.362283 kubelet[3353]: I0813 00:20:12.362102 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/babccb91-89c0-4ef6-a59a-9d12ba327638-policysync\") pod \"calico-node-8hwtq\" (UID: \"babccb91-89c0-4ef6-a59a-9d12ba327638\") " pod="calico-system/calico-node-8hwtq" Aug 13 00:20:12.362283 kubelet[3353]: I0813 00:20:12.362137 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/babccb91-89c0-4ef6-a59a-9d12ba327638-cni-log-dir\") pod \"calico-node-8hwtq\" (UID: \"babccb91-89c0-4ef6-a59a-9d12ba327638\") " pod="calico-system/calico-node-8hwtq" Aug 13 00:20:12.362615 kubelet[3353]: I0813 00:20:12.362170 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/babccb91-89c0-4ef6-a59a-9d12ba327638-cni-net-dir\") pod \"calico-node-8hwtq\" (UID: \"babccb91-89c0-4ef6-a59a-9d12ba327638\") " pod="calico-system/calico-node-8hwtq" Aug 13 00:20:12.362615 kubelet[3353]: I0813 00:20:12.362220 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/babccb91-89c0-4ef6-a59a-9d12ba327638-flexvol-driver-host\") pod \"calico-node-8hwtq\" (UID: \"babccb91-89c0-4ef6-a59a-9d12ba327638\") " pod="calico-system/calico-node-8hwtq" Aug 13 00:20:12.362615 kubelet[3353]: I0813 00:20:12.362264 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/babccb91-89c0-4ef6-a59a-9d12ba327638-var-run-calico\") pod \"calico-node-8hwtq\" (UID: \"babccb91-89c0-4ef6-a59a-9d12ba327638\") " pod="calico-system/calico-node-8hwtq" Aug 13 00:20:12.362615 kubelet[3353]: I0813 00:20:12.362304 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvcxn\" (UniqueName: \"kubernetes.io/projected/babccb91-89c0-4ef6-a59a-9d12ba327638-kube-api-access-qvcxn\") pod \"calico-node-8hwtq\" (UID: \"babccb91-89c0-4ef6-a59a-9d12ba327638\") " pod="calico-system/calico-node-8hwtq" Aug 13 00:20:12.362615 kubelet[3353]: I0813 00:20:12.362338 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/babccb91-89c0-4ef6-a59a-9d12ba327638-lib-modules\") pod \"calico-node-8hwtq\" (UID: \"babccb91-89c0-4ef6-a59a-9d12ba327638\") " pod="calico-system/calico-node-8hwtq" Aug 13 00:20:12.418216 containerd[2016]: time="2025-08-13T00:20:12.413748241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:12.418216 containerd[2016]: time="2025-08-13T00:20:12.413901205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:12.418216 containerd[2016]: time="2025-08-13T00:20:12.413976169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:20:12.418216 containerd[2016]: time="2025-08-13T00:20:12.414129253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:20:12.468514 kubelet[3353]: E0813 00:20:12.468235 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:20:12.468905 kubelet[3353]: W0813 00:20:12.468832 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:20:12.469767 kubelet[3353]: E0813 00:20:12.469300 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The three records above repeat essentially verbatim, differing only in timestamps, dozens more times between 00:20:12.475 and 00:20:12.702; the repeats are elided here and only the unique records interleaved with them are kept.]
Aug 13 00:20:12.497067 systemd[1]: Started cri-containerd-dd678f1e4e9c5645b67e844ecc8b2131beef92508dbd6b642abd277df7291028.scope - libcontainer container dd678f1e4e9c5645b67e844ecc8b2131beef92508dbd6b642abd277df7291028.
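
The three-line failure above is kubelet's FlexVolume prober exec'ing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init. The binary does not exist yet (calico-node, whose flexvol-driver-host mount appears above, presumably installs it once it starts), so the exec fails, the captured output is empty, and unmarshalling "" as JSON yields "unexpected end of JSON input". For reference, a minimal sketch of the init response a FlexVolume driver is expected to print, per the FlexVolume convention:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// kubelet invokes the driver as: <driver> init
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false}, // node-only driver
		})
		fmt.Println(string(out))
		return
	}
	fmt.Println(`{"status": "Not supported"}`)
}
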
Aug 13 00:20:12.518797 kubelet[3353]: E0813 00:20:12.517225 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zw9d2" podUID="58a4d2bd-5ce6-4a65-a83a-e16060349add"
Aug 13 00:20:12.577058 kubelet[3353]: I0813 00:20:12.576897 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/58a4d2bd-5ce6-4a65-a83a-e16060349add-kubelet-dir\") pod \"csi-node-driver-zw9d2\" (UID: \"58a4d2bd-5ce6-4a65-a83a-e16060349add\") " pod="calico-system/csi-node-driver-zw9d2"
Aug 13 00:20:12.577950 kubelet[3353]: I0813 00:20:12.577838 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/58a4d2bd-5ce6-4a65-a83a-e16060349add-registration-dir\") pod \"csi-node-driver-zw9d2\" (UID: \"58a4d2bd-5ce6-4a65-a83a-e16060349add\") " pod="calico-system/csi-node-driver-zw9d2"
Aug 13 00:20:12.580590 kubelet[3353]: I0813 00:20:12.580598 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm7dm\" (UniqueName: \"kubernetes.io/projected/58a4d2bd-5ce6-4a65-a83a-e16060349add-kube-api-access-xm7dm\") pod \"csi-node-driver-zw9d2\" (UID: \"58a4d2bd-5ce6-4a65-a83a-e16060349add\") " pod="calico-system/csi-node-driver-zw9d2"
Aug 13 00:20:12.581962 kubelet[3353]: I0813 00:20:12.581834 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/58a4d2bd-5ce6-4a65-a83a-e16060349add-socket-dir\") pod \"csi-node-driver-zw9d2\" (UID: \"58a4d2bd-5ce6-4a65-a83a-e16060349add\") " pod="calico-system/csi-node-driver-zw9d2"
Aug 13 00:20:12.583810 kubelet[3353]: I0813 00:20:12.583606 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/58a4d2bd-5ce6-4a65-a83a-e16060349add-varrun\") pod \"csi-node-driver-zw9d2\" (UID: \"58a4d2bd-5ce6-4a65-a83a-e16060349add\") " pod="calico-system/csi-node-driver-zw9d2"
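
One pattern worth noting in the VerifyControllerAttachedVolume records above (and throughout this log): the volume UniqueName is the volume plugin prefix followed by <pod-UID>-<volume-name>, e.g. kubernetes.io/host-path/...-socket-dir for hostPath volumes and kubernetes.io/projected/...-kube-api-access-xm7dm for the projected service-account token. A sketch of the composition, inferred from the records rather than from a published API:

package main

import "fmt"

// uniqueVolumeName mirrors the UniqueName format visible in the
// reconciler_common.go:251 records: <plugin>/<podUID>-<volumeName>.
func uniqueVolumeName(plugin, podUID, volume string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
}

func main() {
	uid := "58a4d2bd-5ce6-4a65-a83a-e16060349add" // csi-node-driver-zw9d2
	fmt.Println(uniqueVolumeName("kubernetes.io/host-path", uid, "socket-dir"))
	fmt.Println(uniqueVolumeName("kubernetes.io/projected", uid, "kube-api-access-xm7dm"))
}
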
Aug 13 00:20:12.601783 containerd[2016]: time="2025-08-13T00:20:12.601457378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8hwtq,Uid:babccb91-89c0-4ef6-a59a-9d12ba327638,Namespace:calico-system,Attempt:0,}"
Aug 13 00:20:12.680527 containerd[2016]: time="2025-08-13T00:20:12.680273918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:20:12.680527 containerd[2016]: time="2025-08-13T00:20:12.680461958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:20:12.681074 containerd[2016]: time="2025-08-13T00:20:12.680777150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:20:12.681227 containerd[2016]: time="2025-08-13T00:20:12.681037874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Error: unexpected end of JSON input" Aug 13 00:20:12.703597 kubelet[3353]: E0813 00:20:12.703405 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:12.703597 kubelet[3353]: W0813 00:20:12.703429 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:12.703597 kubelet[3353]: E0813 00:20:12.703458 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:12.704945 kubelet[3353]: E0813 00:20:12.704169 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:12.704945 kubelet[3353]: W0813 00:20:12.704202 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:12.704945 kubelet[3353]: E0813 00:20:12.704229 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:12.704945 kubelet[3353]: E0813 00:20:12.704944 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:12.705292 kubelet[3353]: W0813 00:20:12.704966 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:12.705292 kubelet[3353]: E0813 00:20:12.704992 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:12.708124 kubelet[3353]: E0813 00:20:12.705684 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:12.708124 kubelet[3353]: W0813 00:20:12.705717 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:12.708124 kubelet[3353]: E0813 00:20:12.705746 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:12.708124 kubelet[3353]: E0813 00:20:12.706464 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:12.708124 kubelet[3353]: W0813 00:20:12.706569 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:12.708124 kubelet[3353]: E0813 00:20:12.706618 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:12.708124 kubelet[3353]: E0813 00:20:12.707380 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:12.708124 kubelet[3353]: W0813 00:20:12.707407 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:12.708124 kubelet[3353]: E0813 00:20:12.707436 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:12.709550 kubelet[3353]: E0813 00:20:12.708512 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:12.709550 kubelet[3353]: W0813 00:20:12.708660 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:12.709550 kubelet[3353]: E0813 00:20:12.708695 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:12.712183 kubelet[3353]: E0813 00:20:12.711823 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:12.712183 kubelet[3353]: W0813 00:20:12.711875 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:12.712183 kubelet[3353]: E0813 00:20:12.711911 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:12.713080 kubelet[3353]: E0813 00:20:12.712956 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:12.713080 kubelet[3353]: W0813 00:20:12.713014 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:12.713543 kubelet[3353]: E0813 00:20:12.713318 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:12.714376 kubelet[3353]: E0813 00:20:12.713912 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:12.714376 kubelet[3353]: W0813 00:20:12.713939 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:12.714376 kubelet[3353]: E0813 00:20:12.713968 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:12.714887 kubelet[3353]: E0813 00:20:12.714856 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:12.715345 kubelet[3353]: W0813 00:20:12.715048 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:12.715510 kubelet[3353]: E0813 00:20:12.715479 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:12.752021 kubelet[3353]: E0813 00:20:12.751984 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:12.752263 kubelet[3353]: W0813 00:20:12.752168 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:12.752263 kubelet[3353]: E0813 00:20:12.752208 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:12.756408 systemd[1]: Started cri-containerd-fba95ec12704329877f3c7e03e0adf4033f38d68e5d05faf79f3e0fcfd6e2c38.scope - libcontainer container fba95ec12704329877f3c7e03e0adf4033f38d68e5d05faf79f3e0fcfd6e2c38. Aug 13 00:20:12.801925 containerd[2016]: time="2025-08-13T00:20:12.800392779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-85dd7cc86d-d4xcr,Uid:7ceca7fb-7209-40f8-b58f-ae2e5d054906,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd678f1e4e9c5645b67e844ecc8b2131beef92508dbd6b642abd277df7291028\"" Aug 13 00:20:12.806916 containerd[2016]: time="2025-08-13T00:20:12.806837631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 00:20:12.907849 containerd[2016]: time="2025-08-13T00:20:12.907752664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8hwtq,Uid:babccb91-89c0-4ef6-a59a-9d12ba327638,Namespace:calico-system,Attempt:0,} returns sandbox id \"fba95ec12704329877f3c7e03e0adf4033f38d68e5d05faf79f3e0fcfd6e2c38\"" Aug 13 00:20:13.203069 systemd[1]: run-containerd-runc-k8s.io-dd678f1e4e9c5645b67e844ecc8b2131beef92508dbd6b642abd277df7291028-runc.t5Dd6a.mount: Deactivated successfully. Aug 13 00:20:13.983095 kubelet[3353]: E0813 00:20:13.983043 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zw9d2" podUID="58a4d2bd-5ce6-4a65-a83a-e16060349add" Aug 13 00:20:14.074379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2166503281.mount: Deactivated successfully. 
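The FlexVolume burst above is kubelet's dynamic plugin prober at work: it execs each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver> with the argument init and JSON-decodes stdout. Because the nodeagent~uds/uds binary does not exist yet, the exec fails, stdout stays empty, and decoding the empty output yields exactly the "unexpected end of JSON input" paired with every failure. A minimal sketch of that sequence (not kubelet's actual code; the DriverStatus fields are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus holds the fields a FlexVolume driver is expected to print as
// JSON on stdout; the field set here is illustrative, not exhaustive.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	// Step 1: exec the driver with "init". With the binary missing, err is
	// non-nil and out stays empty, mirroring the W driver-call.go:149 lines.
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	// Step 2: decode stdout. json.Unmarshal of empty input fails with
	// "unexpected end of JSON input", mirroring the E driver-call.go:262 lines.
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Printf("failed to unmarshal output: %v\n", err)
		return
	}
	fmt.Printf("driver status: %+v\n", st)
}
```

The key point is that the two errors are one failure reported twice: the empty output from the failed exec is what the JSON decoder then chokes on.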
Aug 13 00:20:14.876609 containerd[2016]: time="2025-08-13T00:20:14.876545069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:20:14.877995 containerd[2016]: time="2025-08-13T00:20:14.877937477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Aug 13 00:20:14.878917 containerd[2016]: time="2025-08-13T00:20:14.878828969Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:20:14.882558 containerd[2016]: time="2025-08-13T00:20:14.882456437Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:20:14.884171 containerd[2016]: time="2025-08-13T00:20:14.884003345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 2.077100998s"
Aug 13 00:20:14.884171 containerd[2016]: time="2025-08-13T00:20:14.884056721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Aug 13 00:20:14.886274 containerd[2016]: time="2025-08-13T00:20:14.886163789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 00:20:14.913181 containerd[2016]: time="2025-08-13T00:20:14.913024422Z" level=info msg="CreateContainer within sandbox \"dd678f1e4e9c5645b67e844ecc8b2131beef92508dbd6b642abd277df7291028\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 13 00:20:14.938436 containerd[2016]: time="2025-08-13T00:20:14.938361330Z" level=info msg="CreateContainer within sandbox \"dd678f1e4e9c5645b67e844ecc8b2131beef92508dbd6b642abd277df7291028\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"24c5d4eab241858dfe73bd7b40fa0f5c01270e2f571166f2881ea8abf3a48cd6\""
Aug 13 00:20:14.941306 containerd[2016]: time="2025-08-13T00:20:14.940916658Z" level=info msg="StartContainer for \"24c5d4eab241858dfe73bd7b40fa0f5c01270e2f571166f2881ea8abf3a48cd6\""
Aug 13 00:20:15.000954 systemd[1]: Started cri-containerd-24c5d4eab241858dfe73bd7b40fa0f5c01270e2f571166f2881ea8abf3a48cd6.scope - libcontainer container 24c5d4eab241858dfe73bd7b40fa0f5c01270e2f571166f2881ea8abf3a48cd6.
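The "in 2.077100998s" above is containerd's own measurement of the typha pull; as a sanity check, the wall-clock gap between the PullImage and Pulled log timestamps lands close to it. A quick sketch (plain timestamp arithmetic, nothing containerd-specific):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps lifted from the PullImage and Pulled entries above.
	start, _ := time.Parse(time.RFC3339Nano, "2025-08-13T00:20:12.806837631Z")
	done, _ := time.Parse(time.RFC3339Nano, "2025-08-13T00:20:14.884003345Z")
	// Prints 2.077165714s, within roughly 65µs of the reported 2.077100998s;
	// the small gap is the logging overhead around containerd's internal timer.
	fmt.Println(done.Sub(start))
}
```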
Aug 13 00:20:15.067074 containerd[2016]: time="2025-08-13T00:20:15.066878186Z" level=info msg="StartContainer for \"24c5d4eab241858dfe73bd7b40fa0f5c01270e2f571166f2881ea8abf3a48cd6\" returns successfully"
Aug 13 00:20:15.194923 kubelet[3353]: E0813 00:20:15.194083 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:20:15.194923 kubelet[3353]: W0813 00:20:15.194256 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:20:15.194923 kubelet[3353]: E0813 00:20:15.194682 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the FlexVolume probe-failure triple repeats, timestamps aside, through Aug 13 00:20:15.249928]
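Bursts like the one condensed above are near-duplicates that differ only in their timestamps, so they collapse well mechanically. A throwaway filter sketch (the timestamp regex is an assumption about this journal's HH:MM:SS.fraction stamps) that folds consecutive repeats:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// ts matches the clock stamps that vary between otherwise identical lines.
var ts = regexp.MustCompile(`\d{2}:\d{2}:\d{2}\.\d+`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	var prev string
	count := 0
	flush := func() {
		if count > 1 {
			fmt.Printf("[last message repeated %d times]\n", count-1)
		}
	}
	for sc.Scan() {
		line := sc.Text()
		// Normalize away the timestamps before comparing with the last line.
		key := ts.ReplaceAllString(line, "HH:MM:SS")
		if key == prev {
			count++
			continue
		}
		flush()
		fmt.Println(line)
		prev, count = key, 1
	}
	flush()
}
```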
Aug 13 00:20:15.982419 kubelet[3353]: E0813 00:20:15.982365 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zw9d2" podUID="58a4d2bd-5ce6-4a65-a83a-e16060349add"
Aug 13 00:20:16.182517 kubelet[3353]: I0813 00:20:16.181146 3353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-85dd7cc86d-d4xcr" podStartSLOduration=3.101805902 podStartE2EDuration="5.181122556s" podCreationTimestamp="2025-08-13 00:20:11 +0000 UTC" firstStartedPulling="2025-08-13 00:20:12.806103183 +0000 UTC m=+29.161292978" lastFinishedPulling="2025-08-13 00:20:14.885419837 +0000 UTC m=+31.240609632" observedRunningTime="2025-08-13 00:20:15.167201871 +0000 UTC m=+31.522391690" watchObservedRunningTime="2025-08-13 00:20:16.181122556 +0000 UTC m=+32.536312339"
Aug 13 00:20:16.189796 containerd[2016]: time="2025-08-13T00:20:16.189627652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:20:16.193787 containerd[2016]: time="2025-08-13T00:20:16.193235236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981"
Aug 13 00:20:16.198025 containerd[2016]: time="2025-08-13T00:20:16.196794340Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:20:16.205729 containerd[2016]: time="2025-08-13T00:20:16.205580512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:20:16.206628 containerd[2016]: time="2025-08-13T00:20:16.205919452Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.319688439s"
Aug 13 00:20:16.206628 containerd[2016]: time="2025-08-13T00:20:16.206106772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\""
Aug 13 00:20:16.217546 kubelet[3353]: E0813 00:20:16.217507 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:20:16.218396 kubelet[3353]: W0813 00:20:16.217807 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:20:16.218396 kubelet[3353]: E0813 00:20:16.217848 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
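The pod_startup_latency_tracker line above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (5.181122556s), and podStartSLOduration is that minus the image-pull window lastFinishedPulling minus firstStartedPulling (2.079316654s), which gives 3.101805902s. A check of the arithmetic (an interpretation of the logged fields, not kubelet source):

```go
package main

import (
	"fmt"
	"time"
)

// layout matches the "2025-08-13 00:20:11 +0000 UTC" form in the tracker line.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-08-13 00:20:11 +0000 UTC")
	firstPull := mustParse("2025-08-13 00:20:12.806103183 +0000 UTC")
	lastPull := mustParse("2025-08-13 00:20:14.885419837 +0000 UTC")
	watched := mustParse("2025-08-13 00:20:16.181122556 +0000 UTC")

	e2e := watched.Sub(created)          // 5.181122556s, the podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 3.101805902s, the podStartSLOduration
	fmt.Println(e2e, slo)
}
```

Both printed values reproduce the logged numbers exactly, so the SLO figure is evidently the end-to-end startup time with image pulling excluded.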
[the FlexVolume probe-failure triple repeats, timestamps aside, from Aug 13 00:20:16.219629 through Aug 13 00:20:16.265341]
Aug 13 00:20:16.221071 containerd[2016]: time="2025-08-13T00:20:16.219723484Z" level=info msg="CreateContainer within sandbox \"fba95ec12704329877f3c7e03e0adf4033f38d68e5d05faf79f3e0fcfd6e2c38\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Error: unexpected end of JSON input" Aug 13 00:20:16.263894 kubelet[3353]: E0813 00:20:16.263373 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:16.263894 kubelet[3353]: W0813 00:20:16.263392 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:16.263894 kubelet[3353]: E0813 00:20:16.263414 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:16.264786 kubelet[3353]: E0813 00:20:16.264325 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:16.264786 kubelet[3353]: W0813 00:20:16.264351 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:16.264786 kubelet[3353]: E0813 00:20:16.264380 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:16.264983 containerd[2016]: time="2025-08-13T00:20:16.264910840Z" level=info msg="CreateContainer within sandbox \"fba95ec12704329877f3c7e03e0adf4033f38d68e5d05faf79f3e0fcfd6e2c38\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"339640d7dfb84fd761fa7672306cbf6727c0386431719d7da372df3cc72b3e7e\"" Aug 13 00:20:16.266961 kubelet[3353]: E0813 00:20:16.265277 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:16.266961 kubelet[3353]: W0813 00:20:16.265311 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:16.266961 kubelet[3353]: E0813 00:20:16.265341 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:16.268213 kubelet[3353]: E0813 00:20:16.268175 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:16.268482 kubelet[3353]: W0813 00:20:16.268353 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:16.268482 kubelet[3353]: E0813 00:20:16.268391 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 00:20:16.269030 containerd[2016]: time="2025-08-13T00:20:16.268918204Z" level=info msg="StartContainer for \"339640d7dfb84fd761fa7672306cbf6727c0386431719d7da372df3cc72b3e7e\"" Aug 13 00:20:16.270561 kubelet[3353]: E0813 00:20:16.270526 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:16.271334 kubelet[3353]: W0813 00:20:16.270869 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:16.271334 kubelet[3353]: E0813 00:20:16.270909 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:16.272578 kubelet[3353]: E0813 00:20:16.272438 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:16.272578 kubelet[3353]: W0813 00:20:16.272495 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:16.273102 kubelet[3353]: E0813 00:20:16.272526 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:16.275005 kubelet[3353]: E0813 00:20:16.274663 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:16.275005 kubelet[3353]: W0813 00:20:16.274698 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:16.275005 kubelet[3353]: E0813 00:20:16.274757 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:16.275964 kubelet[3353]: E0813 00:20:16.275845 3353 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:20:16.275964 kubelet[3353]: W0813 00:20:16.275873 3353 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:20:16.275964 kubelet[3353]: E0813 00:20:16.275904 3353 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:20:16.336977 systemd[1]: Started cri-containerd-339640d7dfb84fd761fa7672306cbf6727c0386431719d7da372df3cc72b3e7e.scope - libcontainer container 339640d7dfb84fd761fa7672306cbf6727c0386431719d7da372df3cc72b3e7e. Aug 13 00:20:16.399393 containerd[2016]: time="2025-08-13T00:20:16.399222089Z" level=info msg="StartContainer for \"339640d7dfb84fd761fa7672306cbf6727c0386431719d7da372df3cc72b3e7e\" returns successfully" Aug 13 00:20:16.427511 systemd[1]: cri-containerd-339640d7dfb84fd761fa7672306cbf6727c0386431719d7da372df3cc72b3e7e.scope: Deactivated successfully. 
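The wall of `driver-call.go` / `plugins.go` errors above is kubelet's FlexVolume probe: it executes `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` with the argument `init` and tries to unmarshal the process's stdout as JSON. The binary is absent here, so stdout is empty and the unmarshal fails with "unexpected end of JSON input". Below is a minimal Go sketch of a driver that satisfies that handshake; it is illustrative only, not the real `nodeagent~uds` driver that Calico's flexvol-driver container installs.

```go
// Minimal sketch of the FlexVolume handshake kubelet expects: every driver
// call must print a JSON DriverStatus to stdout. Empty output is exactly what
// produces the "unexpected end of JSON input" errors in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// "init" must report whether the driver supports attach/detach.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any unhandled call should still answer with valid JSON.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```

Once a binary honoring this contract lands in the `nodeagent~uds` plugin directory (which is what the flexvol-driver container being started above is for), the probe errors stop.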
Aug 13 00:20:16.836835 containerd[2016]: time="2025-08-13T00:20:16.836710339Z" level=info msg="shim disconnected" id=339640d7dfb84fd761fa7672306cbf6727c0386431719d7da372df3cc72b3e7e namespace=k8s.io Aug 13 00:20:16.836835 containerd[2016]: time="2025-08-13T00:20:16.836783479Z" level=warning msg="cleaning up after shim disconnected" id=339640d7dfb84fd761fa7672306cbf6727c0386431719d7da372df3cc72b3e7e namespace=k8s.io Aug 13 00:20:16.836835 containerd[2016]: time="2025-08-13T00:20:16.836803195Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:20:16.896628 systemd[1]: run-containerd-runc-k8s.io-339640d7dfb84fd761fa7672306cbf6727c0386431719d7da372df3cc72b3e7e-runc.2HDvkQ.mount: Deactivated successfully. Aug 13 00:20:16.896821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-339640d7dfb84fd761fa7672306cbf6727c0386431719d7da372df3cc72b3e7e-rootfs.mount: Deactivated successfully. Aug 13 00:20:17.154298 containerd[2016]: time="2025-08-13T00:20:17.153664145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:20:17.984695 kubelet[3353]: E0813 00:20:17.984061 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zw9d2" podUID="58a4d2bd-5ce6-4a65-a83a-e16060349add" Aug 13 00:20:19.850450 containerd[2016]: time="2025-08-13T00:20:19.849624094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:19.852089 containerd[2016]: time="2025-08-13T00:20:19.852026338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 13 00:20:19.854713 containerd[2016]: time="2025-08-13T00:20:19.854580274Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:19.859906 containerd[2016]: time="2025-08-13T00:20:19.859841746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:19.862222 containerd[2016]: time="2025-08-13T00:20:19.862174774Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.708419129s" Aug 13 00:20:19.862465 containerd[2016]: time="2025-08-13T00:20:19.862392166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 13 00:20:19.874674 containerd[2016]: time="2025-08-13T00:20:19.874594090Z" level=info msg="CreateContainer within sandbox \"fba95ec12704329877f3c7e03e0adf4033f38d68e5d05faf79f3e0fcfd6e2c38\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:20:19.906057 containerd[2016]: time="2025-08-13T00:20:19.905979682Z" level=info msg="CreateContainer within sandbox \"fba95ec12704329877f3c7e03e0adf4033f38d68e5d05faf79f3e0fcfd6e2c38\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"59286126de66555135f89f377b2dd10d03dd84c12431082288d8929590ad9e4d\"" Aug 13 00:20:19.906820 containerd[2016]: time="2025-08-13T00:20:19.906773074Z" level=info msg="StartContainer for \"59286126de66555135f89f377b2dd10d03dd84c12431082288d8929590ad9e4d\"" Aug 13 00:20:19.977926 systemd[1]: Started cri-containerd-59286126de66555135f89f377b2dd10d03dd84c12431082288d8929590ad9e4d.scope - libcontainer container 59286126de66555135f89f377b2dd10d03dd84c12431082288d8929590ad9e4d. Aug 13 00:20:19.984051 kubelet[3353]: E0813 00:20:19.983528 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zw9d2" podUID="58a4d2bd-5ce6-4a65-a83a-e16060349add" Aug 13 00:20:20.147156 containerd[2016]: time="2025-08-13T00:20:20.146593844Z" level=info msg="StartContainer for \"59286126de66555135f89f377b2dd10d03dd84c12431082288d8929590ad9e4d\" returns successfully" Aug 13 00:20:21.095019 kubelet[3353]: I0813 00:20:21.094955 3353 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 00:20:21.100038 systemd[1]: cri-containerd-59286126de66555135f89f377b2dd10d03dd84c12431082288d8929590ad9e4d.scope: Deactivated successfully. Aug 13 00:20:21.189572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59286126de66555135f89f377b2dd10d03dd84c12431082288d8929590ad9e4d-rootfs.mount: Deactivated successfully. Aug 13 00:20:21.199190 kubelet[3353]: I0813 00:20:21.193877 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/899e91ee-239a-47a8-b50b-26e16e9ebb04-config-volume\") pod \"coredns-674b8bbfcf-z8tcq\" (UID: \"899e91ee-239a-47a8-b50b-26e16e9ebb04\") " pod="kube-system/coredns-674b8bbfcf-z8tcq" Aug 13 00:20:21.201827 kubelet[3353]: I0813 00:20:21.201735 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjhpk\" (UniqueName: \"kubernetes.io/projected/899e91ee-239a-47a8-b50b-26e16e9ebb04-kube-api-access-rjhpk\") pod \"coredns-674b8bbfcf-z8tcq\" (UID: \"899e91ee-239a-47a8-b50b-26e16e9ebb04\") " pod="kube-system/coredns-674b8bbfcf-z8tcq" Aug 13 00:20:21.247711 systemd[1]: Created slice kubepods-burstable-pod3196cb5b_591a_4704_a9d8_fe628e63a24b.slice - libcontainer container kubepods-burstable-pod3196cb5b_591a_4704_a9d8_fe628e63a24b.slice. Aug 13 00:20:21.290520 systemd[1]: Created slice kubepods-burstable-pod899e91ee_239a_47a8_b50b_26e16e9ebb04.slice - libcontainer container kubepods-burstable-pod899e91ee_239a_47a8_b50b_26e16e9ebb04.slice. 
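For context on the `PullImage` / `ImageCreate` events above: the CRI plugin records a pull of `ghcr.io/flatcar/calico/cni:v3.30.2` (~66 MB read in ~2.7 s) before creating the `install-cni` container. A sketch of the equivalent pull through the containerd Go client follows, assuming the default socket path and the `k8s.io` namespace the CRI plugin uses.

```go
// Sketch: pull the same image the log records, via the containerd client.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // assumed socket path
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}
```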
Aug 13 00:20:21.304710 kubelet[3353]: I0813 00:20:21.302219 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2ld4\" (UniqueName: \"kubernetes.io/projected/3196cb5b-591a-4704-a9d8-fe628e63a24b-kube-api-access-f2ld4\") pod \"coredns-674b8bbfcf-tbpmv\" (UID: \"3196cb5b-591a-4704-a9d8-fe628e63a24b\") " pod="kube-system/coredns-674b8bbfcf-tbpmv" Aug 13 00:20:21.308754 kubelet[3353]: I0813 00:20:21.304935 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550-whisker-backend-key-pair\") pod \"whisker-7c697d5669-s2sn4\" (UID: \"dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550\") " pod="calico-system/whisker-7c697d5669-s2sn4" Aug 13 00:20:21.308754 kubelet[3353]: I0813 00:20:21.304985 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550-whisker-ca-bundle\") pod \"whisker-7c697d5669-s2sn4\" (UID: \"dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550\") " pod="calico-system/whisker-7c697d5669-s2sn4" Aug 13 00:20:21.308754 kubelet[3353]: I0813 00:20:21.305076 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7a5dfad8-b698-48ca-a507-a2288b46e2e3-goldmane-key-pair\") pod \"goldmane-768f4c5c69-xkhnd\" (UID: \"7a5dfad8-b698-48ca-a507-a2288b46e2e3\") " pod="calico-system/goldmane-768f4c5c69-xkhnd" Aug 13 00:20:21.308754 kubelet[3353]: I0813 00:20:21.305115 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3196cb5b-591a-4704-a9d8-fe628e63a24b-config-volume\") pod \"coredns-674b8bbfcf-tbpmv\" (UID: \"3196cb5b-591a-4704-a9d8-fe628e63a24b\") " pod="kube-system/coredns-674b8bbfcf-tbpmv" Aug 13 00:20:21.308754 kubelet[3353]: I0813 00:20:21.305177 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbrqx\" (UniqueName: \"kubernetes.io/projected/b69d49e6-4f1c-4a7b-bc02-821b576330fe-kube-api-access-xbrqx\") pod \"calico-apiserver-76df78f95-t54cs\" (UID: \"b69d49e6-4f1c-4a7b-bc02-821b576330fe\") " pod="calico-apiserver/calico-apiserver-76df78f95-t54cs" Aug 13 00:20:21.315543 kubelet[3353]: I0813 00:20:21.305216 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8nqm\" (UniqueName: \"kubernetes.io/projected/7a5dfad8-b698-48ca-a507-a2288b46e2e3-kube-api-access-n8nqm\") pod \"goldmane-768f4c5c69-xkhnd\" (UID: \"7a5dfad8-b698-48ca-a507-a2288b46e2e3\") " pod="calico-system/goldmane-768f4c5c69-xkhnd" Aug 13 00:20:21.315543 kubelet[3353]: I0813 00:20:21.305261 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtb9x\" (UniqueName: \"kubernetes.io/projected/9ed40b4c-f347-4ecd-b45b-59769ec7e382-kube-api-access-vtb9x\") pod \"calico-apiserver-76df78f95-gh9qn\" (UID: \"9ed40b4c-f347-4ecd-b45b-59769ec7e382\") " pod="calico-apiserver/calico-apiserver-76df78f95-gh9qn" Aug 13 00:20:21.315543 kubelet[3353]: I0813 00:20:21.305398 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9ss4\" (UniqueName: 
\"kubernetes.io/projected/dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550-kube-api-access-c9ss4\") pod \"whisker-7c697d5669-s2sn4\" (UID: \"dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550\") " pod="calico-system/whisker-7c697d5669-s2sn4" Aug 13 00:20:21.315543 kubelet[3353]: I0813 00:20:21.305458 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0079dcf-b4da-4691-a20c-b4a4e5395937-tigera-ca-bundle\") pod \"calico-kube-controllers-cfdf54966-lrl8t\" (UID: \"d0079dcf-b4da-4691-a20c-b4a4e5395937\") " pod="calico-system/calico-kube-controllers-cfdf54966-lrl8t" Aug 13 00:20:21.315543 kubelet[3353]: I0813 00:20:21.305499 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcdfl\" (UniqueName: \"kubernetes.io/projected/d0079dcf-b4da-4691-a20c-b4a4e5395937-kube-api-access-xcdfl\") pod \"calico-kube-controllers-cfdf54966-lrl8t\" (UID: \"d0079dcf-b4da-4691-a20c-b4a4e5395937\") " pod="calico-system/calico-kube-controllers-cfdf54966-lrl8t" Aug 13 00:20:21.315874 kubelet[3353]: I0813 00:20:21.305540 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b69d49e6-4f1c-4a7b-bc02-821b576330fe-calico-apiserver-certs\") pod \"calico-apiserver-76df78f95-t54cs\" (UID: \"b69d49e6-4f1c-4a7b-bc02-821b576330fe\") " pod="calico-apiserver/calico-apiserver-76df78f95-t54cs" Aug 13 00:20:21.315874 kubelet[3353]: I0813 00:20:21.305578 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a5dfad8-b698-48ca-a507-a2288b46e2e3-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-xkhnd\" (UID: \"7a5dfad8-b698-48ca-a507-a2288b46e2e3\") " pod="calico-system/goldmane-768f4c5c69-xkhnd" Aug 13 00:20:21.315874 kubelet[3353]: I0813 00:20:21.305627 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a5dfad8-b698-48ca-a507-a2288b46e2e3-config\") pod \"goldmane-768f4c5c69-xkhnd\" (UID: \"7a5dfad8-b698-48ca-a507-a2288b46e2e3\") " pod="calico-system/goldmane-768f4c5c69-xkhnd" Aug 13 00:20:21.315874 kubelet[3353]: I0813 00:20:21.305724 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9ed40b4c-f347-4ecd-b45b-59769ec7e382-calico-apiserver-certs\") pod \"calico-apiserver-76df78f95-gh9qn\" (UID: \"9ed40b4c-f347-4ecd-b45b-59769ec7e382\") " pod="calico-apiserver/calico-apiserver-76df78f95-gh9qn" Aug 13 00:20:21.328513 systemd[1]: Created slice kubepods-besteffort-pod9ed40b4c_f347_4ecd_b45b_59769ec7e382.slice - libcontainer container kubepods-besteffort-pod9ed40b4c_f347_4ecd_b45b_59769ec7e382.slice. Aug 13 00:20:21.366791 systemd[1]: Created slice kubepods-besteffort-podb69d49e6_4f1c_4a7b_bc02_821b576330fe.slice - libcontainer container kubepods-besteffort-podb69d49e6_4f1c_4a7b_bc02_821b576330fe.slice. Aug 13 00:20:21.390742 systemd[1]: Created slice kubepods-besteffort-podd0079dcf_b4da_4691_a20c_b4a4e5395937.slice - libcontainer container kubepods-besteffort-podd0079dcf_b4da_4691_a20c_b4a4e5395937.slice. 
Aug 13 00:20:21.410409 systemd[1]: Created slice kubepods-besteffort-poddd1d7e2d_dae3_4cab_8465_dfe5cfe7f550.slice - libcontainer container kubepods-besteffort-poddd1d7e2d_dae3_4cab_8465_dfe5cfe7f550.slice. Aug 13 00:20:21.474351 systemd[1]: Created slice kubepods-besteffort-pod7a5dfad8_b698_48ca_a507_a2288b46e2e3.slice - libcontainer container kubepods-besteffort-pod7a5dfad8_b698_48ca_a507_a2288b46e2e3.slice. Aug 13 00:20:21.572924 containerd[2016]: time="2025-08-13T00:20:21.572871779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tbpmv,Uid:3196cb5b-591a-4704-a9d8-fe628e63a24b,Namespace:kube-system,Attempt:0,}" Aug 13 00:20:21.612868 containerd[2016]: time="2025-08-13T00:20:21.612696611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z8tcq,Uid:899e91ee-239a-47a8-b50b-26e16e9ebb04,Namespace:kube-system,Attempt:0,}" Aug 13 00:20:21.637278 containerd[2016]: time="2025-08-13T00:20:21.637020803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76df78f95-gh9qn,Uid:9ed40b4c-f347-4ecd-b45b-59769ec7e382,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:20:21.679796 containerd[2016]: time="2025-08-13T00:20:21.679735283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76df78f95-t54cs,Uid:b69d49e6-4f1c-4a7b-bc02-821b576330fe,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:20:21.703200 containerd[2016]: time="2025-08-13T00:20:21.703102775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cfdf54966-lrl8t,Uid:d0079dcf-b4da-4691-a20c-b4a4e5395937,Namespace:calico-system,Attempt:0,}" Aug 13 00:20:21.792849 containerd[2016]: time="2025-08-13T00:20:21.792761484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c697d5669-s2sn4,Uid:dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550,Namespace:calico-system,Attempt:0,}" Aug 13 00:20:21.800088 containerd[2016]: time="2025-08-13T00:20:21.799984464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xkhnd,Uid:7a5dfad8-b698-48ca-a507-a2288b46e2e3,Namespace:calico-system,Attempt:0,}" Aug 13 00:20:21.816416 containerd[2016]: time="2025-08-13T00:20:21.816181704Z" level=info msg="shim disconnected" id=59286126de66555135f89f377b2dd10d03dd84c12431082288d8929590ad9e4d namespace=k8s.io Aug 13 00:20:21.816416 containerd[2016]: time="2025-08-13T00:20:21.816269736Z" level=warning msg="cleaning up after shim disconnected" id=59286126de66555135f89f377b2dd10d03dd84c12431082288d8929590ad9e4d namespace=k8s.io Aug 13 00:20:21.816416 containerd[2016]: time="2025-08-13T00:20:21.816297960Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:20:22.006964 systemd[1]: Created slice kubepods-besteffort-pod58a4d2bd_5ce6_4a65_a83a_e16060349add.slice - libcontainer container kubepods-besteffort-pod58a4d2bd_5ce6_4a65_a83a_e16060349add.slice. 
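Each `RunPodSandbox for &PodSandboxMetadata{...}` line above is a CRI call from kubelet to containerd. A sketch of the same call made directly against the CRI socket follows; the socket path and client wiring are assumptions, while the metadata values are taken from the coredns entry above.

```go
// Sketch: issue RunPodSandbox over the CRI gRPC API, as kubelet does.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed socket
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "coredns-674b8bbfcf-tbpmv",
				Uid:       "3196cb5b-591a-4704-a9d8-fe628e63a24b",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		// This is the rpc error kubelet surfaces as CreatePodSandboxError below.
		log.Fatal(err)
	}
	log.Println("sandbox:", resp.PodSandboxId)
}
```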
Aug 13 00:20:22.018244 containerd[2016]: time="2025-08-13T00:20:22.018012213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zw9d2,Uid:58a4d2bd-5ce6-4a65-a83a-e16060349add,Namespace:calico-system,Attempt:0,}" Aug 13 00:20:22.258022 containerd[2016]: time="2025-08-13T00:20:22.257401738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:20:22.421695 containerd[2016]: time="2025-08-13T00:20:22.420403211Z" level=error msg="Failed to destroy network for sandbox \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.425674 containerd[2016]: time="2025-08-13T00:20:22.425497835Z" level=error msg="encountered an error cleaning up failed sandbox \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.426854 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994-shm.mount: Deactivated successfully. Aug 13 00:20:22.427423 containerd[2016]: time="2025-08-13T00:20:22.425608475Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z8tcq,Uid:899e91ee-239a-47a8-b50b-26e16e9ebb04,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.428030 kubelet[3353]: E0813 00:20:22.427942 3353 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.428933 kubelet[3353]: E0813 00:20:22.428040 3353 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z8tcq" Aug 13 00:20:22.428933 kubelet[3353]: E0813 00:20:22.428077 3353 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-z8tcq" Aug 13 00:20:22.428933 kubelet[3353]: E0813 00:20:22.428168 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-674b8bbfcf-z8tcq_kube-system(899e91ee-239a-47a8-b50b-26e16e9ebb04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-z8tcq_kube-system(899e91ee-239a-47a8-b50b-26e16e9ebb04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-z8tcq" podUID="899e91ee-239a-47a8-b50b-26e16e9ebb04" Aug 13 00:20:22.492329 containerd[2016]: time="2025-08-13T00:20:22.487814735Z" level=error msg="Failed to destroy network for sandbox \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.492329 containerd[2016]: time="2025-08-13T00:20:22.490051187Z" level=error msg="encountered an error cleaning up failed sandbox \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.493018 containerd[2016]: time="2025-08-13T00:20:22.490147943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cfdf54966-lrl8t,Uid:d0079dcf-b4da-4691-a20c-b4a4e5395937,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.496312 kubelet[3353]: E0813 00:20:22.494717 3353 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.496312 kubelet[3353]: E0813 00:20:22.494807 3353 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cfdf54966-lrl8t" Aug 13 00:20:22.496312 kubelet[3353]: E0813 00:20:22.494843 3353 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cfdf54966-lrl8t" Aug 13 00:20:22.495659 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c-shm.mount: Deactivated successfully. Aug 13 00:20:22.496733 kubelet[3353]: E0813 00:20:22.494945 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cfdf54966-lrl8t_calico-system(d0079dcf-b4da-4691-a20c-b4a4e5395937)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cfdf54966-lrl8t_calico-system(d0079dcf-b4da-4691-a20c-b4a4e5395937)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cfdf54966-lrl8t" podUID="d0079dcf-b4da-4691-a20c-b4a4e5395937" Aug 13 00:20:22.520839 containerd[2016]: time="2025-08-13T00:20:22.520496399Z" level=error msg="Failed to destroy network for sandbox \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.523326 containerd[2016]: time="2025-08-13T00:20:22.523098887Z" level=error msg="Failed to destroy network for sandbox \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.524009 containerd[2016]: time="2025-08-13T00:20:22.523819655Z" level=error msg="Failed to destroy network for sandbox \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.526302 containerd[2016]: time="2025-08-13T00:20:22.526120775Z" level=error msg="Failed to destroy network for sandbox \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.530242 containerd[2016]: time="2025-08-13T00:20:22.527139227Z" level=error msg="encountered an error cleaning up failed sandbox \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.530242 containerd[2016]: time="2025-08-13T00:20:22.527232275Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76df78f95-gh9qn,Uid:9ed40b4c-f347-4ecd-b45b-59769ec7e382,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Aug 13 00:20:22.530499 kubelet[3353]: E0813 00:20:22.529822 3353 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.530499 kubelet[3353]: E0813 00:20:22.529897 3353 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76df78f95-gh9qn" Aug 13 00:20:22.530499 kubelet[3353]: E0813 00:20:22.529931 3353 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76df78f95-gh9qn" Aug 13 00:20:22.531125 kubelet[3353]: E0813 00:20:22.530009 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76df78f95-gh9qn_calico-apiserver(9ed40b4c-f347-4ecd-b45b-59769ec7e382)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76df78f95-gh9qn_calico-apiserver(9ed40b4c-f347-4ecd-b45b-59769ec7e382)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76df78f95-gh9qn" podUID="9ed40b4c-f347-4ecd-b45b-59769ec7e382" Aug 13 00:20:22.531711 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c-shm.mount: Deactivated successfully. 
Aug 13 00:20:22.533279 containerd[2016]: time="2025-08-13T00:20:22.532120955Z" level=error msg="encountered an error cleaning up failed sandbox \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.533279 containerd[2016]: time="2025-08-13T00:20:22.532208291Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zw9d2,Uid:58a4d2bd-5ce6-4a65-a83a-e16060349add,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.532552 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6-shm.mount: Deactivated successfully. Aug 13 00:20:22.532731 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e-shm.mount: Deactivated successfully. Aug 13 00:20:22.544101 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94-shm.mount: Deactivated successfully. Aug 13 00:20:22.544744 containerd[2016]: time="2025-08-13T00:20:22.541907111Z" level=error msg="encountered an error cleaning up failed sandbox \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.544991 kubelet[3353]: E0813 00:20:22.544885 3353 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.544991 kubelet[3353]: E0813 00:20:22.544969 3353 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zw9d2" Aug 13 00:20:22.545258 kubelet[3353]: E0813 00:20:22.545003 3353 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zw9d2" Aug 13 00:20:22.545258 kubelet[3353]: E0813 00:20:22.545100 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-zw9d2_calico-system(58a4d2bd-5ce6-4a65-a83a-e16060349add)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zw9d2_calico-system(58a4d2bd-5ce6-4a65-a83a-e16060349add)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zw9d2" podUID="58a4d2bd-5ce6-4a65-a83a-e16060349add" Aug 13 00:20:22.549821 containerd[2016]: time="2025-08-13T00:20:22.548754503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76df78f95-t54cs,Uid:b69d49e6-4f1c-4a7b-bc02-821b576330fe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.550975 kubelet[3353]: E0813 00:20:22.550669 3353 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.550975 kubelet[3353]: E0813 00:20:22.550752 3353 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76df78f95-t54cs" Aug 13 00:20:22.550975 kubelet[3353]: E0813 00:20:22.550787 3353 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76df78f95-t54cs" Aug 13 00:20:22.551247 kubelet[3353]: E0813 00:20:22.550874 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76df78f95-t54cs_calico-apiserver(b69d49e6-4f1c-4a7b-bc02-821b576330fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76df78f95-t54cs_calico-apiserver(b69d49e6-4f1c-4a7b-bc02-821b576330fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76df78f95-t54cs" podUID="b69d49e6-4f1c-4a7b-bc02-821b576330fe" Aug 13 00:20:22.552507 containerd[2016]: 
time="2025-08-13T00:20:22.551973191Z" level=error msg="encountered an error cleaning up failed sandbox \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.552507 containerd[2016]: time="2025-08-13T00:20:22.552332711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tbpmv,Uid:3196cb5b-591a-4704-a9d8-fe628e63a24b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.553394 kubelet[3353]: E0813 00:20:22.553137 3353 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.553394 kubelet[3353]: E0813 00:20:22.553214 3353 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tbpmv" Aug 13 00:20:22.553394 kubelet[3353]: E0813 00:20:22.553247 3353 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tbpmv" Aug 13 00:20:22.553674 kubelet[3353]: E0813 00:20:22.553317 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tbpmv_kube-system(3196cb5b-591a-4704-a9d8-fe628e63a24b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tbpmv_kube-system(3196cb5b-591a-4704-a9d8-fe628e63a24b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tbpmv" podUID="3196cb5b-591a-4704-a9d8-fe628e63a24b" Aug 13 00:20:22.561245 containerd[2016]: time="2025-08-13T00:20:22.560128367Z" level=error msg="Failed to destroy network for sandbox \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Aug 13 00:20:22.562105 containerd[2016]: time="2025-08-13T00:20:22.561891108Z" level=error msg="encountered an error cleaning up failed sandbox \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.562359 containerd[2016]: time="2025-08-13T00:20:22.562066572Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xkhnd,Uid:7a5dfad8-b698-48ca-a507-a2288b46e2e3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.563425 kubelet[3353]: E0813 00:20:22.562908 3353 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.563425 kubelet[3353]: E0813 00:20:22.563230 3353 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-xkhnd" Aug 13 00:20:22.563425 kubelet[3353]: E0813 00:20:22.563264 3353 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-xkhnd" Aug 13 00:20:22.563774 kubelet[3353]: E0813 00:20:22.563350 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-xkhnd_calico-system(7a5dfad8-b698-48ca-a507-a2288b46e2e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-xkhnd_calico-system(7a5dfad8-b698-48ca-a507-a2288b46e2e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-xkhnd" podUID="7a5dfad8-b698-48ca-a507-a2288b46e2e3" Aug 13 00:20:22.568695 containerd[2016]: time="2025-08-13T00:20:22.568596216Z" level=error msg="Failed to destroy network for sandbox \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.569528 containerd[2016]: time="2025-08-13T00:20:22.569387304Z" level=error msg="encountered an error cleaning up failed sandbox \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.569783 containerd[2016]: time="2025-08-13T00:20:22.569565384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7c697d5669-s2sn4,Uid:dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.570378 kubelet[3353]: E0813 00:20:22.570120 3353 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:22.570378 kubelet[3353]: E0813 00:20:22.570199 3353 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c697d5669-s2sn4" Aug 13 00:20:22.570378 kubelet[3353]: E0813 00:20:22.570233 3353 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7c697d5669-s2sn4" Aug 13 00:20:22.570619 kubelet[3353]: E0813 00:20:22.570304 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7c697d5669-s2sn4_calico-system(dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7c697d5669-s2sn4_calico-system(dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c697d5669-s2sn4" podUID="dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550" Aug 13 00:20:23.187427 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89-shm.mount: Deactivated successfully. 
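Below, kubelet reacts to the failed sandboxes with `StopPodSandbox` and "Ensure that sandbox ... in task-service has been cleanup successfully" calls for each orphaned sandbox ID. A sketch of that cleanup expressed directly against the CRI API, listing not-ready sandboxes and stopping them (same assumed socket wiring as the earlier sketch):

```go
// Sketch: enumerate not-ready sandboxes and ask the runtime to stop them,
// the CRI shape of the StopPodSandbox calls kubelet issues below.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed socket
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Find sandboxes that never became ready, like the ones named below.
	list, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{
		Filter: &runtimeapi.PodSandboxFilter{
			State: &runtimeapi.PodSandboxStateValue{State: runtimeapi.PodSandboxState_SANDBOX_NOTREADY},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range list.Items {
		// Stopping re-enters the CNI delete path, so it fails with the same
		// /var/lib/calico/nodename stat error until calico/node is running.
		if _, err := client.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sb.Id}); err != nil {
			log.Printf("stop %s: %v", sb.Id, err)
		}
	}
}
```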
Aug 13 00:20:23.188187 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b-shm.mount: Deactivated successfully. Aug 13 00:20:23.233613 kubelet[3353]: I0813 00:20:23.233560 3353 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Aug 13 00:20:23.235259 containerd[2016]: time="2025-08-13T00:20:23.234719183Z" level=info msg="StopPodSandbox for \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\"" Aug 13 00:20:23.237320 containerd[2016]: time="2025-08-13T00:20:23.235831499Z" level=info msg="Ensure that sandbox 89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89 in task-service has been cleanup successfully" Aug 13 00:20:23.239174 kubelet[3353]: I0813 00:20:23.239107 3353 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Aug 13 00:20:23.242537 containerd[2016]: time="2025-08-13T00:20:23.242083751Z" level=info msg="StopPodSandbox for \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\"" Aug 13 00:20:23.244667 containerd[2016]: time="2025-08-13T00:20:23.244021751Z" level=info msg="Ensure that sandbox 532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6 in task-service has been cleanup successfully" Aug 13 00:20:23.247295 kubelet[3353]: I0813 00:20:23.247243 3353 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Aug 13 00:20:23.252045 containerd[2016]: time="2025-08-13T00:20:23.251980175Z" level=info msg="StopPodSandbox for \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\"" Aug 13 00:20:23.252930 containerd[2016]: time="2025-08-13T00:20:23.252764999Z" level=info msg="Ensure that sandbox a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994 in task-service has been cleanup successfully" Aug 13 00:20:23.257315 kubelet[3353]: I0813 00:20:23.257217 3353 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Aug 13 00:20:23.262764 containerd[2016]: time="2025-08-13T00:20:23.262675535Z" level=info msg="StopPodSandbox for \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\"" Aug 13 00:20:23.264386 containerd[2016]: time="2025-08-13T00:20:23.264318851Z" level=info msg="Ensure that sandbox 305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e in task-service has been cleanup successfully" Aug 13 00:20:23.267321 kubelet[3353]: I0813 00:20:23.267050 3353 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Aug 13 00:20:23.272751 containerd[2016]: time="2025-08-13T00:20:23.272071907Z" level=info msg="StopPodSandbox for \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\"" Aug 13 00:20:23.272751 containerd[2016]: time="2025-08-13T00:20:23.272351279Z" level=info msg="Ensure that sandbox e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94 in task-service has been cleanup successfully" Aug 13 00:20:23.278469 kubelet[3353]: I0813 00:20:23.278421 3353 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Aug 13 
00:20:23.282564 containerd[2016]: time="2025-08-13T00:20:23.282479423Z" level=info msg="StopPodSandbox for \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\"" Aug 13 00:20:23.284933 containerd[2016]: time="2025-08-13T00:20:23.284566655Z" level=info msg="Ensure that sandbox b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b in task-service has been cleanup successfully" Aug 13 00:20:23.290427 kubelet[3353]: I0813 00:20:23.290377 3353 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Aug 13 00:20:23.293055 containerd[2016]: time="2025-08-13T00:20:23.292730435Z" level=info msg="StopPodSandbox for \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\"" Aug 13 00:20:23.297973 containerd[2016]: time="2025-08-13T00:20:23.297368879Z" level=info msg="Ensure that sandbox 64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c in task-service has been cleanup successfully" Aug 13 00:20:23.300181 kubelet[3353]: I0813 00:20:23.300023 3353 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Aug 13 00:20:23.304255 containerd[2016]: time="2025-08-13T00:20:23.302469299Z" level=info msg="StopPodSandbox for \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\"" Aug 13 00:20:23.304255 containerd[2016]: time="2025-08-13T00:20:23.302785595Z" level=info msg="Ensure that sandbox 17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c in task-service has been cleanup successfully" Aug 13 00:20:23.460498 containerd[2016]: time="2025-08-13T00:20:23.460247808Z" level=error msg="StopPodSandbox for \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\" failed" error="failed to destroy network for sandbox \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:23.461819 kubelet[3353]: E0813 00:20:23.460737 3353 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Aug 13 00:20:23.461819 kubelet[3353]: E0813 00:20:23.460818 3353 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6"} Aug 13 00:20:23.461819 kubelet[3353]: E0813 00:20:23.460899 3353 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9ed40b4c-f347-4ecd-b45b-59769ec7e382\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:20:23.461819 kubelet[3353]: E0813 00:20:23.460940 3353 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9ed40b4c-f347-4ecd-b45b-59769ec7e382\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76df78f95-gh9qn" podUID="9ed40b4c-f347-4ecd-b45b-59769ec7e382" Aug 13 00:20:23.471964 containerd[2016]: time="2025-08-13T00:20:23.471479352Z" level=error msg="StopPodSandbox for \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\" failed" error="failed to destroy network for sandbox \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:23.472972 kubelet[3353]: E0813 00:20:23.472701 3353 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Aug 13 00:20:23.472972 kubelet[3353]: E0813 00:20:23.472792 3353 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89"} Aug 13 00:20:23.472972 kubelet[3353]: E0813 00:20:23.472847 3353 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a5dfad8-b698-48ca-a507-a2288b46e2e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:20:23.472972 kubelet[3353]: E0813 00:20:23.472887 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a5dfad8-b698-48ca-a507-a2288b46e2e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-xkhnd" podUID="7a5dfad8-b698-48ca-a507-a2288b46e2e3" Aug 13 00:20:23.503535 containerd[2016]: time="2025-08-13T00:20:23.502920912Z" level=error msg="StopPodSandbox for \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\" failed" error="failed to destroy network for sandbox \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:23.503758 kubelet[3353]: E0813 
00:20:23.503237 3353 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Aug 13 00:20:23.503758 kubelet[3353]: E0813 00:20:23.503306 3353 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e"} Aug 13 00:20:23.503758 kubelet[3353]: E0813 00:20:23.503359 3353 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3196cb5b-591a-4704-a9d8-fe628e63a24b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:20:23.503758 kubelet[3353]: E0813 00:20:23.503400 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3196cb5b-591a-4704-a9d8-fe628e63a24b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tbpmv" podUID="3196cb5b-591a-4704-a9d8-fe628e63a24b" Aug 13 00:20:23.509669 containerd[2016]: time="2025-08-13T00:20:23.509237184Z" level=error msg="StopPodSandbox for \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\" failed" error="failed to destroy network for sandbox \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:23.510074 kubelet[3353]: E0813 00:20:23.509572 3353 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Aug 13 00:20:23.510074 kubelet[3353]: E0813 00:20:23.509711 3353 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994"} Aug 13 00:20:23.511707 kubelet[3353]: E0813 00:20:23.510141 3353 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"899e91ee-239a-47a8-b50b-26e16e9ebb04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:20:23.511707 kubelet[3353]: E0813 00:20:23.510184 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"899e91ee-239a-47a8-b50b-26e16e9ebb04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-z8tcq" podUID="899e91ee-239a-47a8-b50b-26e16e9ebb04" Aug 13 00:20:23.519021 containerd[2016]: time="2025-08-13T00:20:23.518949744Z" level=error msg="StopPodSandbox for \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\" failed" error="failed to destroy network for sandbox \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:23.519657 kubelet[3353]: E0813 00:20:23.519476 3353 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Aug 13 00:20:23.519766 kubelet[3353]: E0813 00:20:23.519687 3353 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c"} Aug 13 00:20:23.519766 kubelet[3353]: E0813 00:20:23.519744 3353 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b69d49e6-4f1c-4a7b-bc02-821b576330fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:20:23.520077 kubelet[3353]: E0813 00:20:23.519935 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b69d49e6-4f1c-4a7b-bc02-821b576330fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76df78f95-t54cs" podUID="b69d49e6-4f1c-4a7b-bc02-821b576330fe" Aug 13 00:20:23.546656 containerd[2016]: time="2025-08-13T00:20:23.546574260Z" level=error msg="StopPodSandbox for \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\" failed" error="failed to destroy network for sandbox 
\"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:23.547473 kubelet[3353]: E0813 00:20:23.547393 3353 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Aug 13 00:20:23.547606 kubelet[3353]: E0813 00:20:23.547476 3353 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94"} Aug 13 00:20:23.547606 kubelet[3353]: E0813 00:20:23.547530 3353 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58a4d2bd-5ce6-4a65-a83a-e16060349add\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:20:23.547606 kubelet[3353]: E0813 00:20:23.547570 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58a4d2bd-5ce6-4a65-a83a-e16060349add\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zw9d2" podUID="58a4d2bd-5ce6-4a65-a83a-e16060349add" Aug 13 00:20:23.552261 containerd[2016]: time="2025-08-13T00:20:23.551214192Z" level=error msg="StopPodSandbox for \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\" failed" error="failed to destroy network for sandbox \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:23.552517 kubelet[3353]: E0813 00:20:23.551522 3353 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Aug 13 00:20:23.552517 kubelet[3353]: E0813 00:20:23.551592 3353 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b"} Aug 13 00:20:23.552517 kubelet[3353]: E0813 00:20:23.551743 3353 kuberuntime_manager.go:1161] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:20:23.552517 kubelet[3353]: E0813 00:20:23.551822 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7c697d5669-s2sn4" podUID="dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550" Aug 13 00:20:23.553613 containerd[2016]: time="2025-08-13T00:20:23.552777408Z" level=error msg="StopPodSandbox for \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\" failed" error="failed to destroy network for sandbox \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:20:23.553786 kubelet[3353]: E0813 00:20:23.553385 3353 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Aug 13 00:20:23.553786 kubelet[3353]: E0813 00:20:23.553447 3353 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c"} Aug 13 00:20:23.553786 kubelet[3353]: E0813 00:20:23.553501 3353 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d0079dcf-b4da-4691-a20c-b4a4e5395937\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:20:23.553786 kubelet[3353]: E0813 00:20:23.553545 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d0079dcf-b4da-4691-a20c-b4a4e5395937\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cfdf54966-lrl8t" 
podUID="d0079dcf-b4da-4691-a20c-b4a4e5395937" Aug 13 00:20:28.579720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3316971998.mount: Deactivated successfully. Aug 13 00:20:28.656248 containerd[2016]: time="2025-08-13T00:20:28.655932978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:28.658298 containerd[2016]: time="2025-08-13T00:20:28.658221978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 13 00:20:28.660829 containerd[2016]: time="2025-08-13T00:20:28.660752922Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:28.665826 containerd[2016]: time="2025-08-13T00:20:28.665716050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:28.667419 containerd[2016]: time="2025-08-13T00:20:28.667213518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 6.409746884s" Aug 13 00:20:28.667419 containerd[2016]: time="2025-08-13T00:20:28.667285914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 13 00:20:28.713423 containerd[2016]: time="2025-08-13T00:20:28.713336754Z" level=info msg="CreateContainer within sandbox \"fba95ec12704329877f3c7e03e0adf4033f38d68e5d05faf79f3e0fcfd6e2c38\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:20:28.754335 containerd[2016]: time="2025-08-13T00:20:28.754277646Z" level=info msg="CreateContainer within sandbox \"fba95ec12704329877f3c7e03e0adf4033f38d68e5d05faf79f3e0fcfd6e2c38\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"571aa8f3407a865914165262b0a9f032a8fbab89999f846aefca568c7d1233de\"" Aug 13 00:20:28.755931 containerd[2016]: time="2025-08-13T00:20:28.755801418Z" level=info msg="StartContainer for \"571aa8f3407a865914165262b0a9f032a8fbab89999f846aefca568c7d1233de\"" Aug 13 00:20:28.809963 systemd[1]: Started cri-containerd-571aa8f3407a865914165262b0a9f032a8fbab89999f846aefca568c7d1233de.scope - libcontainer container 571aa8f3407a865914165262b0a9f032a8fbab89999f846aefca568c7d1233de. Aug 13 00:20:28.869953 containerd[2016]: time="2025-08-13T00:20:28.869778427Z" level=info msg="StartContainer for \"571aa8f3407a865914165262b0a9f032a8fbab89999f846aefca568c7d1233de\" returns successfully" Aug 13 00:20:29.123341 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:20:29.123587 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 13 00:20:29.317537 containerd[2016]: time="2025-08-13T00:20:29.317472869Z" level=info msg="StopPodSandbox for \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\"" Aug 13 00:20:29.533187 kubelet[3353]: I0813 00:20:29.533093 3353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8hwtq" podStartSLOduration=1.7759466320000001 podStartE2EDuration="17.533064042s" podCreationTimestamp="2025-08-13 00:20:12 +0000 UTC" firstStartedPulling="2025-08-13 00:20:12.912048412 +0000 UTC m=+29.267238207" lastFinishedPulling="2025-08-13 00:20:28.669165834 +0000 UTC m=+45.024355617" observedRunningTime="2025-08-13 00:20:29.436583166 +0000 UTC m=+45.791772985" watchObservedRunningTime="2025-08-13 00:20:29.533064042 +0000 UTC m=+45.888253885" Aug 13 00:20:29.696787 containerd[2016]: 2025-08-13 00:20:29.539 [INFO][4594] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Aug 13 00:20:29.696787 containerd[2016]: 2025-08-13 00:20:29.541 [INFO][4594] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" iface="eth0" netns="/var/run/netns/cni-8c29bb2a-2b63-2364-28b6-903c71b997b0" Aug 13 00:20:29.696787 containerd[2016]: 2025-08-13 00:20:29.542 [INFO][4594] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" iface="eth0" netns="/var/run/netns/cni-8c29bb2a-2b63-2364-28b6-903c71b997b0" Aug 13 00:20:29.696787 containerd[2016]: 2025-08-13 00:20:29.545 [INFO][4594] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" iface="eth0" netns="/var/run/netns/cni-8c29bb2a-2b63-2364-28b6-903c71b997b0" Aug 13 00:20:29.696787 containerd[2016]: 2025-08-13 00:20:29.545 [INFO][4594] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Aug 13 00:20:29.696787 containerd[2016]: 2025-08-13 00:20:29.545 [INFO][4594] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Aug 13 00:20:29.696787 containerd[2016]: 2025-08-13 00:20:29.661 [INFO][4622] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" HandleID="k8s-pod-network.b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Workload="ip--172--31--18--147-k8s-whisker--7c697d5669--s2sn4-eth0" Aug 13 00:20:29.696787 containerd[2016]: 2025-08-13 00:20:29.661 [INFO][4622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:29.696787 containerd[2016]: 2025-08-13 00:20:29.661 [INFO][4622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:29.696787 containerd[2016]: 2025-08-13 00:20:29.679 [WARNING][4622] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" HandleID="k8s-pod-network.b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Workload="ip--172--31--18--147-k8s-whisker--7c697d5669--s2sn4-eth0" Aug 13 00:20:29.696787 containerd[2016]: 2025-08-13 00:20:29.679 [INFO][4622] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" HandleID="k8s-pod-network.b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Workload="ip--172--31--18--147-k8s-whisker--7c697d5669--s2sn4-eth0" Aug 13 00:20:29.696787 containerd[2016]: 2025-08-13 00:20:29.683 [INFO][4622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:29.696787 containerd[2016]: 2025-08-13 00:20:29.689 [INFO][4594] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Aug 13 00:20:29.702070 containerd[2016]: time="2025-08-13T00:20:29.701751787Z" level=info msg="TearDown network for sandbox \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\" successfully" Aug 13 00:20:29.702070 containerd[2016]: time="2025-08-13T00:20:29.701872567Z" level=info msg="StopPodSandbox for \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\" returns successfully" Aug 13 00:20:29.707082 systemd[1]: run-netns-cni\x2d8c29bb2a\x2d2b63\x2d2364\x2d28b6\x2d903c71b997b0.mount: Deactivated successfully. Aug 13 00:20:29.782948 kubelet[3353]: I0813 00:20:29.782890 3353 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550-whisker-ca-bundle\") pod \"dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550\" (UID: \"dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550\") " Aug 13 00:20:29.782948 kubelet[3353]: I0813 00:20:29.782986 3353 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550-whisker-backend-key-pair\") pod \"dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550\" (UID: \"dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550\") " Aug 13 00:20:29.783265 kubelet[3353]: I0813 00:20:29.783043 3353 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9ss4\" (UniqueName: \"kubernetes.io/projected/dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550-kube-api-access-c9ss4\") pod \"dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550\" (UID: \"dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550\") " Aug 13 00:20:29.784251 kubelet[3353]: I0813 00:20:29.784180 3353 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550" (UID: "dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 00:20:29.793347 kubelet[3353]: I0813 00:20:29.790400 3353 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550-kube-api-access-c9ss4" (OuterVolumeSpecName: "kube-api-access-c9ss4") pod "dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550" (UID: "dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550"). InnerVolumeSpecName "kube-api-access-c9ss4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 00:20:29.799778 systemd[1]: var-lib-kubelet-pods-dd1d7e2d\x2ddae3\x2d4cab\x2d8465\x2ddfe5cfe7f550-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc9ss4.mount: Deactivated successfully. Aug 13 00:20:29.802040 kubelet[3353]: I0813 00:20:29.799969 3353 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550" (UID: "dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 00:20:29.808436 systemd[1]: var-lib-kubelet-pods-dd1d7e2d\x2ddae3\x2d4cab\x2d8465\x2ddfe5cfe7f550-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:20:29.884020 kubelet[3353]: I0813 00:20:29.883948 3353 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550-whisker-ca-bundle\") on node \"ip-172-31-18-147\" DevicePath \"\"" Aug 13 00:20:29.884020 kubelet[3353]: I0813 00:20:29.884014 3353 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550-whisker-backend-key-pair\") on node \"ip-172-31-18-147\" DevicePath \"\"" Aug 13 00:20:29.884238 kubelet[3353]: I0813 00:20:29.884041 3353 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c9ss4\" (UniqueName: \"kubernetes.io/projected/dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550-kube-api-access-c9ss4\") on node \"ip-172-31-18-147\" DevicePath \"\"" Aug 13 00:20:29.997894 systemd[1]: Removed slice kubepods-besteffort-poddd1d7e2d_dae3_4cab_8465_dfe5cfe7f550.slice - libcontainer container kubepods-besteffort-poddd1d7e2d_dae3_4cab_8465_dfe5cfe7f550.slice. Aug 13 00:20:30.482107 systemd[1]: Created slice kubepods-besteffort-pode64f00c4_d793_463d_aa2e_acb34c2c1e0d.slice - libcontainer container kubepods-besteffort-pode64f00c4_d793_463d_aa2e_acb34c2c1e0d.slice. 
Aug 13 00:20:30.497257 kubelet[3353]: I0813 00:20:30.497184 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e64f00c4-d793-463d-aa2e-acb34c2c1e0d-whisker-backend-key-pair\") pod \"whisker-866f5cf9c9-z24mw\" (UID: \"e64f00c4-d793-463d-aa2e-acb34c2c1e0d\") " pod="calico-system/whisker-866f5cf9c9-z24mw" Aug 13 00:20:30.497257 kubelet[3353]: I0813 00:20:30.497261 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e64f00c4-d793-463d-aa2e-acb34c2c1e0d-whisker-ca-bundle\") pod \"whisker-866f5cf9c9-z24mw\" (UID: \"e64f00c4-d793-463d-aa2e-acb34c2c1e0d\") " pod="calico-system/whisker-866f5cf9c9-z24mw" Aug 13 00:20:30.497484 kubelet[3353]: I0813 00:20:30.497304 3353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jdk4\" (UniqueName: \"kubernetes.io/projected/e64f00c4-d793-463d-aa2e-acb34c2c1e0d-kube-api-access-9jdk4\") pod \"whisker-866f5cf9c9-z24mw\" (UID: \"e64f00c4-d793-463d-aa2e-acb34c2c1e0d\") " pod="calico-system/whisker-866f5cf9c9-z24mw" Aug 13 00:20:30.577005 systemd[1]: run-containerd-runc-k8s.io-571aa8f3407a865914165262b0a9f032a8fbab89999f846aefca568c7d1233de-runc.79KNa8.mount: Deactivated successfully. Aug 13 00:20:30.791143 containerd[2016]: time="2025-08-13T00:20:30.790943192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-866f5cf9c9-z24mw,Uid:e64f00c4-d793-463d-aa2e-acb34c2c1e0d,Namespace:calico-system,Attempt:0,}" Aug 13 00:20:31.035091 (udev-worker)[4573]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:20:31.040909 systemd-networkd[1928]: cali20a622d7907: Link UP Aug 13 00:20:31.042597 systemd-networkd[1928]: cali20a622d7907: Gained carrier Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:30.866 [INFO][4666] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:30.889 [INFO][4666] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--147-k8s-whisker--866f5cf9c9--z24mw-eth0 whisker-866f5cf9c9- calico-system e64f00c4-d793-463d-aa2e-acb34c2c1e0d 949 0 2025-08-13 00:20:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:866f5cf9c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-147 whisker-866f5cf9c9-z24mw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali20a622d7907 [] [] }} ContainerID="0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" Namespace="calico-system" Pod="whisker-866f5cf9c9-z24mw" WorkloadEndpoint="ip--172--31--18--147-k8s-whisker--866f5cf9c9--z24mw-" Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:30.889 [INFO][4666] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" Namespace="calico-system" Pod="whisker-866f5cf9c9-z24mw" WorkloadEndpoint="ip--172--31--18--147-k8s-whisker--866f5cf9c9--z24mw-eth0" Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:30.940 [INFO][4678] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" 
HandleID="k8s-pod-network.0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" Workload="ip--172--31--18--147-k8s-whisker--866f5cf9c9--z24mw-eth0" Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:30.940 [INFO][4678] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" HandleID="k8s-pod-network.0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" Workload="ip--172--31--18--147-k8s-whisker--866f5cf9c9--z24mw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-147", "pod":"whisker-866f5cf9c9-z24mw", "timestamp":"2025-08-13 00:20:30.940377969 +0000 UTC"}, Hostname:"ip-172-31-18-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:30.940 [INFO][4678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:30.940 [INFO][4678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:30.940 [INFO][4678] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-147' Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:30.957 [INFO][4678] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" host="ip-172-31-18-147" Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:30.967 [INFO][4678] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-147" Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:30.978 [INFO][4678] ipam/ipam.go 511: Trying affinity for 192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:30.986 [INFO][4678] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:30.991 [INFO][4678] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:30.991 [INFO][4678] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" host="ip-172-31-18-147" Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:30.995 [INFO][4678] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:31.005 [INFO][4678] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" host="ip-172-31-18-147" Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:31.016 [INFO][4678] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.83.129/26] block=192.168.83.128/26 handle="k8s-pod-network.0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" host="ip-172-31-18-147" Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:31.016 [INFO][4678] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.129/26] 
handle="k8s-pod-network.0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" host="ip-172-31-18-147" Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:31.016 [INFO][4678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:31.079962 containerd[2016]: 2025-08-13 00:20:31.016 [INFO][4678] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.129/26] IPv6=[] ContainerID="0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" HandleID="k8s-pod-network.0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" Workload="ip--172--31--18--147-k8s-whisker--866f5cf9c9--z24mw-eth0" Aug 13 00:20:31.081168 containerd[2016]: 2025-08-13 00:20:31.021 [INFO][4666] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" Namespace="calico-system" Pod="whisker-866f5cf9c9-z24mw" WorkloadEndpoint="ip--172--31--18--147-k8s-whisker--866f5cf9c9--z24mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-whisker--866f5cf9c9--z24mw-eth0", GenerateName:"whisker-866f5cf9c9-", Namespace:"calico-system", SelfLink:"", UID:"e64f00c4-d793-463d-aa2e-acb34c2c1e0d", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"866f5cf9c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"", Pod:"whisker-866f5cf9c9-z24mw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.83.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali20a622d7907", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:31.081168 containerd[2016]: 2025-08-13 00:20:31.022 [INFO][4666] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.129/32] ContainerID="0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" Namespace="calico-system" Pod="whisker-866f5cf9c9-z24mw" WorkloadEndpoint="ip--172--31--18--147-k8s-whisker--866f5cf9c9--z24mw-eth0" Aug 13 00:20:31.081168 containerd[2016]: 2025-08-13 00:20:31.022 [INFO][4666] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20a622d7907 ContainerID="0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" Namespace="calico-system" Pod="whisker-866f5cf9c9-z24mw" WorkloadEndpoint="ip--172--31--18--147-k8s-whisker--866f5cf9c9--z24mw-eth0" Aug 13 00:20:31.081168 containerd[2016]: 2025-08-13 00:20:31.044 [INFO][4666] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" Namespace="calico-system" Pod="whisker-866f5cf9c9-z24mw" WorkloadEndpoint="ip--172--31--18--147-k8s-whisker--866f5cf9c9--z24mw-eth0" Aug 13 00:20:31.081168 containerd[2016]: 2025-08-13 00:20:31.046 [INFO][4666] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" Namespace="calico-system" Pod="whisker-866f5cf9c9-z24mw" WorkloadEndpoint="ip--172--31--18--147-k8s-whisker--866f5cf9c9--z24mw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-whisker--866f5cf9c9--z24mw-eth0", GenerateName:"whisker-866f5cf9c9-", Namespace:"calico-system", SelfLink:"", UID:"e64f00c4-d793-463d-aa2e-acb34c2c1e0d", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"866f5cf9c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb", Pod:"whisker-866f5cf9c9-z24mw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.83.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali20a622d7907", MAC:"42:88:b7:07:50:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:31.081168 containerd[2016]: 2025-08-13 00:20:31.073 [INFO][4666] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb" Namespace="calico-system" Pod="whisker-866f5cf9c9-z24mw" WorkloadEndpoint="ip--172--31--18--147-k8s-whisker--866f5cf9c9--z24mw-eth0" Aug 13 00:20:31.139755 containerd[2016]: time="2025-08-13T00:20:31.139537758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:31.139933 containerd[2016]: time="2025-08-13T00:20:31.139800450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:31.139992 containerd[2016]: time="2025-08-13T00:20:31.139832382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:31.139992 containerd[2016]: time="2025-08-13T00:20:31.140210766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:31.207946 systemd[1]: Started cri-containerd-0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb.scope - libcontainer container 0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb. 
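The IPAM walk above claims 192.168.83.129 for the whisker pod out of the host-affine block 192.168.83.128/26 (a later allocation in this log hands out .130 from the same block). A /26 holds 2^(32-26) = 64 addresses, .128 through .191. A standard-library sketch of that arithmetic; it is not Calico's IPAM code:

// blockmath.go - verifies the block arithmetic from the IPAM log entries.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.83.128/26")
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses\n", block, size) // 64

	// Walk to the last address of the block.
	first := block.Addr()
	last := first
	for i := 0; i < size-1; i++ {
		last = last.Next()
	}
	fmt.Printf("range: %s - %s\n", first, last) // .128 - .191

	// The two addresses assigned in this log both fall inside the block.
	for _, a := range []string{"192.168.83.129", "192.168.83.130"} {
		fmt.Printf("%s in block: %v\n", a, block.Contains(netip.MustParseAddr(a)))
	}
}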
Aug 13 00:20:31.366568 containerd[2016]: time="2025-08-13T00:20:31.366295711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-866f5cf9c9-z24mw,Uid:e64f00c4-d793-463d-aa2e-acb34c2c1e0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb\"" Aug 13 00:20:31.374503 containerd[2016]: time="2025-08-13T00:20:31.374434939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:20:31.954678 kernel: bpftool[4853]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 00:20:31.989677 kubelet[3353]: I0813 00:20:31.989594 3353 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550" path="/var/lib/kubelet/pods/dd1d7e2d-dae3-4cab-8465-dfe5cfe7f550/volumes" Aug 13 00:20:32.292547 systemd-networkd[1928]: vxlan.calico: Link UP Aug 13 00:20:32.292564 systemd-networkd[1928]: vxlan.calico: Gained carrier Aug 13 00:20:32.338346 (udev-worker)[4574]: Network interface NamePolicy= disabled on kernel command line. Aug 13 00:20:32.815477 systemd-networkd[1928]: cali20a622d7907: Gained IPv6LL Aug 13 00:20:33.173710 containerd[2016]: time="2025-08-13T00:20:33.173515700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:33.176430 containerd[2016]: time="2025-08-13T00:20:33.176345012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 13 00:20:33.181139 containerd[2016]: time="2025-08-13T00:20:33.181026500Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:33.215470 containerd[2016]: time="2025-08-13T00:20:33.215395868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:33.218700 containerd[2016]: time="2025-08-13T00:20:33.218138744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.843363173s" Aug 13 00:20:33.218700 containerd[2016]: time="2025-08-13T00:20:33.218204216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 13 00:20:33.229838 containerd[2016]: time="2025-08-13T00:20:33.229767344Z" level=info msg="CreateContainer within sandbox \"0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:20:33.264603 containerd[2016]: time="2025-08-13T00:20:33.264525693Z" level=info msg="CreateContainer within sandbox \"0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ddd0c2fc71e2fd09d7267738f600398516c4bcdfd1fb2d5ce55b68e60adede36\"" Aug 13 00:20:33.265722 containerd[2016]: time="2025-08-13T00:20:33.265624809Z" level=info msg="StartContainer for 
\"ddd0c2fc71e2fd09d7267738f600398516c4bcdfd1fb2d5ce55b68e60adede36\"" Aug 13 00:20:33.332967 systemd[1]: Started cri-containerd-ddd0c2fc71e2fd09d7267738f600398516c4bcdfd1fb2d5ce55b68e60adede36.scope - libcontainer container ddd0c2fc71e2fd09d7267738f600398516c4bcdfd1fb2d5ce55b68e60adede36. Aug 13 00:20:33.402784 containerd[2016]: time="2025-08-13T00:20:33.402611157Z" level=info msg="StartContainer for \"ddd0c2fc71e2fd09d7267738f600398516c4bcdfd1fb2d5ce55b68e60adede36\" returns successfully" Aug 13 00:20:33.409134 containerd[2016]: time="2025-08-13T00:20:33.408593133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:20:33.983880 containerd[2016]: time="2025-08-13T00:20:33.983808948Z" level=info msg="StopPodSandbox for \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\"" Aug 13 00:20:34.139178 containerd[2016]: 2025-08-13 00:20:34.077 [INFO][4978] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Aug 13 00:20:34.139178 containerd[2016]: 2025-08-13 00:20:34.077 [INFO][4978] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" iface="eth0" netns="/var/run/netns/cni-c98b9dd5-627c-0f9f-bd51-467723fc5bee" Aug 13 00:20:34.139178 containerd[2016]: 2025-08-13 00:20:34.078 [INFO][4978] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" iface="eth0" netns="/var/run/netns/cni-c98b9dd5-627c-0f9f-bd51-467723fc5bee" Aug 13 00:20:34.139178 containerd[2016]: 2025-08-13 00:20:34.078 [INFO][4978] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" iface="eth0" netns="/var/run/netns/cni-c98b9dd5-627c-0f9f-bd51-467723fc5bee" Aug 13 00:20:34.139178 containerd[2016]: 2025-08-13 00:20:34.078 [INFO][4978] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Aug 13 00:20:34.139178 containerd[2016]: 2025-08-13 00:20:34.078 [INFO][4978] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Aug 13 00:20:34.139178 containerd[2016]: 2025-08-13 00:20:34.115 [INFO][4985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" HandleID="k8s-pod-network.305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" Aug 13 00:20:34.139178 containerd[2016]: 2025-08-13 00:20:34.116 [INFO][4985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:34.139178 containerd[2016]: 2025-08-13 00:20:34.116 [INFO][4985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:34.139178 containerd[2016]: 2025-08-13 00:20:34.129 [WARNING][4985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" HandleID="k8s-pod-network.305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" Aug 13 00:20:34.139178 containerd[2016]: 2025-08-13 00:20:34.129 [INFO][4985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" HandleID="k8s-pod-network.305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" Aug 13 00:20:34.139178 containerd[2016]: 2025-08-13 00:20:34.132 [INFO][4985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:34.139178 containerd[2016]: 2025-08-13 00:20:34.134 [INFO][4978] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Aug 13 00:20:34.142084 containerd[2016]: time="2025-08-13T00:20:34.141831321Z" level=info msg="TearDown network for sandbox \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\" successfully" Aug 13 00:20:34.142084 containerd[2016]: time="2025-08-13T00:20:34.141883509Z" level=info msg="StopPodSandbox for \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\" returns successfully" Aug 13 00:20:34.146243 containerd[2016]: time="2025-08-13T00:20:34.146179041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tbpmv,Uid:3196cb5b-591a-4704-a9d8-fe628e63a24b,Namespace:kube-system,Attempt:1,}" Aug 13 00:20:34.146532 systemd[1]: run-netns-cni\x2dc98b9dd5\x2d627c\x2d0f9f\x2dbd51\x2d467723fc5bee.mount: Deactivated successfully. Aug 13 00:20:34.159402 systemd-networkd[1928]: vxlan.calico: Gained IPv6LL Aug 13 00:20:34.425202 systemd-networkd[1928]: cali67440b301f0: Link UP Aug 13 00:20:34.428486 systemd-networkd[1928]: cali67440b301f0: Gained carrier Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.253 [INFO][4992] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0 coredns-674b8bbfcf- kube-system 3196cb5b-591a-4704-a9d8-fe628e63a24b 965 0 2025-08-13 00:19:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-147 coredns-674b8bbfcf-tbpmv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali67440b301f0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-tbpmv" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-" Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.254 [INFO][4992] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-tbpmv" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.329 [INFO][5003] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" 
HandleID="k8s-pod-network.59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.330 [INFO][5003] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" HandleID="k8s-pod-network.59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d31f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-147", "pod":"coredns-674b8bbfcf-tbpmv", "timestamp":"2025-08-13 00:20:34.329966266 +0000 UTC"}, Hostname:"ip-172-31-18-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.330 [INFO][5003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.330 [INFO][5003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.330 [INFO][5003] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-147' Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.348 [INFO][5003] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" host="ip-172-31-18-147" Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.360 [INFO][5003] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-147" Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.376 [INFO][5003] ipam/ipam.go 511: Trying affinity for 192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.385 [INFO][5003] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.390 [INFO][5003] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.390 [INFO][5003] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" host="ip-172-31-18-147" Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.393 [INFO][5003] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.401 [INFO][5003] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" host="ip-172-31-18-147" Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.415 [INFO][5003] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.83.130/26] block=192.168.83.128/26 handle="k8s-pod-network.59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" host="ip-172-31-18-147" Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.415 [INFO][5003] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.130/26] 
handle="k8s-pod-network.59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" host="ip-172-31-18-147" Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.415 [INFO][5003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:34.463091 containerd[2016]: 2025-08-13 00:20:34.416 [INFO][5003] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.130/26] IPv6=[] ContainerID="59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" HandleID="k8s-pod-network.59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" Aug 13 00:20:34.464914 containerd[2016]: 2025-08-13 00:20:34.419 [INFO][4992] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-tbpmv" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3196cb5b-591a-4704-a9d8-fe628e63a24b", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"", Pod:"coredns-674b8bbfcf-tbpmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67440b301f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:34.464914 containerd[2016]: 2025-08-13 00:20:34.419 [INFO][4992] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.130/32] ContainerID="59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-tbpmv" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" Aug 13 00:20:34.464914 containerd[2016]: 2025-08-13 00:20:34.419 [INFO][4992] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67440b301f0 ContainerID="59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-tbpmv" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" Aug 13 00:20:34.464914 containerd[2016]: 2025-08-13 
00:20:34.429 [INFO][4992] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-tbpmv" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" Aug 13 00:20:34.464914 containerd[2016]: 2025-08-13 00:20:34.431 [INFO][4992] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-tbpmv" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3196cb5b-591a-4704-a9d8-fe628e63a24b", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a", Pod:"coredns-674b8bbfcf-tbpmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67440b301f0", MAC:"a6:19:21:69:c1:ee", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:34.464914 containerd[2016]: 2025-08-13 00:20:34.457 [INFO][4992] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a" Namespace="kube-system" Pod="coredns-674b8bbfcf-tbpmv" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" Aug 13 00:20:34.503723 containerd[2016]: time="2025-08-13T00:20:34.503273099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:34.503723 containerd[2016]: time="2025-08-13T00:20:34.503383667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:34.503723 containerd[2016]: time="2025-08-13T00:20:34.503439143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:34.503723 containerd[2016]: time="2025-08-13T00:20:34.503604515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:34.552972 systemd[1]: Started cri-containerd-59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a.scope - libcontainer container 59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a. Aug 13 00:20:34.620978 containerd[2016]: time="2025-08-13T00:20:34.620875079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tbpmv,Uid:3196cb5b-591a-4704-a9d8-fe628e63a24b,Namespace:kube-system,Attempt:1,} returns sandbox id \"59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a\"" Aug 13 00:20:34.634759 containerd[2016]: time="2025-08-13T00:20:34.634700759Z" level=info msg="CreateContainer within sandbox \"59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:20:34.673786 containerd[2016]: time="2025-08-13T00:20:34.672724080Z" level=info msg="CreateContainer within sandbox \"59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"722db2a2b70249718a8e355803ce6be4cc0fd170bc59c2d79c6aa9ea1662b10e\"" Aug 13 00:20:34.680885 containerd[2016]: time="2025-08-13T00:20:34.676121316Z" level=info msg="StartContainer for \"722db2a2b70249718a8e355803ce6be4cc0fd170bc59c2d79c6aa9ea1662b10e\"" Aug 13 00:20:34.739975 systemd[1]: Started cri-containerd-722db2a2b70249718a8e355803ce6be4cc0fd170bc59c2d79c6aa9ea1662b10e.scope - libcontainer container 722db2a2b70249718a8e355803ce6be4cc0fd170bc59c2d79c6aa9ea1662b10e. Aug 13 00:20:34.806309 containerd[2016]: time="2025-08-13T00:20:34.806235372Z" level=info msg="StartContainer for \"722db2a2b70249718a8e355803ce6be4cc0fd170bc59c2d79c6aa9ea1662b10e\" returns successfully" Aug 13 00:20:34.985866 containerd[2016]: time="2025-08-13T00:20:34.985795585Z" level=info msg="StopPodSandbox for \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\"" Aug 13 00:20:35.230348 containerd[2016]: 2025-08-13 00:20:35.116 [INFO][5101] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Aug 13 00:20:35.230348 containerd[2016]: 2025-08-13 00:20:35.116 [INFO][5101] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" iface="eth0" netns="/var/run/netns/cni-e8175e1a-a91c-e863-129e-8971b9ecd765" Aug 13 00:20:35.230348 containerd[2016]: 2025-08-13 00:20:35.117 [INFO][5101] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" iface="eth0" netns="/var/run/netns/cni-e8175e1a-a91c-e863-129e-8971b9ecd765" Aug 13 00:20:35.230348 containerd[2016]: 2025-08-13 00:20:35.118 [INFO][5101] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" iface="eth0" netns="/var/run/netns/cni-e8175e1a-a91c-e863-129e-8971b9ecd765" Aug 13 00:20:35.230348 containerd[2016]: 2025-08-13 00:20:35.118 [INFO][5101] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Aug 13 00:20:35.230348 containerd[2016]: 2025-08-13 00:20:35.118 [INFO][5101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Aug 13 00:20:35.230348 containerd[2016]: 2025-08-13 00:20:35.198 [INFO][5111] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" HandleID="k8s-pod-network.64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" Aug 13 00:20:35.230348 containerd[2016]: 2025-08-13 00:20:35.198 [INFO][5111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:35.230348 containerd[2016]: 2025-08-13 00:20:35.198 [INFO][5111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:35.230348 containerd[2016]: 2025-08-13 00:20:35.217 [WARNING][5111] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" HandleID="k8s-pod-network.64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" Aug 13 00:20:35.230348 containerd[2016]: 2025-08-13 00:20:35.218 [INFO][5111] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" HandleID="k8s-pod-network.64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" Aug 13 00:20:35.230348 containerd[2016]: 2025-08-13 00:20:35.221 [INFO][5111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:35.230348 containerd[2016]: 2025-08-13 00:20:35.226 [INFO][5101] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Aug 13 00:20:35.232412 containerd[2016]: time="2025-08-13T00:20:35.230928070Z" level=info msg="TearDown network for sandbox \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\" successfully" Aug 13 00:20:35.232412 containerd[2016]: time="2025-08-13T00:20:35.230969854Z" level=info msg="StopPodSandbox for \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\" returns successfully" Aug 13 00:20:35.248772 containerd[2016]: time="2025-08-13T00:20:35.247373951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cfdf54966-lrl8t,Uid:d0079dcf-b4da-4691-a20c-b4a4e5395937,Namespace:calico-system,Attempt:1,}" Aug 13 00:20:35.261274 systemd[1]: run-netns-cni\x2de8175e1a\x2da91c\x2de863\x2d129e\x2d8971b9ecd765.mount: Deactivated successfully. 
Aug 13 00:20:35.581217 systemd-networkd[1928]: cali0ea23f8ed33: Link UP Aug 13 00:20:35.585283 systemd-networkd[1928]: cali0ea23f8ed33: Gained carrier Aug 13 00:20:35.611971 kubelet[3353]: I0813 00:20:35.611552 3353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tbpmv" podStartSLOduration=48.611513808 podStartE2EDuration="48.611513808s" podCreationTimestamp="2025-08-13 00:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:20:35.450410316 +0000 UTC m=+51.805600123" watchObservedRunningTime="2025-08-13 00:20:35.611513808 +0000 UTC m=+51.966703603" Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.395 [INFO][5117] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0 calico-kube-controllers-cfdf54966- calico-system d0079dcf-b4da-4691-a20c-b4a4e5395937 977 0 2025-08-13 00:20:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cfdf54966 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-147 calico-kube-controllers-cfdf54966-lrl8t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0ea23f8ed33 [] [] }} ContainerID="37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" Namespace="calico-system" Pod="calico-kube-controllers-cfdf54966-lrl8t" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-" Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.395 [INFO][5117] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" Namespace="calico-system" Pod="calico-kube-controllers-cfdf54966-lrl8t" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.474 [INFO][5131] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" HandleID="k8s-pod-network.37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.475 [INFO][5131] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" HandleID="k8s-pod-network.37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003880a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-147", "pod":"calico-kube-controllers-cfdf54966-lrl8t", "timestamp":"2025-08-13 00:20:35.469290624 +0000 UTC"}, Hostname:"ip-172-31-18-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.476 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.476 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.476 [INFO][5131] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-147' Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.507 [INFO][5131] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" host="ip-172-31-18-147" Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.519 [INFO][5131] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-147" Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.528 [INFO][5131] ipam/ipam.go 511: Trying affinity for 192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.534 [INFO][5131] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.543 [INFO][5131] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.543 [INFO][5131] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" host="ip-172-31-18-147" Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.551 [INFO][5131] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1 Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.558 [INFO][5131] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" host="ip-172-31-18-147" Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.572 [INFO][5131] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.83.131/26] block=192.168.83.128/26 handle="k8s-pod-network.37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" host="ip-172-31-18-147" Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.572 [INFO][5131] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.131/26] handle="k8s-pod-network.37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" host="ip-172-31-18-147" Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.572 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
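The bracketed lines containerd relays here follow a fixed Calico layout: "<date> <time> [LEVEL][pid] file.go line: message". A small Python sketch of a parser for that layout; the regex and field names are inferred from the lines above, not a published format:

    import re

    CALICO_RE = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
        r"\[(?P<level>[A-Z]+)\]\[(?P<pid>\d+)\] "
        r"(?P<src>\S+) (?P<line>\d+): (?P<msg>.*)"
    )

    sample = ("2025-08-13 00:20:35.572 [INFO][5131] ipam/ipam.go 1256: "
              "Successfully claimed IPs: [192.168.83.131/26]")
    m = CALICO_RE.match(sample)
    print(m.group("level"), m.group("src"), m.group("msg"))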
Aug 13 00:20:35.617154 containerd[2016]: 2025-08-13 00:20:35.572 [INFO][5131] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.131/26] IPv6=[] ContainerID="37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" HandleID="k8s-pod-network.37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" Aug 13 00:20:35.621627 containerd[2016]: 2025-08-13 00:20:35.576 [INFO][5117] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" Namespace="calico-system" Pod="calico-kube-controllers-cfdf54966-lrl8t" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0", GenerateName:"calico-kube-controllers-cfdf54966-", Namespace:"calico-system", SelfLink:"", UID:"d0079dcf-b4da-4691-a20c-b4a4e5395937", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cfdf54966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"", Pod:"calico-kube-controllers-cfdf54966-lrl8t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0ea23f8ed33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:35.621627 containerd[2016]: 2025-08-13 00:20:35.576 [INFO][5117] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.131/32] ContainerID="37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" Namespace="calico-system" Pod="calico-kube-controllers-cfdf54966-lrl8t" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" Aug 13 00:20:35.621627 containerd[2016]: 2025-08-13 00:20:35.576 [INFO][5117] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ea23f8ed33 ContainerID="37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" Namespace="calico-system" Pod="calico-kube-controllers-cfdf54966-lrl8t" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" Aug 13 00:20:35.621627 containerd[2016]: 2025-08-13 00:20:35.586 [INFO][5117] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" Namespace="calico-system" Pod="calico-kube-controllers-cfdf54966-lrl8t" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" Aug 13 00:20:35.621627 containerd[2016]: 2025-08-13 
00:20:35.588 [INFO][5117] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" Namespace="calico-system" Pod="calico-kube-controllers-cfdf54966-lrl8t" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0", GenerateName:"calico-kube-controllers-cfdf54966-", Namespace:"calico-system", SelfLink:"", UID:"d0079dcf-b4da-4691-a20c-b4a4e5395937", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cfdf54966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1", Pod:"calico-kube-controllers-cfdf54966-lrl8t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0ea23f8ed33", MAC:"ba:06:ff:9f:91:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:35.621627 containerd[2016]: 2025-08-13 00:20:35.611 [INFO][5117] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1" Namespace="calico-system" Pod="calico-kube-controllers-cfdf54966-lrl8t" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" Aug 13 00:20:35.656901 containerd[2016]: time="2025-08-13T00:20:35.656697769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:35.656901 containerd[2016]: time="2025-08-13T00:20:35.656801449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:35.656901 containerd[2016]: time="2025-08-13T00:20:35.656840101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:35.659404 containerd[2016]: time="2025-08-13T00:20:35.658870729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:35.713239 systemd[1]: run-containerd-runc-k8s.io-37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1-runc.duoWxK.mount: Deactivated successfully. 
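Each "Populated endpoint" / "Added Mac, interface name..." message serializes the full v3.WorkloadEndpoint, but only a handful of fields are usually interesting when reading a log like this. A hedged sketch that pulls them out of such a dump (endpoint_summary is a hypothetical helper; the field spellings are taken verbatim from the dumps above):

    import re

    def endpoint_summary(dump: str) -> dict:
        # Pod name, pod IP, host-side veth, and MAC as they appear in the dump.
        fields = {
            "pod": r'Pod:"([^"]+)"',
            "ip": r'IPNetworks:\[\]string\{"([^"]+)"\}',
            "iface": r'InterfaceName:"([^"]+)"',
            "mac": r'MAC:"([^"]*)"',
        }
        return {k: (m.group(1) if (m := re.search(p, dump)) else None)
                for k, p in fields.items()}

    dump = ('Spec:v3.WorkloadEndpointSpec{Pod:"calico-kube-controllers-cfdf54966-lrl8t", '
            'IPNetworks:[]string{"192.168.83.131/32"}, '
            'InterfaceName:"cali0ea23f8ed33", MAC:"ba:06:ff:9f:91:2f"}')
    print(endpoint_summary(dump))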
Aug 13 00:20:35.730995 systemd[1]: Started cri-containerd-37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1.scope - libcontainer container 37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1. Aug 13 00:20:35.760018 systemd-networkd[1928]: cali67440b301f0: Gained IPv6LL Aug 13 00:20:35.802910 containerd[2016]: time="2025-08-13T00:20:35.802741417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cfdf54966-lrl8t,Uid:d0079dcf-b4da-4691-a20c-b4a4e5395937,Namespace:calico-system,Attempt:1,} returns sandbox id \"37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1\"" Aug 13 00:20:35.987322 containerd[2016]: time="2025-08-13T00:20:35.985879682Z" level=info msg="StopPodSandbox for \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\"" Aug 13 00:20:36.242162 containerd[2016]: 2025-08-13 00:20:36.127 [INFO][5204] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Aug 13 00:20:36.242162 containerd[2016]: 2025-08-13 00:20:36.128 [INFO][5204] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" iface="eth0" netns="/var/run/netns/cni-2a14e5df-6c7e-0b7c-29fe-a0b7ebf2a565" Aug 13 00:20:36.242162 containerd[2016]: 2025-08-13 00:20:36.130 [INFO][5204] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" iface="eth0" netns="/var/run/netns/cni-2a14e5df-6c7e-0b7c-29fe-a0b7ebf2a565" Aug 13 00:20:36.242162 containerd[2016]: 2025-08-13 00:20:36.133 [INFO][5204] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" iface="eth0" netns="/var/run/netns/cni-2a14e5df-6c7e-0b7c-29fe-a0b7ebf2a565" Aug 13 00:20:36.242162 containerd[2016]: 2025-08-13 00:20:36.134 [INFO][5204] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Aug 13 00:20:36.242162 containerd[2016]: 2025-08-13 00:20:36.134 [INFO][5204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Aug 13 00:20:36.242162 containerd[2016]: 2025-08-13 00:20:36.206 [INFO][5212] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" HandleID="k8s-pod-network.89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Workload="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" Aug 13 00:20:36.242162 containerd[2016]: 2025-08-13 00:20:36.207 [INFO][5212] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:36.242162 containerd[2016]: 2025-08-13 00:20:36.207 [INFO][5212] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:36.242162 containerd[2016]: 2025-08-13 00:20:36.230 [WARNING][5212] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" HandleID="k8s-pod-network.89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Workload="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" Aug 13 00:20:36.242162 containerd[2016]: 2025-08-13 00:20:36.230 [INFO][5212] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" HandleID="k8s-pod-network.89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Workload="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" Aug 13 00:20:36.242162 containerd[2016]: 2025-08-13 00:20:36.233 [INFO][5212] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:36.242162 containerd[2016]: 2025-08-13 00:20:36.239 [INFO][5204] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Aug 13 00:20:36.243910 containerd[2016]: time="2025-08-13T00:20:36.243177527Z" level=info msg="TearDown network for sandbox \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\" successfully" Aug 13 00:20:36.243910 containerd[2016]: time="2025-08-13T00:20:36.243248351Z" level=info msg="StopPodSandbox for \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\" returns successfully" Aug 13 00:20:36.245365 containerd[2016]: time="2025-08-13T00:20:36.244995659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xkhnd,Uid:7a5dfad8-b698-48ca-a507-a2288b46e2e3,Namespace:calico-system,Attempt:1,}" Aug 13 00:20:36.262305 systemd[1]: run-netns-cni\x2d2a14e5df\x2d6c7e\x2d0b7c\x2d29fe\x2da0b7ebf2a565.mount: Deactivated successfully. Aug 13 00:20:36.774447 systemd-networkd[1928]: calib6ba1bbdcb2: Link UP Aug 13 00:20:36.778851 systemd-networkd[1928]: calib6ba1bbdcb2: Gained carrier Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.409 [INFO][5219] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0 goldmane-768f4c5c69- calico-system 7a5dfad8-b698-48ca-a507-a2288b46e2e3 986 0 2025-08-13 00:20:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-18-147 goldmane-768f4c5c69-xkhnd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib6ba1bbdcb2 [] [] }} ContainerID="65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" Namespace="calico-system" Pod="goldmane-768f4c5c69-xkhnd" WorkloadEndpoint="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-" Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.409 [INFO][5219] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" Namespace="calico-system" Pod="goldmane-768f4c5c69-xkhnd" WorkloadEndpoint="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.630 [INFO][5232] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" HandleID="k8s-pod-network.65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" 
Workload="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.630 [INFO][5232] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" HandleID="k8s-pod-network.65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" Workload="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003139a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-147", "pod":"goldmane-768f4c5c69-xkhnd", "timestamp":"2025-08-13 00:20:36.628492693 +0000 UTC"}, Hostname:"ip-172-31-18-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.630 [INFO][5232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.630 [INFO][5232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.631 [INFO][5232] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-147' Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.660 [INFO][5232] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" host="ip-172-31-18-147" Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.673 [INFO][5232] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-147" Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.687 [INFO][5232] ipam/ipam.go 511: Trying affinity for 192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.693 [INFO][5232] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.701 [INFO][5232] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.701 [INFO][5232] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" host="ip-172-31-18-147" Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.708 [INFO][5232] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8 Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.724 [INFO][5232] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" host="ip-172-31-18-147" Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.744 [INFO][5232] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.83.132/26] block=192.168.83.128/26 handle="k8s-pod-network.65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" host="ip-172-31-18-147" Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.745 [INFO][5232] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.132/26] handle="k8s-pod-network.65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" 
host="ip-172-31-18-147" Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.745 [INFO][5232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:36.842137 containerd[2016]: 2025-08-13 00:20:36.745 [INFO][5232] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.132/26] IPv6=[] ContainerID="65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" HandleID="k8s-pod-network.65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" Workload="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" Aug 13 00:20:36.843911 containerd[2016]: 2025-08-13 00:20:36.759 [INFO][5219] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" Namespace="calico-system" Pod="goldmane-768f4c5c69-xkhnd" WorkloadEndpoint="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"7a5dfad8-b698-48ca-a507-a2288b46e2e3", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"", Pod:"goldmane-768f4c5c69-xkhnd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.83.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib6ba1bbdcb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:36.843911 containerd[2016]: 2025-08-13 00:20:36.762 [INFO][5219] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.132/32] ContainerID="65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" Namespace="calico-system" Pod="goldmane-768f4c5c69-xkhnd" WorkloadEndpoint="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" Aug 13 00:20:36.843911 containerd[2016]: 2025-08-13 00:20:36.762 [INFO][5219] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6ba1bbdcb2 ContainerID="65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" Namespace="calico-system" Pod="goldmane-768f4c5c69-xkhnd" WorkloadEndpoint="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" Aug 13 00:20:36.843911 containerd[2016]: 2025-08-13 00:20:36.786 [INFO][5219] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" Namespace="calico-system" Pod="goldmane-768f4c5c69-xkhnd" WorkloadEndpoint="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" Aug 13 00:20:36.843911 containerd[2016]: 2025-08-13 00:20:36.790 [INFO][5219] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" Namespace="calico-system" Pod="goldmane-768f4c5c69-xkhnd" WorkloadEndpoint="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"7a5dfad8-b698-48ca-a507-a2288b46e2e3", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8", Pod:"goldmane-768f4c5c69-xkhnd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.83.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib6ba1bbdcb2", MAC:"96:b7:03:2e:76:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:36.843911 containerd[2016]: 2025-08-13 00:20:36.831 [INFO][5219] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8" Namespace="calico-system" Pod="goldmane-768f4c5c69-xkhnd" WorkloadEndpoint="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" Aug 13 00:20:36.936768 containerd[2016]: time="2025-08-13T00:20:36.933984735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:36.936768 containerd[2016]: time="2025-08-13T00:20:36.934079535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:36.936768 containerd[2016]: time="2025-08-13T00:20:36.934116771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:36.936768 containerd[2016]: time="2025-08-13T00:20:36.934263219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:36.988887 containerd[2016]: time="2025-08-13T00:20:36.988829655Z" level=info msg="StopPodSandbox for \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\"" Aug 13 00:20:37.071819 systemd[1]: Started cri-containerd-65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8.scope - libcontainer container 65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8. Aug 13 00:20:37.262804 systemd[1]: run-containerd-runc-k8s.io-65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8-runc.z80Q4f.mount: Deactivated successfully. 
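The pull durations containerd reports (e.g. "in 4.287234154s" for whisker-backend, below) can be sanity-checked against the PullImage/Pulled wall-clock timestamps in the log itself; the two differ slightly because containerd measures the pull internally. A minimal sketch, trimming containerd's nanosecond timestamps to the microseconds Python's datetime accepts:

    from datetime import datetime

    def parse(ts: str) -> datetime:
        # containerd prints nanoseconds; %f parses at most six digits,
        # so keep only the first 26 characters (date + time + microseconds).
        return datetime.strptime(ts[:26], "%Y-%m-%dT%H:%M:%S.%f")

    pull_started = parse("2025-08-13T00:20:33.408593133Z")  # PullImage whisker-backend
    pulled       = parse("2025-08-13T00:20:37.696009147Z")  # Pulled ... returns image reference
    print((pulled - pull_started).total_seconds())  # ~4.2874s vs the logged 4.287234154s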
Aug 13 00:20:37.444016 containerd[2016]: 2025-08-13 00:20:37.242 [INFO][5285] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Aug 13 00:20:37.444016 containerd[2016]: 2025-08-13 00:20:37.243 [INFO][5285] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" iface="eth0" netns="/var/run/netns/cni-8f53e427-8293-0fa4-65e4-73b28e9836c9" Aug 13 00:20:37.444016 containerd[2016]: 2025-08-13 00:20:37.245 [INFO][5285] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" iface="eth0" netns="/var/run/netns/cni-8f53e427-8293-0fa4-65e4-73b28e9836c9" Aug 13 00:20:37.444016 containerd[2016]: 2025-08-13 00:20:37.248 [INFO][5285] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" iface="eth0" netns="/var/run/netns/cni-8f53e427-8293-0fa4-65e4-73b28e9836c9" Aug 13 00:20:37.444016 containerd[2016]: 2025-08-13 00:20:37.250 [INFO][5285] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Aug 13 00:20:37.444016 containerd[2016]: 2025-08-13 00:20:37.251 [INFO][5285] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Aug 13 00:20:37.444016 containerd[2016]: 2025-08-13 00:20:37.384 [INFO][5301] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" HandleID="k8s-pod-network.e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Workload="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" Aug 13 00:20:37.444016 containerd[2016]: 2025-08-13 00:20:37.386 [INFO][5301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:37.444016 containerd[2016]: 2025-08-13 00:20:37.387 [INFO][5301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:37.444016 containerd[2016]: 2025-08-13 00:20:37.415 [WARNING][5301] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" HandleID="k8s-pod-network.e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Workload="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" Aug 13 00:20:37.444016 containerd[2016]: 2025-08-13 00:20:37.416 [INFO][5301] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" HandleID="k8s-pod-network.e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Workload="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" Aug 13 00:20:37.444016 containerd[2016]: 2025-08-13 00:20:37.428 [INFO][5301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:37.444016 containerd[2016]: 2025-08-13 00:20:37.436 [INFO][5285] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Aug 13 00:20:37.447439 containerd[2016]: time="2025-08-13T00:20:37.444458437Z" level=info msg="TearDown network for sandbox \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\" successfully" Aug 13 00:20:37.447439 containerd[2016]: time="2025-08-13T00:20:37.444499297Z" level=info msg="StopPodSandbox for \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\" returns successfully" Aug 13 00:20:37.447439 containerd[2016]: time="2025-08-13T00:20:37.446956129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zw9d2,Uid:58a4d2bd-5ce6-4a65-a83a-e16060349add,Namespace:calico-system,Attempt:1,}" Aug 13 00:20:37.454904 systemd[1]: run-netns-cni\x2d8f53e427\x2d8293\x2d0fa4\x2d65e4\x2d73b28e9836c9.mount: Deactivated successfully. Aug 13 00:20:37.612136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4239664228.mount: Deactivated successfully. Aug 13 00:20:37.618760 systemd-networkd[1928]: cali0ea23f8ed33: Gained IPv6LL Aug 13 00:20:37.659273 containerd[2016]: time="2025-08-13T00:20:37.659161538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:37.667663 containerd[2016]: time="2025-08-13T00:20:37.666711219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 13 00:20:37.675909 containerd[2016]: time="2025-08-13T00:20:37.675840495Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:37.688197 containerd[2016]: time="2025-08-13T00:20:37.688035051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:37.696842 containerd[2016]: time="2025-08-13T00:20:37.696009147Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 4.287234154s" Aug 13 00:20:37.697808 containerd[2016]: time="2025-08-13T00:20:37.697047255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 13 00:20:37.704158 containerd[2016]: time="2025-08-13T00:20:37.704024571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:20:37.716363 containerd[2016]: time="2025-08-13T00:20:37.716307975Z" level=info msg="CreateContainer within sandbox \"0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:20:37.723074 containerd[2016]: time="2025-08-13T00:20:37.722848347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-xkhnd,Uid:7a5dfad8-b698-48ca-a507-a2288b46e2e3,Namespace:calico-system,Attempt:1,} returns sandbox id \"65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8\"" Aug 13 00:20:37.762971 
containerd[2016]: time="2025-08-13T00:20:37.762721743Z" level=info msg="CreateContainer within sandbox \"0fdce3179b71aba090741f89f76750a192cb61f174475d6bd2addb925a9486cb\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"40b30803128ba357e043140e0539d073d0b3915cd49c93fa1fbffe113fecd10d\"" Aug 13 00:20:37.763854 containerd[2016]: time="2025-08-13T00:20:37.763727007Z" level=info msg="StartContainer for \"40b30803128ba357e043140e0539d073d0b3915cd49c93fa1fbffe113fecd10d\"" Aug 13 00:20:37.808111 systemd-networkd[1928]: calib6ba1bbdcb2: Gained IPv6LL Aug 13 00:20:37.869053 systemd[1]: Started cri-containerd-40b30803128ba357e043140e0539d073d0b3915cd49c93fa1fbffe113fecd10d.scope - libcontainer container 40b30803128ba357e043140e0539d073d0b3915cd49c93fa1fbffe113fecd10d. Aug 13 00:20:37.932249 systemd-networkd[1928]: cali8eae0b5a5c6: Link UP Aug 13 00:20:37.936294 systemd-networkd[1928]: cali8eae0b5a5c6: Gained carrier Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.680 [INFO][5307] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0 csi-node-driver- calico-system 58a4d2bd-5ce6-4a65-a83a-e16060349add 1000 0 2025-08-13 00:20:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-147 csi-node-driver-zw9d2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8eae0b5a5c6 [] [] }} ContainerID="58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" Namespace="calico-system" Pod="csi-node-driver-zw9d2" WorkloadEndpoint="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-" Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.683 [INFO][5307] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" Namespace="calico-system" Pod="csi-node-driver-zw9d2" WorkloadEndpoint="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.809 [INFO][5329] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" HandleID="k8s-pod-network.58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" Workload="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.811 [INFO][5329] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" HandleID="k8s-pod-network.58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" Workload="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103850), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-147", "pod":"csi-node-driver-zw9d2", "timestamp":"2025-08-13 00:20:37.809610207 +0000 UTC"}, Hostname:"ip-172-31-18-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 
00:20:37.811 [INFO][5329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.811 [INFO][5329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.811 [INFO][5329] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-147' Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.835 [INFO][5329] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" host="ip-172-31-18-147" Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.851 [INFO][5329] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-147" Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.879 [INFO][5329] ipam/ipam.go 511: Trying affinity for 192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.886 [INFO][5329] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.891 [INFO][5329] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.891 [INFO][5329] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" host="ip-172-31-18-147" Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.895 [INFO][5329] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.903 [INFO][5329] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" host="ip-172-31-18-147" Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.915 [INFO][5329] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.83.133/26] block=192.168.83.128/26 handle="k8s-pod-network.58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" host="ip-172-31-18-147" Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.916 [INFO][5329] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.133/26] handle="k8s-pod-network.58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" host="ip-172-31-18-147" Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.916 [INFO][5329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
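The IPAM walk above (plugin [5329]) is Calico's block-affinity path: node ip-172-31-18-147 already holds an affinity for the block 192.168.83.128/26, so the plugin confirms the affinity, loads the block, and claims the first free address, 192.168.83.133/26, for csi-node-driver-zw9d2. Below is a minimal Go model of that "assign the first free address from the affine block" step; the in-memory store and function names are hypothetical, not Calico's actual IPAM code.

    // Minimal model of the block-affinity assignment above: claim the
    // first unallocated address in the node's affine /26. Hypothetical
    // sketch, not Calico's real IPAM implementation.
    package main

    import (
        "fmt"
        "net"
    )

    // nextIP returns ip+1 (sufficient for the IPv4 case shown here).
    func nextIP(ip net.IP) net.IP {
        out := make(net.IP, len(ip))
        copy(out, ip)
        for i := len(out) - 1; i >= 0; i-- {
            out[i]++
            if out[i] != 0 {
                break
            }
        }
        return out
    }

    // assignFromBlock mirrors "Attempting to assign 1 addresses from
    // block block=192.168.83.128/26": scan the block and claim the
    // first address not already in use.
    func assignFromBlock(cidr string, inUse map[string]bool) (net.IP, error) {
        ip, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return nil, err
        }
        for cand := ip.Mask(ipnet.Mask); ipnet.Contains(cand); cand = nextIP(cand) {
            if !inUse[cand.String()] {
                inUse[cand.String()] = true
                return cand, nil
            }
        }
        return nil, fmt.Errorf("block %s exhausted", cidr)
    }

    func main() {
        // Assume .128-.132 are already held by endpoints created
        // earlier on this node (the log only shows .133 onward).
        used := map[string]bool{
            "192.168.83.128": true, "192.168.83.129": true,
            "192.168.83.130": true, "192.168.83.131": true,
            "192.168.83.132": true,
        }
        ip, _ := assignFromBlock("192.168.83.128/26", used)
        fmt.Println("assigned:", ip) // assigned: 192.168.83.133
    }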
Aug 13 00:20:37.973277 containerd[2016]: 2025-08-13 00:20:37.916 [INFO][5329] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.133/26] IPv6=[] ContainerID="58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" HandleID="k8s-pod-network.58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" Workload="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" Aug 13 00:20:37.978460 containerd[2016]: 2025-08-13 00:20:37.921 [INFO][5307] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" Namespace="calico-system" Pod="csi-node-driver-zw9d2" WorkloadEndpoint="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58a4d2bd-5ce6-4a65-a83a-e16060349add", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"", Pod:"csi-node-driver-zw9d2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8eae0b5a5c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:37.978460 containerd[2016]: 2025-08-13 00:20:37.921 [INFO][5307] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.133/32] ContainerID="58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" Namespace="calico-system" Pod="csi-node-driver-zw9d2" WorkloadEndpoint="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" Aug 13 00:20:37.978460 containerd[2016]: 2025-08-13 00:20:37.921 [INFO][5307] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8eae0b5a5c6 ContainerID="58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" Namespace="calico-system" Pod="csi-node-driver-zw9d2" WorkloadEndpoint="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" Aug 13 00:20:37.978460 containerd[2016]: 2025-08-13 00:20:37.940 [INFO][5307] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" Namespace="calico-system" Pod="csi-node-driver-zw9d2" WorkloadEndpoint="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" Aug 13 00:20:37.978460 containerd[2016]: 2025-08-13 00:20:37.944 [INFO][5307] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" 
Namespace="calico-system" Pod="csi-node-driver-zw9d2" WorkloadEndpoint="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58a4d2bd-5ce6-4a65-a83a-e16060349add", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde", Pod:"csi-node-driver-zw9d2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8eae0b5a5c6", MAC:"5e:06:b5:1f:28:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:37.978460 containerd[2016]: 2025-08-13 00:20:37.965 [INFO][5307] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde" Namespace="calico-system" Pod="csi-node-driver-zw9d2" WorkloadEndpoint="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" Aug 13 00:20:37.995372 containerd[2016]: time="2025-08-13T00:20:37.993421300Z" level=info msg="StopPodSandbox for \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\"" Aug 13 00:20:37.998459 containerd[2016]: time="2025-08-13T00:20:37.998379856Z" level=info msg="StopPodSandbox for \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\"" Aug 13 00:20:38.109503 containerd[2016]: time="2025-08-13T00:20:38.108221533Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:38.109503 containerd[2016]: time="2025-08-13T00:20:38.108323113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:38.109503 containerd[2016]: time="2025-08-13T00:20:38.108351541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:38.109503 containerd[2016]: time="2025-08-13T00:20:38.108511333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:38.149365 containerd[2016]: time="2025-08-13T00:20:38.147110665Z" level=info msg="StartContainer for \"40b30803128ba357e043140e0539d073d0b3915cd49c93fa1fbffe113fecd10d\" returns successfully" Aug 13 00:20:38.185943 systemd[1]: Started cri-containerd-58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde.scope - libcontainer container 58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde. Aug 13 00:20:38.393981 containerd[2016]: time="2025-08-13T00:20:38.393903470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zw9d2,Uid:58a4d2bd-5ce6-4a65-a83a-e16060349add,Namespace:calico-system,Attempt:1,} returns sandbox id \"58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde\"" Aug 13 00:20:38.443291 containerd[2016]: 2025-08-13 00:20:38.234 [INFO][5397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Aug 13 00:20:38.443291 containerd[2016]: 2025-08-13 00:20:38.235 [INFO][5397] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" iface="eth0" netns="/var/run/netns/cni-370d56fa-c9cb-b1d3-c7dd-23304404a661" Aug 13 00:20:38.443291 containerd[2016]: 2025-08-13 00:20:38.236 [INFO][5397] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" iface="eth0" netns="/var/run/netns/cni-370d56fa-c9cb-b1d3-c7dd-23304404a661" Aug 13 00:20:38.443291 containerd[2016]: 2025-08-13 00:20:38.236 [INFO][5397] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" iface="eth0" netns="/var/run/netns/cni-370d56fa-c9cb-b1d3-c7dd-23304404a661" Aug 13 00:20:38.443291 containerd[2016]: 2025-08-13 00:20:38.236 [INFO][5397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Aug 13 00:20:38.443291 containerd[2016]: 2025-08-13 00:20:38.236 [INFO][5397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Aug 13 00:20:38.443291 containerd[2016]: 2025-08-13 00:20:38.391 [INFO][5453] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" HandleID="k8s-pod-network.17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" Aug 13 00:20:38.443291 containerd[2016]: 2025-08-13 00:20:38.393 [INFO][5453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:38.443291 containerd[2016]: 2025-08-13 00:20:38.393 [INFO][5453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:38.443291 containerd[2016]: 2025-08-13 00:20:38.419 [WARNING][5453] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" HandleID="k8s-pod-network.17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" Aug 13 00:20:38.443291 containerd[2016]: 2025-08-13 00:20:38.419 [INFO][5453] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" HandleID="k8s-pod-network.17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" Aug 13 00:20:38.443291 containerd[2016]: 2025-08-13 00:20:38.424 [INFO][5453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:38.443291 containerd[2016]: 2025-08-13 00:20:38.428 [INFO][5397] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Aug 13 00:20:38.453709 containerd[2016]: time="2025-08-13T00:20:38.453376142Z" level=info msg="TearDown network for sandbox \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\" successfully" Aug 13 00:20:38.453709 containerd[2016]: time="2025-08-13T00:20:38.453454322Z" level=info msg="StopPodSandbox for \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\" returns successfully" Aug 13 00:20:38.458546 containerd[2016]: time="2025-08-13T00:20:38.458492210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76df78f95-t54cs,Uid:b69d49e6-4f1c-4a7b-bc02-821b576330fe,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:20:38.459621 systemd[1]: run-netns-cni\x2d370d56fa\x2dc9cb\x2db1d3\x2dc7dd\x2d23304404a661.mount: Deactivated successfully. Aug 13 00:20:38.475712 kubelet[3353]: I0813 00:20:38.475588 3353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-866f5cf9c9-z24mw" podStartSLOduration=2.143378443 podStartE2EDuration="8.473714571s" podCreationTimestamp="2025-08-13 00:20:30 +0000 UTC" firstStartedPulling="2025-08-13 00:20:31.372479383 +0000 UTC m=+47.727669178" lastFinishedPulling="2025-08-13 00:20:37.702815511 +0000 UTC m=+54.058005306" observedRunningTime="2025-08-13 00:20:38.471736311 +0000 UTC m=+54.826926130" watchObservedRunningTime="2025-08-13 00:20:38.473714571 +0000 UTC m=+54.828904366" Aug 13 00:20:38.492915 containerd[2016]: 2025-08-13 00:20:38.291 [INFO][5398] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Aug 13 00:20:38.492915 containerd[2016]: 2025-08-13 00:20:38.291 [INFO][5398] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" iface="eth0" netns="/var/run/netns/cni-db68c59f-cdb6-dada-6e6a-1d3c6cec069d" Aug 13 00:20:38.492915 containerd[2016]: 2025-08-13 00:20:38.294 [INFO][5398] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" iface="eth0" netns="/var/run/netns/cni-db68c59f-cdb6-dada-6e6a-1d3c6cec069d" Aug 13 00:20:38.492915 containerd[2016]: 2025-08-13 00:20:38.297 [INFO][5398] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" iface="eth0" netns="/var/run/netns/cni-db68c59f-cdb6-dada-6e6a-1d3c6cec069d" Aug 13 00:20:38.492915 containerd[2016]: 2025-08-13 00:20:38.297 [INFO][5398] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Aug 13 00:20:38.492915 containerd[2016]: 2025-08-13 00:20:38.297 [INFO][5398] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Aug 13 00:20:38.492915 containerd[2016]: 2025-08-13 00:20:38.396 [INFO][5458] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" HandleID="k8s-pod-network.532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" Aug 13 00:20:38.492915 containerd[2016]: 2025-08-13 00:20:38.396 [INFO][5458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:38.492915 containerd[2016]: 2025-08-13 00:20:38.424 [INFO][5458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:38.492915 containerd[2016]: 2025-08-13 00:20:38.466 [WARNING][5458] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" HandleID="k8s-pod-network.532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" Aug 13 00:20:38.492915 containerd[2016]: 2025-08-13 00:20:38.466 [INFO][5458] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" HandleID="k8s-pod-network.532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" Aug 13 00:20:38.492915 containerd[2016]: 2025-08-13 00:20:38.471 [INFO][5458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:38.492915 containerd[2016]: 2025-08-13 00:20:38.484 [INFO][5398] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Aug 13 00:20:38.506887 systemd[1]: run-netns-cni\x2ddb68c59f\x2dcdb6\x2ddada\x2d6e6a\x2d1d3c6cec069d.mount: Deactivated successfully. 
Aug 13 00:20:38.523856 containerd[2016]: time="2025-08-13T00:20:38.523563315Z" level=info msg="TearDown network for sandbox \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\" successfully" Aug 13 00:20:38.523856 containerd[2016]: time="2025-08-13T00:20:38.523617891Z" level=info msg="StopPodSandbox for \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\" returns successfully" Aug 13 00:20:38.525694 containerd[2016]: time="2025-08-13T00:20:38.525415215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76df78f95-gh9qn,Uid:9ed40b4c-f347-4ecd-b45b-59769ec7e382,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:20:38.916495 systemd-networkd[1928]: cali467feaa9e43: Link UP Aug 13 00:20:38.916968 systemd-networkd[1928]: cali467feaa9e43: Gained carrier Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.703 [INFO][5477] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0 calico-apiserver-76df78f95- calico-apiserver b69d49e6-4f1c-4a7b-bc02-821b576330fe 1010 0 2025-08-13 00:20:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76df78f95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-147 calico-apiserver-76df78f95-t54cs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali467feaa9e43 [] [] }} ContainerID="c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" Namespace="calico-apiserver" Pod="calico-apiserver-76df78f95-t54cs" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-" Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.704 [INFO][5477] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" Namespace="calico-apiserver" Pod="calico-apiserver-76df78f95-t54cs" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.817 [INFO][5500] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" HandleID="k8s-pod-network.c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.818 [INFO][5500] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" HandleID="k8s-pod-network.c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d38e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-147", "pod":"calico-apiserver-76df78f95-t54cs", "timestamp":"2025-08-13 00:20:38.817612816 +0000 UTC"}, Hostname:"ip-172-31-18-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.820 [INFO][5500] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.820 [INFO][5500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.820 [INFO][5500] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-147' Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.846 [INFO][5500] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" host="ip-172-31-18-147" Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.857 [INFO][5500] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-147" Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.870 [INFO][5500] ipam/ipam.go 511: Trying affinity for 192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.874 [INFO][5500] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.880 [INFO][5500] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.880 [INFO][5500] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" host="ip-172-31-18-147" Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.883 [INFO][5500] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.890 [INFO][5500] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" host="ip-172-31-18-147" Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.903 [INFO][5500] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.83.134/26] block=192.168.83.128/26 handle="k8s-pod-network.c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" host="ip-172-31-18-147" Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.903 [INFO][5500] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.134/26] handle="k8s-pod-network.c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" host="ip-172-31-18-147" Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.903 [INFO][5500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
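Two CNI ADDs ([5500] for -t54cs and [5502] for -gh9qn) are in flight at once here, and each brackets its assignment with "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock". That lock serializes claims against the shared block, which is why the two pods receive distinct addresses (.134 here, then .135 below) rather than racing to the same one. A toy illustration of the ordering effect using a sync.Mutex; Calico's real lock is held across a datastore transaction, not just an in-process mutex:

    // Two concurrent ADDs serialize on a host-wide lock, so each claim
    // observes the other's result and the block never hands out
    // duplicates. In-process toy only.
    package main

    import (
        "fmt"
        "sync"
    )

    var (
        ipamLock sync.Mutex
        nextHost = 134 // first free host number in 192.168.83.128/26 at this point
    )

    func autoAssign(id string, wg *sync.WaitGroup) {
        defer wg.Done()
        ipamLock.Lock() // "About to acquire host-wide IPAM lock."
        ip := fmt.Sprintf("192.168.83.%d/26", nextHost)
        nextHost++
        ipamLock.Unlock() // "Released host-wide IPAM lock."
        fmt.Printf("[%s] assigned %s\n", id, ip)
    }

    func main() {
        var wg sync.WaitGroup
        wg.Add(2)
        go autoAssign("5500", &wg)
        go autoAssign("5502", &wg)
        wg.Wait() // order may vary, but the two IPs are always distinct
    }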
Aug 13 00:20:38.964027 containerd[2016]: 2025-08-13 00:20:38.903 [INFO][5500] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.134/26] IPv6=[] ContainerID="c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" HandleID="k8s-pod-network.c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" Aug 13 00:20:38.968463 containerd[2016]: 2025-08-13 00:20:38.907 [INFO][5477] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" Namespace="calico-apiserver" Pod="calico-apiserver-76df78f95-t54cs" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0", GenerateName:"calico-apiserver-76df78f95-", Namespace:"calico-apiserver", SelfLink:"", UID:"b69d49e6-4f1c-4a7b-bc02-821b576330fe", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76df78f95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"", Pod:"calico-apiserver-76df78f95-t54cs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali467feaa9e43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:38.968463 containerd[2016]: 2025-08-13 00:20:38.907 [INFO][5477] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.134/32] ContainerID="c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" Namespace="calico-apiserver" Pod="calico-apiserver-76df78f95-t54cs" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" Aug 13 00:20:38.968463 containerd[2016]: 2025-08-13 00:20:38.907 [INFO][5477] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali467feaa9e43 ContainerID="c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" Namespace="calico-apiserver" Pod="calico-apiserver-76df78f95-t54cs" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" Aug 13 00:20:38.968463 containerd[2016]: 2025-08-13 00:20:38.916 [INFO][5477] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" Namespace="calico-apiserver" Pod="calico-apiserver-76df78f95-t54cs" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" Aug 13 00:20:38.968463 containerd[2016]: 2025-08-13 00:20:38.918 [INFO][5477] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" Namespace="calico-apiserver" Pod="calico-apiserver-76df78f95-t54cs" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0", GenerateName:"calico-apiserver-76df78f95-", Namespace:"calico-apiserver", SelfLink:"", UID:"b69d49e6-4f1c-4a7b-bc02-821b576330fe", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76df78f95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd", Pod:"calico-apiserver-76df78f95-t54cs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali467feaa9e43", MAC:"ae:76:05:b3:a6:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:38.968463 containerd[2016]: 2025-08-13 00:20:38.955 [INFO][5477] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd" Namespace="calico-apiserver" Pod="calico-apiserver-76df78f95-t54cs" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" Aug 13 00:20:38.986464 containerd[2016]: time="2025-08-13T00:20:38.984492245Z" level=info msg="StopPodSandbox for \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\"" Aug 13 00:20:39.024679 systemd-networkd[1928]: cali8eae0b5a5c6: Gained IPv6LL Aug 13 00:20:39.099679 containerd[2016]: time="2025-08-13T00:20:39.095708414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:39.099679 containerd[2016]: time="2025-08-13T00:20:39.096200330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:39.099679 containerd[2016]: time="2025-08-13T00:20:39.096803774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:39.099679 containerd[2016]: time="2025-08-13T00:20:39.096977582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:39.128381 systemd-networkd[1928]: cali77d9de0d760: Link UP Aug 13 00:20:39.134976 systemd-networkd[1928]: cali77d9de0d760: Gained carrier Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:38.709 [INFO][5487] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0 calico-apiserver-76df78f95- calico-apiserver 9ed40b4c-f347-4ecd-b45b-59769ec7e382 1011 0 2025-08-13 00:20:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76df78f95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-147 calico-apiserver-76df78f95-gh9qn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali77d9de0d760 [] [] }} ContainerID="cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" Namespace="calico-apiserver" Pod="calico-apiserver-76df78f95-gh9qn" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-" Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:38.709 [INFO][5487] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" Namespace="calico-apiserver" Pod="calico-apiserver-76df78f95-gh9qn" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:38.852 [INFO][5502] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" HandleID="k8s-pod-network.cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:38.853 [INFO][5502] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" HandleID="k8s-pod-network.cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034e290), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-147", "pod":"calico-apiserver-76df78f95-gh9qn", "timestamp":"2025-08-13 00:20:38.852789304 +0000 UTC"}, Hostname:"ip-172-31-18-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:38.853 [INFO][5502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:38.904 [INFO][5502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:38.904 [INFO][5502] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-147' Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:38.956 [INFO][5502] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" host="ip-172-31-18-147" Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:38.973 [INFO][5502] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-147" Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:39.011 [INFO][5502] ipam/ipam.go 511: Trying affinity for 192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:39.028 [INFO][5502] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:39.036 [INFO][5502] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:39.037 [INFO][5502] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" host="ip-172-31-18-147" Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:39.045 [INFO][5502] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140 Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:39.066 [INFO][5502] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" host="ip-172-31-18-147" Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:39.095 [INFO][5502] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.83.135/26] block=192.168.83.128/26 handle="k8s-pod-network.cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" host="ip-172-31-18-147" Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:39.096 [INFO][5502] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.135/26] handle="k8s-pod-network.cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" host="ip-172-31-18-147" Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:39.096 [INFO][5502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
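With [5502]'s claim of 192.168.83.135/26, this node has now assigned .133 through .135 in quick succession, all from the same affine block. A /26 covers 64 addresses, so the block can back up to 64 workload IPs before the node would need to claim another (the requests above carry MaxBlocksPerHost:0, which appears to mean no cap). A quick capacity check:

    // The affine block 192.168.83.128/26 spans .128-.191: 64 addresses.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, ipnet, err := net.ParseCIDR("192.168.83.128/26")
        if err != nil {
            panic(err)
        }
        ones, bits := ipnet.Mask.Size()
        fmt.Printf("block %s holds %d addresses\n", ipnet, 1<<(bits-ones)) // 64
    }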
Aug 13 00:20:39.199620 containerd[2016]: 2025-08-13 00:20:39.096 [INFO][5502] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.135/26] IPv6=[] ContainerID="cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" HandleID="k8s-pod-network.cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" Aug 13 00:20:39.206387 containerd[2016]: 2025-08-13 00:20:39.111 [INFO][5487] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" Namespace="calico-apiserver" Pod="calico-apiserver-76df78f95-gh9qn" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0", GenerateName:"calico-apiserver-76df78f95-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ed40b4c-f347-4ecd-b45b-59769ec7e382", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76df78f95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"", Pod:"calico-apiserver-76df78f95-gh9qn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77d9de0d760", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:39.206387 containerd[2016]: 2025-08-13 00:20:39.112 [INFO][5487] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.135/32] ContainerID="cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" Namespace="calico-apiserver" Pod="calico-apiserver-76df78f95-gh9qn" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" Aug 13 00:20:39.206387 containerd[2016]: 2025-08-13 00:20:39.112 [INFO][5487] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77d9de0d760 ContainerID="cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" Namespace="calico-apiserver" Pod="calico-apiserver-76df78f95-gh9qn" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" Aug 13 00:20:39.206387 containerd[2016]: 2025-08-13 00:20:39.142 [INFO][5487] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" Namespace="calico-apiserver" Pod="calico-apiserver-76df78f95-gh9qn" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" Aug 13 00:20:39.206387 containerd[2016]: 2025-08-13 00:20:39.150 [INFO][5487] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" Namespace="calico-apiserver" Pod="calico-apiserver-76df78f95-gh9qn" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0", GenerateName:"calico-apiserver-76df78f95-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ed40b4c-f347-4ecd-b45b-59769ec7e382", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76df78f95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140", Pod:"calico-apiserver-76df78f95-gh9qn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77d9de0d760", MAC:"02:9c:6b:f7:a6:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:39.206387 containerd[2016]: 2025-08-13 00:20:39.183 [INFO][5487] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140" Namespace="calico-apiserver" Pod="calico-apiserver-76df78f95-gh9qn" WorkloadEndpoint="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" Aug 13 00:20:39.217002 systemd[1]: Started cri-containerd-c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd.scope - libcontainer container c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd. Aug 13 00:20:39.313181 containerd[2016]: time="2025-08-13T00:20:39.312782247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:39.313181 containerd[2016]: time="2025-08-13T00:20:39.312907659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:39.313181 containerd[2016]: time="2025-08-13T00:20:39.312939171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:39.313563 containerd[2016]: time="2025-08-13T00:20:39.313132659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:39.403347 systemd[1]: Started cri-containerd-cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140.scope - libcontainer container cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140. 
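Each "Added Mac, interface name, and active container ID" record above fills in the host-side veth (cali467feaa9e43, cali77d9de0d760) and a locally administered MAC (ae:76:05:b3:a6:41, 02:9c:6b:f7:a6:29); once the veth comes up, systemd-networkd logs "Gained IPv6LL" for its link-local address. If an interface used classic EUI-64 addressing, that link-local address would be derivable from the MAC as sketched below; modern kernels often prefer stable-privacy addresses instead, so treat this purely as an illustration of the mechanism:

    // Derive an EUI-64 IPv6 link-local address from a MAC: flip the
    // universal/local bit of the first octet and splice in ff:fe.
    package main

    import (
        "fmt"
        "net"
    )

    func eui64LinkLocal(mac net.HardwareAddr) net.IP {
        ip := make(net.IP, net.IPv6len)
        ip[0], ip[1] = 0xfe, 0x80 // fe80::/64 prefix
        ip[8] = mac[0] ^ 0x02     // flip the universal/local bit
        ip[9], ip[10] = mac[1], mac[2]
        ip[11], ip[12] = 0xff, 0xfe
        ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
        return ip
    }

    func main() {
        mac, _ := net.ParseMAC("02:9c:6b:f7:a6:29") // cali77d9de0d760, from the log
        fmt.Println(eui64LinkLocal(mac))            // fe80::9c:6bff:fef7:a629
    }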
Aug 13 00:20:39.483008 containerd[2016]: 2025-08-13 00:20:39.291 [INFO][5532] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Aug 13 00:20:39.483008 containerd[2016]: 2025-08-13 00:20:39.293 [INFO][5532] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" iface="eth0" netns="/var/run/netns/cni-85052467-0c81-3073-a689-876e5102b087" Aug 13 00:20:39.483008 containerd[2016]: 2025-08-13 00:20:39.293 [INFO][5532] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" iface="eth0" netns="/var/run/netns/cni-85052467-0c81-3073-a689-876e5102b087" Aug 13 00:20:39.483008 containerd[2016]: 2025-08-13 00:20:39.294 [INFO][5532] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" iface="eth0" netns="/var/run/netns/cni-85052467-0c81-3073-a689-876e5102b087" Aug 13 00:20:39.483008 containerd[2016]: 2025-08-13 00:20:39.295 [INFO][5532] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Aug 13 00:20:39.483008 containerd[2016]: 2025-08-13 00:20:39.295 [INFO][5532] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Aug 13 00:20:39.483008 containerd[2016]: 2025-08-13 00:20:39.436 [INFO][5595] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" HandleID="k8s-pod-network.a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" Aug 13 00:20:39.483008 containerd[2016]: 2025-08-13 00:20:39.437 [INFO][5595] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:39.483008 containerd[2016]: 2025-08-13 00:20:39.438 [INFO][5595] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:39.483008 containerd[2016]: 2025-08-13 00:20:39.465 [WARNING][5595] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" HandleID="k8s-pod-network.a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" Aug 13 00:20:39.483008 containerd[2016]: 2025-08-13 00:20:39.466 [INFO][5595] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" HandleID="k8s-pod-network.a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" Aug 13 00:20:39.483008 containerd[2016]: 2025-08-13 00:20:39.470 [INFO][5595] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:39.483008 containerd[2016]: 2025-08-13 00:20:39.475 [INFO][5532] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Aug 13 00:20:39.491066 containerd[2016]: time="2025-08-13T00:20:39.483036688Z" level=info msg="TearDown network for sandbox \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\" successfully" Aug 13 00:20:39.491066 containerd[2016]: time="2025-08-13T00:20:39.483075784Z" level=info msg="StopPodSandbox for \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\" returns successfully" Aug 13 00:20:39.491066 containerd[2016]: time="2025-08-13T00:20:39.488763964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z8tcq,Uid:899e91ee-239a-47a8-b50b-26e16e9ebb04,Namespace:kube-system,Attempt:1,}" Aug 13 00:20:39.500346 systemd[1]: run-netns-cni\x2d85052467\x2d0c81\x2d3073\x2da689\x2d876e5102b087.mount: Deactivated successfully. Aug 13 00:20:39.571524 containerd[2016]: time="2025-08-13T00:20:39.571452292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76df78f95-t54cs,Uid:b69d49e6-4f1c-4a7b-bc02-821b576330fe,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd\"" Aug 13 00:20:39.628433 containerd[2016]: time="2025-08-13T00:20:39.628268596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76df78f95-gh9qn,Uid:9ed40b4c-f347-4ecd-b45b-59769ec7e382,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140\"" Aug 13 00:20:39.885513 systemd-networkd[1928]: cali1deb6f634d9: Link UP Aug 13 00:20:39.890116 systemd-networkd[1928]: cali1deb6f634d9: Gained carrier Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.699 [INFO][5632] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0 coredns-674b8bbfcf- kube-system 899e91ee-239a-47a8-b50b-26e16e9ebb04 1028 0 2025-08-13 00:19:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-147 coredns-674b8bbfcf-z8tcq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1deb6f634d9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" Namespace="kube-system" Pod="coredns-674b8bbfcf-z8tcq" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-" Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.699 [INFO][5632] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" Namespace="kube-system" Pod="coredns-674b8bbfcf-z8tcq" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.765 [INFO][5651] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" HandleID="k8s-pod-network.487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.766 [INFO][5651] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" 
HandleID="k8s-pod-network.487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b870), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-147", "pod":"coredns-674b8bbfcf-z8tcq", "timestamp":"2025-08-13 00:20:39.765865385 +0000 UTC"}, Hostname:"ip-172-31-18-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.767 [INFO][5651] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.767 [INFO][5651] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.767 [INFO][5651] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-147' Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.788 [INFO][5651] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" host="ip-172-31-18-147" Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.802 [INFO][5651] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-147" Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.821 [INFO][5651] ipam/ipam.go 511: Trying affinity for 192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.826 [INFO][5651] ipam/ipam.go 158: Attempting to load block cidr=192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.834 [INFO][5651] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.83.128/26 host="ip-172-31-18-147" Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.835 [INFO][5651] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.83.128/26 handle="k8s-pod-network.487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" host="ip-172-31-18-147" Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.839 [INFO][5651] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.853 [INFO][5651] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.83.128/26 handle="k8s-pod-network.487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" host="ip-172-31-18-147" Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.870 [INFO][5651] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.83.136/26] block=192.168.83.128/26 handle="k8s-pod-network.487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" host="ip-172-31-18-147" Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.870 [INFO][5651] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.83.136/26] handle="k8s-pod-network.487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" host="ip-172-31-18-147" Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.870 [INFO][5651] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:20:39.938177 containerd[2016]: 2025-08-13 00:20:39.870 [INFO][5651] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.83.136/26] IPv6=[] ContainerID="487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" HandleID="k8s-pod-network.487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" Aug 13 00:20:39.939499 containerd[2016]: 2025-08-13 00:20:39.877 [INFO][5632] cni-plugin/k8s.go 418: Populated endpoint ContainerID="487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" Namespace="kube-system" Pod="coredns-674b8bbfcf-z8tcq" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"899e91ee-239a-47a8-b50b-26e16e9ebb04", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"", Pod:"coredns-674b8bbfcf-z8tcq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1deb6f634d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:39.939499 containerd[2016]: 2025-08-13 00:20:39.877 [INFO][5632] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.83.136/32] ContainerID="487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" Namespace="kube-system" Pod="coredns-674b8bbfcf-z8tcq" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" Aug 13 00:20:39.939499 containerd[2016]: 2025-08-13 00:20:39.877 [INFO][5632] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1deb6f634d9 ContainerID="487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" Namespace="kube-system" Pod="coredns-674b8bbfcf-z8tcq" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" Aug 13 00:20:39.939499 containerd[2016]: 2025-08-13 00:20:39.892 [INFO][5632] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" Namespace="kube-system" Pod="coredns-674b8bbfcf-z8tcq" 
WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" Aug 13 00:20:39.939499 containerd[2016]: 2025-08-13 00:20:39.896 [INFO][5632] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" Namespace="kube-system" Pod="coredns-674b8bbfcf-z8tcq" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"899e91ee-239a-47a8-b50b-26e16e9ebb04", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b", Pod:"coredns-674b8bbfcf-z8tcq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1deb6f634d9", MAC:"1e:57:3d:20:60:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:39.939499 containerd[2016]: 2025-08-13 00:20:39.927 [INFO][5632] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b" Namespace="kube-system" Pod="coredns-674b8bbfcf-z8tcq" WorkloadEndpoint="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" Aug 13 00:20:40.011301 containerd[2016]: time="2025-08-13T00:20:40.010991306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:20:40.011301 containerd[2016]: time="2025-08-13T00:20:40.011107838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:20:40.011301 containerd[2016]: time="2025-08-13T00:20:40.011144870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:40.017224 containerd[2016]: time="2025-08-13T00:20:40.016805150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:20:40.059044 systemd[1]: Started cri-containerd-487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b.scope - libcontainer container 487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b. Aug 13 00:20:40.185939 containerd[2016]: time="2025-08-13T00:20:40.185733075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-z8tcq,Uid:899e91ee-239a-47a8-b50b-26e16e9ebb04,Namespace:kube-system,Attempt:1,} returns sandbox id \"487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b\"" Aug 13 00:20:40.212628 containerd[2016]: time="2025-08-13T00:20:40.212114823Z" level=info msg="CreateContainer within sandbox \"487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:20:40.294505 containerd[2016]: time="2025-08-13T00:20:40.294342796Z" level=info msg="CreateContainer within sandbox \"487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b6b520c847869ac7116856eb8b02595fba0cb3c4a2dd9e406c5c5a32e41a987\"" Aug 13 00:20:40.300113 containerd[2016]: time="2025-08-13T00:20:40.298254988Z" level=info msg="StartContainer for \"4b6b520c847869ac7116856eb8b02595fba0cb3c4a2dd9e406c5c5a32e41a987\"" Aug 13 00:20:40.421793 systemd[1]: Started cri-containerd-4b6b520c847869ac7116856eb8b02595fba0cb3c4a2dd9e406c5c5a32e41a987.scope - libcontainer container 4b6b520c847869ac7116856eb8b02595fba0cb3c4a2dd9e406c5c5a32e41a987. Aug 13 00:20:40.542499 containerd[2016]: time="2025-08-13T00:20:40.541868453Z" level=info msg="StartContainer for \"4b6b520c847869ac7116856eb8b02595fba0cb3c4a2dd9e406c5c5a32e41a987\" returns successfully" Aug 13 00:20:40.687552 systemd-networkd[1928]: cali77d9de0d760: Gained IPv6LL Aug 13 00:20:40.750991 systemd-networkd[1928]: cali467feaa9e43: Gained IPv6LL Aug 13 00:20:41.007032 systemd-networkd[1928]: cali1deb6f634d9: Gained IPv6LL Aug 13 00:20:41.464656 containerd[2016]: time="2025-08-13T00:20:41.464486345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:41.466907 containerd[2016]: time="2025-08-13T00:20:41.466843529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 13 00:20:41.469855 containerd[2016]: time="2025-08-13T00:20:41.469734077Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:41.476997 containerd[2016]: time="2025-08-13T00:20:41.476909513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:41.478708 containerd[2016]: time="2025-08-13T00:20:41.478422677Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 3.774009078s" Aug 13 00:20:41.478708 containerd[2016]: 
time="2025-08-13T00:20:41.478483253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 13 00:20:41.481345 containerd[2016]: time="2025-08-13T00:20:41.481184237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:20:41.516069 containerd[2016]: time="2025-08-13T00:20:41.515885166Z" level=info msg="CreateContainer within sandbox \"37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:20:41.548221 containerd[2016]: time="2025-08-13T00:20:41.547958514Z" level=info msg="CreateContainer within sandbox \"37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"650852e6404f51febad6de4166c78504efbf595bc3592c4d9a553cccaf97e515\"" Aug 13 00:20:41.550247 containerd[2016]: time="2025-08-13T00:20:41.549907818Z" level=info msg="StartContainer for \"650852e6404f51febad6de4166c78504efbf595bc3592c4d9a553cccaf97e515\"" Aug 13 00:20:41.640127 kubelet[3353]: I0813 00:20:41.639332 3353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-z8tcq" podStartSLOduration=54.639309054 podStartE2EDuration="54.639309054s" podCreationTimestamp="2025-08-13 00:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:20:41.582002994 +0000 UTC m=+57.937192813" watchObservedRunningTime="2025-08-13 00:20:41.639309054 +0000 UTC m=+57.994498849" Aug 13 00:20:41.685910 systemd[1]: Started cri-containerd-650852e6404f51febad6de4166c78504efbf595bc3592c4d9a553cccaf97e515.scope - libcontainer container 650852e6404f51febad6de4166c78504efbf595bc3592c4d9a553cccaf97e515. 
Aug 13 00:20:41.792174 containerd[2016]: time="2025-08-13T00:20:41.791845543Z" level=info msg="StartContainer for \"650852e6404f51febad6de4166c78504efbf595bc3592c4d9a553cccaf97e515\" returns successfully"
Aug 13 00:20:42.573424 kubelet[3353]: I0813 00:20:42.573017 3353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-cfdf54966-lrl8t" podStartSLOduration=24.898161255 podStartE2EDuration="30.572862967s" podCreationTimestamp="2025-08-13 00:20:12 +0000 UTC" firstStartedPulling="2025-08-13 00:20:35.805318057 +0000 UTC m=+52.160507840" lastFinishedPulling="2025-08-13 00:20:41.480019757 +0000 UTC m=+57.835209552" observedRunningTime="2025-08-13 00:20:42.569303755 +0000 UTC m=+58.924493550" watchObservedRunningTime="2025-08-13 00:20:42.572862967 +0000 UTC m=+58.928052774"
Aug 13 00:20:43.171411 ntpd[1995]: Listen normally on 8 vxlan.calico 192.168.83.128:123
Aug 13 00:20:43.171561 ntpd[1995]: Listen normally on 9 cali20a622d7907 [fe80::ecee:eeff:feee:eeee%4]:123
Aug 13 00:20:43.171691 ntpd[1995]: Listen normally on 10 vxlan.calico [fe80::6484:64ff:fe11:e2b8%5]:123
Aug 13 00:20:43.171768 ntpd[1995]: Listen normally on 11 cali67440b301f0 [fe80::ecee:eeff:feee:eeee%8]:123
Aug 13 00:20:43.171837 ntpd[1995]: Listen normally on 12 cali0ea23f8ed33 [fe80::ecee:eeff:feee:eeee%9]:123
Aug 13 00:20:43.171906 ntpd[1995]: Listen normally on 13 calib6ba1bbdcb2 [fe80::ecee:eeff:feee:eeee%10]:123
Aug 13 00:20:43.171976 ntpd[1995]: Listen normally on 14 cali8eae0b5a5c6 [fe80::ecee:eeff:feee:eeee%11]:123
Aug 13 00:20:43.172043 ntpd[1995]: Listen normally on 15 cali467feaa9e43 [fe80::ecee:eeff:feee:eeee%12]:123
Aug 13 00:20:43.172112 ntpd[1995]: Listen normally on 16 cali77d9de0d760 [fe80::ecee:eeff:feee:eeee%13]:123
Aug 13 00:20:43.172179 ntpd[1995]: Listen normally on 17 cali1deb6f634d9 [fe80::ecee:eeff:feee:eeee%14]:123
Aug 13 00:20:43.618887 systemd[1]: Started sshd@9-172.31.18.147:22-139.178.89.65:45704.service - OpenSSH per-connection server daemon (139.178.89.65:45704).
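Annotation: the calico-kube-controllers record above appears internally consistent: podStartSLOduration looks like the end-to-end duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), and the figures agree to within rounding. A quick Go verification, parsing the timestamps in the exact format kubelet prints them:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching time.Time's default String() output, as seen in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	first, _ := time.Parse(layout, "2025-08-13 00:20:35.805318057 +0000 UTC")
	last, _ := time.Parse(layout, "2025-08-13 00:20:41.480019757 +0000 UTC")
	pull := last.Sub(first)
	e2e, _ := time.ParseDuration("30.572862967s")
	fmt.Println(pull)       // 5.6747017s: the image-pull window
	fmt.Println(e2e - pull) // 24.898161267s, matching podStartSLOduration=24.898161255 to within rounding
}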
Aug 13 00:20:43.813698 sshd[5830]: Accepted publickey for core from 139.178.89.65 port 45704 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:43.817956 sshd[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:43.830083 systemd-logind[2000]: New session 10 of user core. Aug 13 00:20:43.836984 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:20:43.881776 containerd[2016]: time="2025-08-13T00:20:43.881616729Z" level=info msg="StopPodSandbox for \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\"" Aug 13 00:20:44.235700 containerd[2016]: 2025-08-13 00:20:44.049 [WARNING][5844] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"7a5dfad8-b698-48ca-a507-a2288b46e2e3", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8", Pod:"goldmane-768f4c5c69-xkhnd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.83.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib6ba1bbdcb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:44.235700 containerd[2016]: 2025-08-13 00:20:44.055 [INFO][5844] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Aug 13 00:20:44.235700 containerd[2016]: 2025-08-13 00:20:44.055 [INFO][5844] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" iface="eth0" netns="" Aug 13 00:20:44.235700 containerd[2016]: 2025-08-13 00:20:44.056 [INFO][5844] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Aug 13 00:20:44.235700 containerd[2016]: 2025-08-13 00:20:44.056 [INFO][5844] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Aug 13 00:20:44.235700 containerd[2016]: 2025-08-13 00:20:44.189 [INFO][5861] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" HandleID="k8s-pod-network.89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Workload="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" Aug 13 00:20:44.235700 containerd[2016]: 2025-08-13 00:20:44.191 [INFO][5861] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:44.235700 containerd[2016]: 2025-08-13 00:20:44.192 [INFO][5861] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:44.235700 containerd[2016]: 2025-08-13 00:20:44.223 [WARNING][5861] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" HandleID="k8s-pod-network.89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Workload="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" Aug 13 00:20:44.235700 containerd[2016]: 2025-08-13 00:20:44.223 [INFO][5861] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" HandleID="k8s-pod-network.89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Workload="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" Aug 13 00:20:44.235700 containerd[2016]: 2025-08-13 00:20:44.225 [INFO][5861] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:44.235700 containerd[2016]: 2025-08-13 00:20:44.231 [INFO][5844] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Aug 13 00:20:44.237512 containerd[2016]: time="2025-08-13T00:20:44.235741867Z" level=info msg="TearDown network for sandbox \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\" successfully" Aug 13 00:20:44.237512 containerd[2016]: time="2025-08-13T00:20:44.235780123Z" level=info msg="StopPodSandbox for \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\" returns successfully" Aug 13 00:20:44.239301 containerd[2016]: time="2025-08-13T00:20:44.238906387Z" level=info msg="RemovePodSandbox for \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\"" Aug 13 00:20:44.239301 containerd[2016]: time="2025-08-13T00:20:44.238989991Z" level=info msg="Forcibly stopping sandbox \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\"" Aug 13 00:20:44.258596 sshd[5830]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:44.268989 systemd[1]: sshd@9-172.31.18.147:22-139.178.89.65:45704.service: Deactivated successfully. Aug 13 00:20:44.275146 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:20:44.277224 systemd-logind[2000]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:20:44.281407 systemd-logind[2000]: Removed session 10. 
Aug 13 00:20:44.388305 containerd[2016]: 2025-08-13 00:20:44.325 [WARNING][5876] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"7a5dfad8-b698-48ca-a507-a2288b46e2e3", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8", Pod:"goldmane-768f4c5c69-xkhnd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.83.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib6ba1bbdcb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:44.388305 containerd[2016]: 2025-08-13 00:20:44.325 [INFO][5876] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Aug 13 00:20:44.388305 containerd[2016]: 2025-08-13 00:20:44.325 [INFO][5876] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" iface="eth0" netns="" Aug 13 00:20:44.388305 containerd[2016]: 2025-08-13 00:20:44.325 [INFO][5876] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Aug 13 00:20:44.388305 containerd[2016]: 2025-08-13 00:20:44.325 [INFO][5876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Aug 13 00:20:44.388305 containerd[2016]: 2025-08-13 00:20:44.364 [INFO][5886] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" HandleID="k8s-pod-network.89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Workload="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" Aug 13 00:20:44.388305 containerd[2016]: 2025-08-13 00:20:44.365 [INFO][5886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:44.388305 containerd[2016]: 2025-08-13 00:20:44.365 [INFO][5886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:44.388305 containerd[2016]: 2025-08-13 00:20:44.380 [WARNING][5886] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" HandleID="k8s-pod-network.89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Workload="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" Aug 13 00:20:44.388305 containerd[2016]: 2025-08-13 00:20:44.380 [INFO][5886] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" HandleID="k8s-pod-network.89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Workload="ip--172--31--18--147-k8s-goldmane--768f4c5c69--xkhnd-eth0" Aug 13 00:20:44.388305 containerd[2016]: 2025-08-13 00:20:44.382 [INFO][5886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:44.388305 containerd[2016]: 2025-08-13 00:20:44.385 [INFO][5876] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89" Aug 13 00:20:44.389690 containerd[2016]: time="2025-08-13T00:20:44.388395320Z" level=info msg="TearDown network for sandbox \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\" successfully" Aug 13 00:20:44.401193 containerd[2016]: time="2025-08-13T00:20:44.401125352Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:20:44.401360 containerd[2016]: time="2025-08-13T00:20:44.401230796Z" level=info msg="RemovePodSandbox \"89a891f8d9823bc646219a5b44003c5ee3055c69434a1976c364485b3f0cea89\" returns successfully" Aug 13 00:20:44.403540 containerd[2016]: time="2025-08-13T00:20:44.403358288Z" level=info msg="StopPodSandbox for \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\"" Aug 13 00:20:44.561835 containerd[2016]: 2025-08-13 00:20:44.489 [WARNING][5901] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" WorkloadEndpoint="ip--172--31--18--147-k8s-whisker--7c697d5669--s2sn4-eth0" Aug 13 00:20:44.561835 containerd[2016]: 2025-08-13 00:20:44.490 [INFO][5901] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Aug 13 00:20:44.561835 containerd[2016]: 2025-08-13 00:20:44.490 [INFO][5901] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" iface="eth0" netns="" Aug 13 00:20:44.561835 containerd[2016]: 2025-08-13 00:20:44.490 [INFO][5901] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Aug 13 00:20:44.561835 containerd[2016]: 2025-08-13 00:20:44.490 [INFO][5901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Aug 13 00:20:44.561835 containerd[2016]: 2025-08-13 00:20:44.533 [INFO][5909] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" HandleID="k8s-pod-network.b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Workload="ip--172--31--18--147-k8s-whisker--7c697d5669--s2sn4-eth0" Aug 13 00:20:44.561835 containerd[2016]: 2025-08-13 00:20:44.533 [INFO][5909] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:44.561835 containerd[2016]: 2025-08-13 00:20:44.533 [INFO][5909] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:44.561835 containerd[2016]: 2025-08-13 00:20:44.547 [WARNING][5909] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" HandleID="k8s-pod-network.b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Workload="ip--172--31--18--147-k8s-whisker--7c697d5669--s2sn4-eth0" Aug 13 00:20:44.561835 containerd[2016]: 2025-08-13 00:20:44.548 [INFO][5909] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" HandleID="k8s-pod-network.b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Workload="ip--172--31--18--147-k8s-whisker--7c697d5669--s2sn4-eth0" Aug 13 00:20:44.561835 containerd[2016]: 2025-08-13 00:20:44.551 [INFO][5909] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:44.561835 containerd[2016]: 2025-08-13 00:20:44.557 [INFO][5901] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Aug 13 00:20:44.561835 containerd[2016]: time="2025-08-13T00:20:44.561088065Z" level=info msg="TearDown network for sandbox \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\" successfully" Aug 13 00:20:44.561835 containerd[2016]: time="2025-08-13T00:20:44.561164961Z" level=info msg="StopPodSandbox for \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\" returns successfully" Aug 13 00:20:44.563727 containerd[2016]: time="2025-08-13T00:20:44.562045593Z" level=info msg="RemovePodSandbox for \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\"" Aug 13 00:20:44.563727 containerd[2016]: time="2025-08-13T00:20:44.562127865Z" level=info msg="Forcibly stopping sandbox \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\"" Aug 13 00:20:44.689041 containerd[2016]: 2025-08-13 00:20:44.627 [WARNING][5924] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" WorkloadEndpoint="ip--172--31--18--147-k8s-whisker--7c697d5669--s2sn4-eth0" Aug 13 00:20:44.689041 containerd[2016]: 2025-08-13 00:20:44.627 [INFO][5924] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Aug 13 00:20:44.689041 containerd[2016]: 2025-08-13 00:20:44.627 [INFO][5924] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" iface="eth0" netns="" Aug 13 00:20:44.689041 containerd[2016]: 2025-08-13 00:20:44.627 [INFO][5924] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Aug 13 00:20:44.689041 containerd[2016]: 2025-08-13 00:20:44.627 [INFO][5924] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Aug 13 00:20:44.689041 containerd[2016]: 2025-08-13 00:20:44.666 [INFO][5931] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" HandleID="k8s-pod-network.b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Workload="ip--172--31--18--147-k8s-whisker--7c697d5669--s2sn4-eth0" Aug 13 00:20:44.689041 containerd[2016]: 2025-08-13 00:20:44.667 [INFO][5931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:44.689041 containerd[2016]: 2025-08-13 00:20:44.667 [INFO][5931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:44.689041 containerd[2016]: 2025-08-13 00:20:44.680 [WARNING][5931] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" HandleID="k8s-pod-network.b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Workload="ip--172--31--18--147-k8s-whisker--7c697d5669--s2sn4-eth0" Aug 13 00:20:44.689041 containerd[2016]: 2025-08-13 00:20:44.681 [INFO][5931] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" HandleID="k8s-pod-network.b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Workload="ip--172--31--18--147-k8s-whisker--7c697d5669--s2sn4-eth0" Aug 13 00:20:44.689041 containerd[2016]: 2025-08-13 00:20:44.684 [INFO][5931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:44.689041 containerd[2016]: 2025-08-13 00:20:44.686 [INFO][5924] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b" Aug 13 00:20:44.690277 containerd[2016]: time="2025-08-13T00:20:44.689086965Z" level=info msg="TearDown network for sandbox \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\" successfully" Aug 13 00:20:44.696917 containerd[2016]: time="2025-08-13T00:20:44.696833553Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:20:44.697084 containerd[2016]: time="2025-08-13T00:20:44.696976401Z" level=info msg="RemovePodSandbox \"b5e385c0959cdd04009ccbaca68fb06d1befec76bcafa6d2b7134dc8f97e0e9b\" returns successfully" Aug 13 00:20:44.697968 containerd[2016]: time="2025-08-13T00:20:44.697806849Z" level=info msg="StopPodSandbox for \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\"" Aug 13 00:20:44.828676 containerd[2016]: 2025-08-13 00:20:44.761 [WARNING][5945] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0", GenerateName:"calico-apiserver-76df78f95-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ed40b4c-f347-4ecd-b45b-59769ec7e382", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76df78f95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140", Pod:"calico-apiserver-76df78f95-gh9qn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77d9de0d760", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:44.828676 containerd[2016]: 2025-08-13 00:20:44.761 [INFO][5945] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Aug 13 00:20:44.828676 containerd[2016]: 2025-08-13 00:20:44.761 [INFO][5945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" iface="eth0" netns="" Aug 13 00:20:44.828676 containerd[2016]: 2025-08-13 00:20:44.761 [INFO][5945] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Aug 13 00:20:44.828676 containerd[2016]: 2025-08-13 00:20:44.761 [INFO][5945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Aug 13 00:20:44.828676 containerd[2016]: 2025-08-13 00:20:44.802 [INFO][5952] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" HandleID="k8s-pod-network.532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" Aug 13 00:20:44.828676 containerd[2016]: 2025-08-13 00:20:44.802 [INFO][5952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:44.828676 containerd[2016]: 2025-08-13 00:20:44.802 [INFO][5952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:44.828676 containerd[2016]: 2025-08-13 00:20:44.818 [WARNING][5952] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" HandleID="k8s-pod-network.532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" Aug 13 00:20:44.828676 containerd[2016]: 2025-08-13 00:20:44.818 [INFO][5952] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" HandleID="k8s-pod-network.532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" Aug 13 00:20:44.828676 containerd[2016]: 2025-08-13 00:20:44.823 [INFO][5952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:44.828676 containerd[2016]: 2025-08-13 00:20:44.825 [INFO][5945] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Aug 13 00:20:44.830107 containerd[2016]: time="2025-08-13T00:20:44.829849750Z" level=info msg="TearDown network for sandbox \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\" successfully" Aug 13 00:20:44.830107 containerd[2016]: time="2025-08-13T00:20:44.829895398Z" level=info msg="StopPodSandbox for \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\" returns successfully" Aug 13 00:20:44.831606 containerd[2016]: time="2025-08-13T00:20:44.831558790Z" level=info msg="RemovePodSandbox for \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\"" Aug 13 00:20:44.831731 containerd[2016]: time="2025-08-13T00:20:44.831618322Z" level=info msg="Forcibly stopping sandbox \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\"" Aug 13 00:20:45.052712 containerd[2016]: 2025-08-13 00:20:44.896 [WARNING][5966] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0", GenerateName:"calico-apiserver-76df78f95-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ed40b4c-f347-4ecd-b45b-59769ec7e382", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76df78f95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140", Pod:"calico-apiserver-76df78f95-gh9qn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77d9de0d760", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:45.052712 containerd[2016]: 2025-08-13 00:20:44.897 [INFO][5966] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Aug 13 00:20:45.052712 containerd[2016]: 2025-08-13 00:20:44.897 [INFO][5966] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" iface="eth0" netns="" Aug 13 00:20:45.052712 containerd[2016]: 2025-08-13 00:20:44.897 [INFO][5966] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Aug 13 00:20:45.052712 containerd[2016]: 2025-08-13 00:20:44.897 [INFO][5966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Aug 13 00:20:45.052712 containerd[2016]: 2025-08-13 00:20:44.998 [INFO][5973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" HandleID="k8s-pod-network.532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" Aug 13 00:20:45.052712 containerd[2016]: 2025-08-13 00:20:44.999 [INFO][5973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:45.052712 containerd[2016]: 2025-08-13 00:20:44.999 [INFO][5973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:45.052712 containerd[2016]: 2025-08-13 00:20:45.035 [WARNING][5973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" HandleID="k8s-pod-network.532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" Aug 13 00:20:45.052712 containerd[2016]: 2025-08-13 00:20:45.036 [INFO][5973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" HandleID="k8s-pod-network.532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--gh9qn-eth0" Aug 13 00:20:45.052712 containerd[2016]: 2025-08-13 00:20:45.041 [INFO][5973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:45.052712 containerd[2016]: 2025-08-13 00:20:45.046 [INFO][5966] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6" Aug 13 00:20:45.052712 containerd[2016]: time="2025-08-13T00:20:45.052385671Z" level=info msg="TearDown network for sandbox \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\" successfully" Aug 13 00:20:45.072381 containerd[2016]: time="2025-08-13T00:20:45.070157935Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:20:45.072381 containerd[2016]: time="2025-08-13T00:20:45.070558975Z" level=info msg="RemovePodSandbox \"532c696ea735c03de603ea17185c5df9df161a55777dfce96443c511445a47e6\" returns successfully" Aug 13 00:20:45.077769 containerd[2016]: time="2025-08-13T00:20:45.077505043Z" level=info msg="StopPodSandbox for \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\"" Aug 13 00:20:45.231761 containerd[2016]: 2025-08-13 00:20:45.166 [WARNING][5990] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0", GenerateName:"calico-kube-controllers-cfdf54966-", Namespace:"calico-system", SelfLink:"", UID:"d0079dcf-b4da-4691-a20c-b4a4e5395937", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cfdf54966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1", Pod:"calico-kube-controllers-cfdf54966-lrl8t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0ea23f8ed33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:45.231761 containerd[2016]: 2025-08-13 00:20:45.166 [INFO][5990] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Aug 13 00:20:45.231761 containerd[2016]: 2025-08-13 00:20:45.167 [INFO][5990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" iface="eth0" netns="" Aug 13 00:20:45.231761 containerd[2016]: 2025-08-13 00:20:45.167 [INFO][5990] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Aug 13 00:20:45.231761 containerd[2016]: 2025-08-13 00:20:45.167 [INFO][5990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Aug 13 00:20:45.231761 containerd[2016]: 2025-08-13 00:20:45.207 [INFO][5997] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" HandleID="k8s-pod-network.64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" Aug 13 00:20:45.231761 containerd[2016]: 2025-08-13 00:20:45.207 [INFO][5997] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:45.231761 containerd[2016]: 2025-08-13 00:20:45.207 [INFO][5997] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:45.231761 containerd[2016]: 2025-08-13 00:20:45.222 [WARNING][5997] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" HandleID="k8s-pod-network.64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" Aug 13 00:20:45.231761 containerd[2016]: 2025-08-13 00:20:45.223 [INFO][5997] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" HandleID="k8s-pod-network.64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" Aug 13 00:20:45.231761 containerd[2016]: 2025-08-13 00:20:45.225 [INFO][5997] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:45.231761 containerd[2016]: 2025-08-13 00:20:45.228 [INFO][5990] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Aug 13 00:20:45.232587 containerd[2016]: time="2025-08-13T00:20:45.231840776Z" level=info msg="TearDown network for sandbox \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\" successfully" Aug 13 00:20:45.232587 containerd[2016]: time="2025-08-13T00:20:45.231902828Z" level=info msg="StopPodSandbox for \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\" returns successfully" Aug 13 00:20:45.232587 containerd[2016]: time="2025-08-13T00:20:45.232561904Z" level=info msg="RemovePodSandbox for \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\"" Aug 13 00:20:45.232784 containerd[2016]: time="2025-08-13T00:20:45.232606928Z" level=info msg="Forcibly stopping sandbox \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\"" Aug 13 00:20:45.380086 containerd[2016]: 2025-08-13 00:20:45.314 [WARNING][6012] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0", GenerateName:"calico-kube-controllers-cfdf54966-", Namespace:"calico-system", SelfLink:"", UID:"d0079dcf-b4da-4691-a20c-b4a4e5395937", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cfdf54966", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"37a7d766225d9366f085bd281fbd6272880d49a5ff778b28ad395e1bc64d22d1", Pod:"calico-kube-controllers-cfdf54966-lrl8t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.83.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0ea23f8ed33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:45.380086 containerd[2016]: 2025-08-13 00:20:45.314 [INFO][6012] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Aug 13 00:20:45.380086 containerd[2016]: 2025-08-13 00:20:45.314 [INFO][6012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" iface="eth0" netns="" Aug 13 00:20:45.380086 containerd[2016]: 2025-08-13 00:20:45.314 [INFO][6012] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Aug 13 00:20:45.380086 containerd[2016]: 2025-08-13 00:20:45.314 [INFO][6012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Aug 13 00:20:45.380086 containerd[2016]: 2025-08-13 00:20:45.355 [INFO][6019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" HandleID="k8s-pod-network.64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" Aug 13 00:20:45.380086 containerd[2016]: 2025-08-13 00:20:45.356 [INFO][6019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:45.380086 containerd[2016]: 2025-08-13 00:20:45.356 [INFO][6019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:45.380086 containerd[2016]: 2025-08-13 00:20:45.371 [WARNING][6019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" HandleID="k8s-pod-network.64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" Aug 13 00:20:45.380086 containerd[2016]: 2025-08-13 00:20:45.371 [INFO][6019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" HandleID="k8s-pod-network.64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Workload="ip--172--31--18--147-k8s-calico--kube--controllers--cfdf54966--lrl8t-eth0" Aug 13 00:20:45.380086 containerd[2016]: 2025-08-13 00:20:45.374 [INFO][6019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:45.380086 containerd[2016]: 2025-08-13 00:20:45.377 [INFO][6012] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c" Aug 13 00:20:45.381538 containerd[2016]: time="2025-08-13T00:20:45.380099409Z" level=info msg="TearDown network for sandbox \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\" successfully" Aug 13 00:20:45.384768 containerd[2016]: time="2025-08-13T00:20:45.384685653Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:20:45.384909 containerd[2016]: time="2025-08-13T00:20:45.384819885Z" level=info msg="RemovePodSandbox \"64ff83dc10441a491b52ae4e184775f6355bbcfac3eabe542803f1a28451774c\" returns successfully" Aug 13 00:20:45.385456 containerd[2016]: time="2025-08-13T00:20:45.385407249Z" level=info msg="StopPodSandbox for \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\"" Aug 13 00:20:45.514000 containerd[2016]: 2025-08-13 00:20:45.451 [WARNING][6033] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3196cb5b-591a-4704-a9d8-fe628e63a24b", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a", Pod:"coredns-674b8bbfcf-tbpmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67440b301f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:45.514000 containerd[2016]: 2025-08-13 00:20:45.453 [INFO][6033] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Aug 13 00:20:45.514000 containerd[2016]: 2025-08-13 00:20:45.453 [INFO][6033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" iface="eth0" netns="" Aug 13 00:20:45.514000 containerd[2016]: 2025-08-13 00:20:45.453 [INFO][6033] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Aug 13 00:20:45.514000 containerd[2016]: 2025-08-13 00:20:45.453 [INFO][6033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Aug 13 00:20:45.514000 containerd[2016]: 2025-08-13 00:20:45.491 [INFO][6041] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" HandleID="k8s-pod-network.305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" Aug 13 00:20:45.514000 containerd[2016]: 2025-08-13 00:20:45.491 [INFO][6041] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:45.514000 containerd[2016]: 2025-08-13 00:20:45.491 [INFO][6041] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:20:45.514000 containerd[2016]: 2025-08-13 00:20:45.504 [WARNING][6041] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" HandleID="k8s-pod-network.305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" Aug 13 00:20:45.514000 containerd[2016]: 2025-08-13 00:20:45.504 [INFO][6041] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" HandleID="k8s-pod-network.305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" Aug 13 00:20:45.514000 containerd[2016]: 2025-08-13 00:20:45.508 [INFO][6041] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:45.514000 containerd[2016]: 2025-08-13 00:20:45.510 [INFO][6033] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Aug 13 00:20:45.517119 containerd[2016]: time="2025-08-13T00:20:45.514932010Z" level=info msg="TearDown network for sandbox \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\" successfully" Aug 13 00:20:45.517119 containerd[2016]: time="2025-08-13T00:20:45.515196250Z" level=info msg="StopPodSandbox for \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\" returns successfully" Aug 13 00:20:45.517119 containerd[2016]: time="2025-08-13T00:20:45.516292834Z" level=info msg="RemovePodSandbox for \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\"" Aug 13 00:20:45.517119 containerd[2016]: time="2025-08-13T00:20:45.516342586Z" level=info msg="Forcibly stopping sandbox \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\"" Aug 13 00:20:45.666771 containerd[2016]: 2025-08-13 00:20:45.593 [WARNING][6055] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3196cb5b-591a-4704-a9d8-fe628e63a24b", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"59bee30b3f720d1d5455fa439f0318b34e59b177ab61824052cad47a74afeb2a", Pod:"coredns-674b8bbfcf-tbpmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67440b301f0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:45.666771 containerd[2016]: 2025-08-13 00:20:45.594 [INFO][6055] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Aug 13 00:20:45.666771 containerd[2016]: 2025-08-13 00:20:45.594 [INFO][6055] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" iface="eth0" netns="" Aug 13 00:20:45.666771 containerd[2016]: 2025-08-13 00:20:45.594 [INFO][6055] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Aug 13 00:20:45.666771 containerd[2016]: 2025-08-13 00:20:45.594 [INFO][6055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Aug 13 00:20:45.666771 containerd[2016]: 2025-08-13 00:20:45.637 [INFO][6062] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" HandleID="k8s-pod-network.305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" Aug 13 00:20:45.666771 containerd[2016]: 2025-08-13 00:20:45.637 [INFO][6062] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:45.666771 containerd[2016]: 2025-08-13 00:20:45.637 [INFO][6062] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:20:45.666771 containerd[2016]: 2025-08-13 00:20:45.657 [WARNING][6062] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" HandleID="k8s-pod-network.305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" Aug 13 00:20:45.666771 containerd[2016]: 2025-08-13 00:20:45.657 [INFO][6062] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" HandleID="k8s-pod-network.305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--tbpmv-eth0" Aug 13 00:20:45.666771 containerd[2016]: 2025-08-13 00:20:45.661 [INFO][6062] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:45.666771 containerd[2016]: 2025-08-13 00:20:45.664 [INFO][6055] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e" Aug 13 00:20:45.666771 containerd[2016]: time="2025-08-13T00:20:45.666575974Z" level=info msg="TearDown network for sandbox \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\" successfully" Aug 13 00:20:45.670829 containerd[2016]: time="2025-08-13T00:20:45.670755190Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:20:45.670960 containerd[2016]: time="2025-08-13T00:20:45.670851478Z" level=info msg="RemovePodSandbox \"305160cb88d23986fbe098265fc48be301b66a32f3541096229d7f959fd1dd7e\" returns successfully" Aug 13 00:20:45.672286 containerd[2016]: time="2025-08-13T00:20:45.672024430Z" level=info msg="StopPodSandbox for \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\"" Aug 13 00:20:45.796901 containerd[2016]: 2025-08-13 00:20:45.736 [WARNING][6076] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0", GenerateName:"calico-apiserver-76df78f95-", Namespace:"calico-apiserver", SelfLink:"", UID:"b69d49e6-4f1c-4a7b-bc02-821b576330fe", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76df78f95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd", Pod:"calico-apiserver-76df78f95-t54cs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali467feaa9e43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:45.796901 containerd[2016]: 2025-08-13 00:20:45.737 [INFO][6076] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Aug 13 00:20:45.796901 containerd[2016]: 2025-08-13 00:20:45.737 [INFO][6076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" iface="eth0" netns="" Aug 13 00:20:45.796901 containerd[2016]: 2025-08-13 00:20:45.737 [INFO][6076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Aug 13 00:20:45.796901 containerd[2016]: 2025-08-13 00:20:45.737 [INFO][6076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Aug 13 00:20:45.796901 containerd[2016]: 2025-08-13 00:20:45.775 [INFO][6083] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" HandleID="k8s-pod-network.17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" Aug 13 00:20:45.796901 containerd[2016]: 2025-08-13 00:20:45.775 [INFO][6083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:45.796901 containerd[2016]: 2025-08-13 00:20:45.775 [INFO][6083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:45.796901 containerd[2016]: 2025-08-13 00:20:45.788 [WARNING][6083] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" HandleID="k8s-pod-network.17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" Aug 13 00:20:45.796901 containerd[2016]: 2025-08-13 00:20:45.788 [INFO][6083] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" HandleID="k8s-pod-network.17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" Aug 13 00:20:45.796901 containerd[2016]: 2025-08-13 00:20:45.791 [INFO][6083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:45.796901 containerd[2016]: 2025-08-13 00:20:45.794 [INFO][6076] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Aug 13 00:20:45.796901 containerd[2016]: time="2025-08-13T00:20:45.796860623Z" level=info msg="TearDown network for sandbox \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\" successfully" Aug 13 00:20:45.796901 containerd[2016]: time="2025-08-13T00:20:45.796898591Z" level=info msg="StopPodSandbox for \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\" returns successfully" Aug 13 00:20:45.798521 containerd[2016]: time="2025-08-13T00:20:45.797479151Z" level=info msg="RemovePodSandbox for \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\"" Aug 13 00:20:45.798521 containerd[2016]: time="2025-08-13T00:20:45.797522975Z" level=info msg="Forcibly stopping sandbox \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\"" Aug 13 00:20:45.963573 containerd[2016]: 2025-08-13 00:20:45.893 [WARNING][6098] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0", GenerateName:"calico-apiserver-76df78f95-", Namespace:"calico-apiserver", SelfLink:"", UID:"b69d49e6-4f1c-4a7b-bc02-821b576330fe", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76df78f95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd", Pod:"calico-apiserver-76df78f95-t54cs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.83.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali467feaa9e43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:45.963573 containerd[2016]: 2025-08-13 00:20:45.894 [INFO][6098] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Aug 13 00:20:45.963573 containerd[2016]: 2025-08-13 00:20:45.894 [INFO][6098] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" iface="eth0" netns="" Aug 13 00:20:45.963573 containerd[2016]: 2025-08-13 00:20:45.894 [INFO][6098] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Aug 13 00:20:45.963573 containerd[2016]: 2025-08-13 00:20:45.895 [INFO][6098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Aug 13 00:20:45.963573 containerd[2016]: 2025-08-13 00:20:45.937 [INFO][6105] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" HandleID="k8s-pod-network.17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" Aug 13 00:20:45.963573 containerd[2016]: 2025-08-13 00:20:45.937 [INFO][6105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:45.963573 containerd[2016]: 2025-08-13 00:20:45.937 [INFO][6105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:45.963573 containerd[2016]: 2025-08-13 00:20:45.954 [WARNING][6105] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" HandleID="k8s-pod-network.17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" Aug 13 00:20:45.963573 containerd[2016]: 2025-08-13 00:20:45.955 [INFO][6105] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" HandleID="k8s-pod-network.17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Workload="ip--172--31--18--147-k8s-calico--apiserver--76df78f95--t54cs-eth0" Aug 13 00:20:45.963573 containerd[2016]: 2025-08-13 00:20:45.958 [INFO][6105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:45.963573 containerd[2016]: 2025-08-13 00:20:45.960 [INFO][6098] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c" Aug 13 00:20:45.964513 containerd[2016]: time="2025-08-13T00:20:45.963668844Z" level=info msg="TearDown network for sandbox \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\" successfully" Aug 13 00:20:45.968216 containerd[2016]: time="2025-08-13T00:20:45.968101092Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:20:45.968216 containerd[2016]: time="2025-08-13T00:20:45.968212848Z" level=info msg="RemovePodSandbox \"17cf0d2705564465c83017b087baa5376cf9cb4c1caec4643a23e030c65f695c\" returns successfully" Aug 13 00:20:45.970214 containerd[2016]: time="2025-08-13T00:20:45.969040032Z" level=info msg="StopPodSandbox for \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\"" Aug 13 00:20:46.105213 containerd[2016]: 2025-08-13 00:20:46.043 [WARNING][6119] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"899e91ee-239a-47a8-b50b-26e16e9ebb04", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b", Pod:"coredns-674b8bbfcf-z8tcq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1deb6f634d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:46.105213 containerd[2016]: 2025-08-13 00:20:46.044 [INFO][6119] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Aug 13 00:20:46.105213 containerd[2016]: 2025-08-13 00:20:46.044 [INFO][6119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" iface="eth0" netns="" Aug 13 00:20:46.105213 containerd[2016]: 2025-08-13 00:20:46.044 [INFO][6119] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Aug 13 00:20:46.105213 containerd[2016]: 2025-08-13 00:20:46.044 [INFO][6119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Aug 13 00:20:46.105213 containerd[2016]: 2025-08-13 00:20:46.082 [INFO][6126] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" HandleID="k8s-pod-network.a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" Aug 13 00:20:46.105213 containerd[2016]: 2025-08-13 00:20:46.083 [INFO][6126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:46.105213 containerd[2016]: 2025-08-13 00:20:46.083 [INFO][6126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:20:46.105213 containerd[2016]: 2025-08-13 00:20:46.096 [WARNING][6126] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" HandleID="k8s-pod-network.a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" Aug 13 00:20:46.105213 containerd[2016]: 2025-08-13 00:20:46.096 [INFO][6126] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" HandleID="k8s-pod-network.a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" Aug 13 00:20:46.105213 containerd[2016]: 2025-08-13 00:20:46.099 [INFO][6126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:46.105213 containerd[2016]: 2025-08-13 00:20:46.102 [INFO][6119] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Aug 13 00:20:46.105213 containerd[2016]: time="2025-08-13T00:20:46.105149192Z" level=info msg="TearDown network for sandbox \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\" successfully" Aug 13 00:20:46.107977 containerd[2016]: time="2025-08-13T00:20:46.105212336Z" level=info msg="StopPodSandbox for \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\" returns successfully" Aug 13 00:20:46.107977 containerd[2016]: time="2025-08-13T00:20:46.106467740Z" level=info msg="RemovePodSandbox for \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\"" Aug 13 00:20:46.107977 containerd[2016]: time="2025-08-13T00:20:46.106516580Z" level=info msg="Forcibly stopping sandbox \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\"" Aug 13 00:20:46.247993 containerd[2016]: 2025-08-13 00:20:46.182 [WARNING][6141] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"899e91ee-239a-47a8-b50b-26e16e9ebb04", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"487f6981ade884b4e16a1f2806bf6eb12e8d621babf8f8e5a13c99c684aede4b", Pod:"coredns-674b8bbfcf-z8tcq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.83.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1deb6f634d9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:46.247993 containerd[2016]: 2025-08-13 00:20:46.183 [INFO][6141] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Aug 13 00:20:46.247993 containerd[2016]: 2025-08-13 00:20:46.183 [INFO][6141] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" iface="eth0" netns="" Aug 13 00:20:46.247993 containerd[2016]: 2025-08-13 00:20:46.183 [INFO][6141] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Aug 13 00:20:46.247993 containerd[2016]: 2025-08-13 00:20:46.183 [INFO][6141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Aug 13 00:20:46.247993 containerd[2016]: 2025-08-13 00:20:46.221 [INFO][6149] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" HandleID="k8s-pod-network.a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" Aug 13 00:20:46.247993 containerd[2016]: 2025-08-13 00:20:46.221 [INFO][6149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:46.247993 containerd[2016]: 2025-08-13 00:20:46.221 [INFO][6149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:20:46.247993 containerd[2016]: 2025-08-13 00:20:46.238 [WARNING][6149] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" HandleID="k8s-pod-network.a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" Aug 13 00:20:46.247993 containerd[2016]: 2025-08-13 00:20:46.238 [INFO][6149] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" HandleID="k8s-pod-network.a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Workload="ip--172--31--18--147-k8s-coredns--674b8bbfcf--z8tcq-eth0" Aug 13 00:20:46.247993 containerd[2016]: 2025-08-13 00:20:46.242 [INFO][6149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:46.247993 containerd[2016]: 2025-08-13 00:20:46.245 [INFO][6141] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994" Aug 13 00:20:46.249413 containerd[2016]: time="2025-08-13T00:20:46.248029881Z" level=info msg="TearDown network for sandbox \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\" successfully" Aug 13 00:20:46.253950 containerd[2016]: time="2025-08-13T00:20:46.253870017Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:20:46.254123 containerd[2016]: time="2025-08-13T00:20:46.253972953Z" level=info msg="RemovePodSandbox \"a0d2a00e692ebe8428b865ba47bd90c6a4e33a1e7fb8aa6ad340debaf1b43994\" returns successfully" Aug 13 00:20:46.255060 containerd[2016]: time="2025-08-13T00:20:46.254595309Z" level=info msg="StopPodSandbox for \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\"" Aug 13 00:20:46.380971 containerd[2016]: 2025-08-13 00:20:46.322 [WARNING][6163] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58a4d2bd-5ce6-4a65-a83a-e16060349add", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde", Pod:"csi-node-driver-zw9d2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8eae0b5a5c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:46.380971 containerd[2016]: 2025-08-13 00:20:46.322 [INFO][6163] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Aug 13 00:20:46.380971 containerd[2016]: 2025-08-13 00:20:46.322 [INFO][6163] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" iface="eth0" netns="" Aug 13 00:20:46.380971 containerd[2016]: 2025-08-13 00:20:46.322 [INFO][6163] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Aug 13 00:20:46.380971 containerd[2016]: 2025-08-13 00:20:46.322 [INFO][6163] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Aug 13 00:20:46.380971 containerd[2016]: 2025-08-13 00:20:46.358 [INFO][6170] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" HandleID="k8s-pod-network.e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Workload="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" Aug 13 00:20:46.380971 containerd[2016]: 2025-08-13 00:20:46.358 [INFO][6170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:46.380971 containerd[2016]: 2025-08-13 00:20:46.358 [INFO][6170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:46.380971 containerd[2016]: 2025-08-13 00:20:46.372 [WARNING][6170] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" HandleID="k8s-pod-network.e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Workload="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" Aug 13 00:20:46.380971 containerd[2016]: 2025-08-13 00:20:46.373 [INFO][6170] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" HandleID="k8s-pod-network.e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Workload="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" Aug 13 00:20:46.380971 containerd[2016]: 2025-08-13 00:20:46.375 [INFO][6170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:46.380971 containerd[2016]: 2025-08-13 00:20:46.377 [INFO][6163] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Aug 13 00:20:46.383233 containerd[2016]: time="2025-08-13T00:20:46.381859282Z" level=info msg="TearDown network for sandbox \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\" successfully" Aug 13 00:20:46.383233 containerd[2016]: time="2025-08-13T00:20:46.381899626Z" level=info msg="StopPodSandbox for \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\" returns successfully" Aug 13 00:20:46.383999 containerd[2016]: time="2025-08-13T00:20:46.383487286Z" level=info msg="RemovePodSandbox for \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\"" Aug 13 00:20:46.383999 containerd[2016]: time="2025-08-13T00:20:46.383562214Z" level=info msg="Forcibly stopping sandbox \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\"" Aug 13 00:20:46.506455 containerd[2016]: 2025-08-13 00:20:46.448 [WARNING][6185] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"58a4d2bd-5ce6-4a65-a83a-e16060349add", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 20, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-147", ContainerID:"58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde", Pod:"csi-node-driver-zw9d2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.83.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8eae0b5a5c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:20:46.506455 containerd[2016]: 2025-08-13 00:20:46.448 [INFO][6185] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Aug 13 00:20:46.506455 containerd[2016]: 2025-08-13 00:20:46.449 [INFO][6185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" iface="eth0" netns="" Aug 13 00:20:46.506455 containerd[2016]: 2025-08-13 00:20:46.449 [INFO][6185] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Aug 13 00:20:46.506455 containerd[2016]: 2025-08-13 00:20:46.449 [INFO][6185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Aug 13 00:20:46.506455 containerd[2016]: 2025-08-13 00:20:46.485 [INFO][6192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" HandleID="k8s-pod-network.e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Workload="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" Aug 13 00:20:46.506455 containerd[2016]: 2025-08-13 00:20:46.485 [INFO][6192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:20:46.506455 containerd[2016]: 2025-08-13 00:20:46.485 [INFO][6192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:20:46.506455 containerd[2016]: 2025-08-13 00:20:46.498 [WARNING][6192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" HandleID="k8s-pod-network.e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Workload="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" Aug 13 00:20:46.506455 containerd[2016]: 2025-08-13 00:20:46.498 [INFO][6192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" HandleID="k8s-pod-network.e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Workload="ip--172--31--18--147-k8s-csi--node--driver--zw9d2-eth0" Aug 13 00:20:46.506455 containerd[2016]: 2025-08-13 00:20:46.501 [INFO][6192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:20:46.506455 containerd[2016]: 2025-08-13 00:20:46.504 [INFO][6185] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94" Aug 13 00:20:46.507485 containerd[2016]: time="2025-08-13T00:20:46.506492506Z" level=info msg="TearDown network for sandbox \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\" successfully" Aug 13 00:20:46.514984 containerd[2016]: time="2025-08-13T00:20:46.514615066Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:20:46.515346 containerd[2016]: time="2025-08-13T00:20:46.515036770Z" level=info msg="RemovePodSandbox \"e540bf919f506627e6e6d29cc5f77062a478039f0c548744e642ef5c189b1a94\" returns successfully" Aug 13 00:20:47.438728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1831643608.mount: Deactivated successfully. 
Aug 13 00:20:48.212010 containerd[2016]: time="2025-08-13T00:20:48.211946459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:48.213764 containerd[2016]: time="2025-08-13T00:20:48.213706079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 13 00:20:48.215215 containerd[2016]: time="2025-08-13T00:20:48.215118011Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:48.220528 containerd[2016]: time="2025-08-13T00:20:48.220444703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:48.223654 containerd[2016]: time="2025-08-13T00:20:48.222766763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 6.741474682s" Aug 13 00:20:48.223654 containerd[2016]: time="2025-08-13T00:20:48.222857915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 13 00:20:48.225567 containerd[2016]: time="2025-08-13T00:20:48.225520763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:20:48.232294 containerd[2016]: time="2025-08-13T00:20:48.232231883Z" level=info msg="CreateContainer within sandbox \"65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:20:48.255582 containerd[2016]: time="2025-08-13T00:20:48.255507515Z" level=info msg="CreateContainer within sandbox \"65bc3516e57b7020293e0ad5e56779b6d1697e7e459d769e4aac6542a2c181f8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e55cb64682343d3a13ea997c9cffa932d6c5448f62804913ba528837d35a084d\"" Aug 13 00:20:48.257887 containerd[2016]: time="2025-08-13T00:20:48.257072231Z" level=info msg="StartContainer for \"e55cb64682343d3a13ea997c9cffa932d6c5448f62804913ba528837d35a084d\"" Aug 13 00:20:48.323981 systemd[1]: Started cri-containerd-e55cb64682343d3a13ea997c9cffa932d6c5448f62804913ba528837d35a084d.scope - libcontainer container e55cb64682343d3a13ea997c9cffa932d6c5448f62804913ba528837d35a084d. Aug 13 00:20:48.389376 containerd[2016]: time="2025-08-13T00:20:48.389172168Z" level=info msg="StartContainer for \"e55cb64682343d3a13ea997c9cffa932d6c5448f62804913ba528837d35a084d\" returns successfully" Aug 13 00:20:49.302401 systemd[1]: Started sshd@10-172.31.18.147:22-139.178.89.65:48566.service - OpenSSH per-connection server daemon (139.178.89.65:48566). Aug 13 00:20:49.486224 sshd[6270]: Accepted publickey for core from 139.178.89.65 port 48566 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4 Aug 13 00:20:49.490914 sshd[6270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:20:49.502827 systemd-logind[2000]: New session 11 of user core. 
Aug 13 00:20:49.509959 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:20:49.516726 containerd[2016]: time="2025-08-13T00:20:49.516077221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:49.519164 containerd[2016]: time="2025-08-13T00:20:49.518895949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 13 00:20:49.522828 containerd[2016]: time="2025-08-13T00:20:49.522695125Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:49.528627 containerd[2016]: time="2025-08-13T00:20:49.528264973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:49.534298 containerd[2016]: time="2025-08-13T00:20:49.534188089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.30778475s" Aug 13 00:20:49.534298 containerd[2016]: time="2025-08-13T00:20:49.534272197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 13 00:20:49.539139 containerd[2016]: time="2025-08-13T00:20:49.538857170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:20:49.548083 containerd[2016]: time="2025-08-13T00:20:49.548029682Z" level=info msg="CreateContainer within sandbox \"58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:20:49.581154 containerd[2016]: time="2025-08-13T00:20:49.579726866Z" level=info msg="CreateContainer within sandbox \"58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"76f17b0e9a745a25964201a8e5891c3beecca8503020e5c270b21078e360b4dd\"" Aug 13 00:20:49.583715 containerd[2016]: time="2025-08-13T00:20:49.583566134Z" level=info msg="StartContainer for \"76f17b0e9a745a25964201a8e5891c3beecca8503020e5c270b21078e360b4dd\"" Aug 13 00:20:49.737257 systemd[1]: Started cri-containerd-76f17b0e9a745a25964201a8e5891c3beecca8503020e5c270b21078e360b4dd.scope - libcontainer container 76f17b0e9a745a25964201a8e5891c3beecca8503020e5c270b21078e360b4dd. Aug 13 00:20:49.883678 containerd[2016]: time="2025-08-13T00:20:49.880893999Z" level=info msg="StartContainer for \"76f17b0e9a745a25964201a8e5891c3beecca8503020e5c270b21078e360b4dd\" returns successfully" Aug 13 00:20:49.944028 sshd[6270]: pam_unix(sshd:session): session closed for user core Aug 13 00:20:49.954607 systemd[1]: sshd@10-172.31.18.147:22-139.178.89.65:48566.service: Deactivated successfully. Aug 13 00:20:49.962265 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:20:49.964169 systemd-logind[2000]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:20:49.967171 systemd-logind[2000]: Removed session 11. 
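[Annotation] Each completed pull above pairs a reported size with an elapsed time, so a rough throughput falls out directly: goldmane reports 61,838,636 bytes in 6.741474682s and csi reports 9,594,943 bytes in 1.30778475s. A quick back-of-the-envelope check in Go; the figures are indicative only, since the logged duration also covers registry round-trips and unpacking, and the reported size is the content size rather than exactly the bytes read:

package main

import "fmt"

func main() {
	// Size/duration pairs copied from the containerd "Pulled image" messages.
	pulls := []struct {
		image   string
		bytes   float64
		seconds float64
	}{
		{"ghcr.io/flatcar/calico/goldmane:v3.30.2", 61838636, 6.741474682},
		{"ghcr.io/flatcar/calico/csi:v3.30.2", 9594943, 1.30778475},
	}
	for _, p := range pulls {
		// Prints roughly 8.7 MiB/s and 7.0 MiB/s respectively.
		fmt.Printf("%-42s %.1f MiB/s\n", p.image, p.bytes/p.seconds/(1<<20))
	}
}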
Aug 13 00:20:52.011489 containerd[2016]: time="2025-08-13T00:20:52.010982570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:52.012613 containerd[2016]: time="2025-08-13T00:20:52.012506054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 13 00:20:52.014181 containerd[2016]: time="2025-08-13T00:20:52.014119766Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:52.018849 containerd[2016]: time="2025-08-13T00:20:52.018776990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:52.023317 containerd[2016]: time="2025-08-13T00:20:52.023133878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.484186732s" Aug 13 00:20:52.023317 containerd[2016]: time="2025-08-13T00:20:52.023196434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:20:52.029385 containerd[2016]: time="2025-08-13T00:20:52.028605158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:20:52.043308 containerd[2016]: time="2025-08-13T00:20:52.042801566Z" level=info msg="CreateContainer within sandbox \"c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:20:52.091591 containerd[2016]: time="2025-08-13T00:20:52.090831242Z" level=info msg="CreateContainer within sandbox \"c0ff3abfd459e06c340b0efec02b265fa88693f7f680d1938e3e8852418266fd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c4309b1de2390103e65fd42bfcdcdd62eb8e514b56bdc1f92641229ed07d8d14\"" Aug 13 00:20:52.093024 containerd[2016]: time="2025-08-13T00:20:52.092932490Z" level=info msg="StartContainer for \"c4309b1de2390103e65fd42bfcdcdd62eb8e514b56bdc1f92641229ed07d8d14\"" Aug 13 00:20:52.182804 systemd[1]: Started cri-containerd-c4309b1de2390103e65fd42bfcdcdd62eb8e514b56bdc1f92641229ed07d8d14.scope - libcontainer container c4309b1de2390103e65fd42bfcdcdd62eb8e514b56bdc1f92641229ed07d8d14. 
Aug 13 00:20:52.295684 containerd[2016]: time="2025-08-13T00:20:52.294953727Z" level=info msg="StartContainer for \"c4309b1de2390103e65fd42bfcdcdd62eb8e514b56bdc1f92641229ed07d8d14\" returns successfully" Aug 13 00:20:52.328649 containerd[2016]: time="2025-08-13T00:20:52.328554831Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:20:52.332132 containerd[2016]: time="2025-08-13T00:20:52.331955247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 00:20:52.341669 containerd[2016]: time="2025-08-13T00:20:52.341290047Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 312.393073ms" Aug 13 00:20:52.341669 containerd[2016]: time="2025-08-13T00:20:52.341363823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:20:52.345248 containerd[2016]: time="2025-08-13T00:20:52.343338867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:20:52.351167 containerd[2016]: time="2025-08-13T00:20:52.351116727Z" level=info msg="CreateContainer within sandbox \"cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:20:52.381080 containerd[2016]: time="2025-08-13T00:20:52.380930848Z" level=info msg="CreateContainer within sandbox \"cd36401c6852cf8f06556aba39f43fa7654f86e7d72d36d0b935ec6be0be1140\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"52d61b268c20d435da9d7b8db29dc1fbb7dba1b36ae069c0eb937f9ab0178fd4\"" Aug 13 00:20:52.384696 containerd[2016]: time="2025-08-13T00:20:52.383306584Z" level=info msg="StartContainer for \"52d61b268c20d435da9d7b8db29dc1fbb7dba1b36ae069c0eb937f9ab0178fd4\"" Aug 13 00:20:52.443966 systemd[1]: Started cri-containerd-52d61b268c20d435da9d7b8db29dc1fbb7dba1b36ae069c0eb937f9ab0178fd4.scope - libcontainer container 52d61b268c20d435da9d7b8db29dc1fbb7dba1b36ae069c0eb937f9ab0178fd4. 
Aug 13 00:20:52.533901 containerd[2016]: time="2025-08-13T00:20:52.533009692Z" level=info msg="StartContainer for \"52d61b268c20d435da9d7b8db29dc1fbb7dba1b36ae069c0eb937f9ab0178fd4\" returns successfully"
Aug 13 00:20:52.674760 kubelet[3353]: I0813 00:20:52.674489 3353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76df78f95-gh9qn" podStartSLOduration=36.97357005 podStartE2EDuration="49.674349449s" podCreationTimestamp="2025-08-13 00:20:03 +0000 UTC" firstStartedPulling="2025-08-13 00:20:39.642138232 +0000 UTC m=+55.997328027" lastFinishedPulling="2025-08-13 00:20:52.342917631 +0000 UTC m=+68.698107426" observedRunningTime="2025-08-13 00:20:52.672807749 +0000 UTC m=+69.027997556" watchObservedRunningTime="2025-08-13 00:20:52.674349449 +0000 UTC m=+69.029539244"
Aug 13 00:20:52.675420 kubelet[3353]: I0813 00:20:52.674961 3353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-xkhnd" podStartSLOduration=29.183058313 podStartE2EDuration="39.674943653s" podCreationTimestamp="2025-08-13 00:20:13 +0000 UTC" firstStartedPulling="2025-08-13 00:20:37.733048935 +0000 UTC m=+54.088238742" lastFinishedPulling="2025-08-13 00:20:48.224934275 +0000 UTC m=+64.580124082" observedRunningTime="2025-08-13 00:20:48.629194933 +0000 UTC m=+64.984384728" watchObservedRunningTime="2025-08-13 00:20:52.674943653 +0000 UTC m=+69.030133604"
Aug 13 00:20:52.718332 kubelet[3353]: I0813 00:20:52.717503 3353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76df78f95-t54cs" podStartSLOduration=37.269418015 podStartE2EDuration="49.716928821s" podCreationTimestamp="2025-08-13 00:20:03 +0000 UTC" firstStartedPulling="2025-08-13 00:20:39.578915788 +0000 UTC m=+55.934105583" lastFinishedPulling="2025-08-13 00:20:52.026426594 +0000 UTC m=+68.381616389" observedRunningTime="2025-08-13 00:20:52.714896357 +0000 UTC m=+69.070086176" watchObservedRunningTime="2025-08-13 00:20:52.716928821 +0000 UTC m=+69.072118628"
Aug 13 00:20:53.647510 kubelet[3353]: I0813 00:20:53.647438 3353 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:20:54.067662 containerd[2016]: time="2025-08-13T00:20:54.066963712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:20:54.082765 containerd[2016]: time="2025-08-13T00:20:54.079884124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Aug 13 00:20:54.082765 containerd[2016]: time="2025-08-13T00:20:54.082680316Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:20:54.093838 containerd[2016]: time="2025-08-13T00:20:54.092892712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:20:54.108157 containerd[2016]: time="2025-08-13T00:20:54.107706508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.764295929s"
Aug 13 00:20:54.108654 containerd[2016]: time="2025-08-13T00:20:54.108255940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Aug 13 00:20:54.125668 containerd[2016]: time="2025-08-13T00:20:54.123752788Z" level=info msg="CreateContainer within sandbox \"58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 13 00:20:54.159102 containerd[2016]: time="2025-08-13T00:20:54.159037708Z" level=info msg="CreateContainer within sandbox \"58e87984fd78f784e58df2d6014a214f5340586b764d17f12837298f58495bde\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7cfaf2638246141f4dbf1cc7dd688c1013a828b9f50994bd9f931392a2c9dada\""
Aug 13 00:20:54.163065 containerd[2016]: time="2025-08-13T00:20:54.161449312Z" level=info msg="StartContainer for \"7cfaf2638246141f4dbf1cc7dd688c1013a828b9f50994bd9f931392a2c9dada\""
Aug 13 00:20:54.257985 systemd[1]: run-containerd-runc-k8s.io-7cfaf2638246141f4dbf1cc7dd688c1013a828b9f50994bd9f931392a2c9dada-runc.sgmafC.mount: Deactivated successfully.
Aug 13 00:20:54.276480 systemd[1]: Started cri-containerd-7cfaf2638246141f4dbf1cc7dd688c1013a828b9f50994bd9f931392a2c9dada.scope - libcontainer container 7cfaf2638246141f4dbf1cc7dd688c1013a828b9f50994bd9f931392a2c9dada.
Aug 13 00:20:54.378624 containerd[2016]: time="2025-08-13T00:20:54.378327462Z" level=info msg="StartContainer for \"7cfaf2638246141f4dbf1cc7dd688c1013a828b9f50994bd9f931392a2c9dada\" returns successfully"
Aug 13 00:20:54.993553 systemd[1]: Started sshd@11-172.31.18.147:22-139.178.89.65:48582.service - OpenSSH per-connection server daemon (139.178.89.65:48582).
Aug 13 00:20:55.103872 kubelet[3353]: I0813 00:20:55.103420 3353 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 13 00:20:55.105594 kubelet[3353]: I0813 00:20:55.105085 3353 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 13 00:20:55.186374 sshd[6513]: Accepted publickey for core from 139.178.89.65 port 48582 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:20:55.191495 sshd[6513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:55.203731 systemd-logind[2000]: New session 12 of user core.
Aug 13 00:20:55.211962 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 13 00:20:55.498980 sshd[6513]: pam_unix(sshd:session): session closed for user core
Aug 13 00:20:55.511165 systemd[1]: session-12.scope: Deactivated successfully.
Aug 13 00:20:55.513618 systemd[1]: sshd@11-172.31.18.147:22-139.178.89.65:48582.service: Deactivated successfully.
Aug 13 00:20:55.524009 systemd-logind[2000]: Session 12 logged out. Waiting for processes to exit.
Aug 13 00:20:55.547863 systemd[1]: Started sshd@12-172.31.18.147:22-139.178.89.65:48584.service - OpenSSH per-connection server daemon (139.178.89.65:48584).
Aug 13 00:20:55.550439 systemd-logind[2000]: Removed session 12.
Aug 13 00:20:55.741689 sshd[6527]: Accepted publickey for core from 139.178.89.65 port 48584 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:20:55.744895 kubelet[3353]: I0813 00:20:55.744006 3353 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zw9d2" podStartSLOduration=28.032811722 podStartE2EDuration="43.743977628s" podCreationTimestamp="2025-08-13 00:20:12 +0000 UTC" firstStartedPulling="2025-08-13 00:20:38.399909326 +0000 UTC m=+54.755099121" lastFinishedPulling="2025-08-13 00:20:54.111075244 +0000 UTC m=+70.466265027" observedRunningTime="2025-08-13 00:20:54.684348043 +0000 UTC m=+71.039537862" watchObservedRunningTime="2025-08-13 00:20:55.743977628 +0000 UTC m=+72.099167435"
Aug 13 00:20:55.755215 sshd[6527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:55.776025 systemd-logind[2000]: New session 13 of user core.
Aug 13 00:20:55.782038 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 13 00:20:56.215162 sshd[6527]: pam_unix(sshd:session): session closed for user core
Aug 13 00:20:56.227816 systemd[1]: sshd@12-172.31.18.147:22-139.178.89.65:48584.service: Deactivated successfully.
Aug 13 00:20:56.236825 systemd[1]: session-13.scope: Deactivated successfully.
Aug 13 00:20:56.239112 systemd-logind[2000]: Session 13 logged out. Waiting for processes to exit.
Aug 13 00:20:56.272167 systemd[1]: Started sshd@13-172.31.18.147:22-139.178.89.65:48590.service - OpenSSH per-connection server daemon (139.178.89.65:48590).
Aug 13 00:20:56.274783 systemd-logind[2000]: Removed session 13.
Aug 13 00:20:56.477045 sshd[6539]: Accepted publickey for core from 139.178.89.65 port 48590 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:20:56.481888 sshd[6539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:20:56.493104 systemd-logind[2000]: New session 14 of user core.
Aug 13 00:20:56.498910 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 13 00:20:56.770161 sshd[6539]: pam_unix(sshd:session): session closed for user core
Aug 13 00:20:56.779848 systemd[1]: sshd@13-172.31.18.147:22-139.178.89.65:48590.service: Deactivated successfully.
Aug 13 00:20:56.779924 systemd-logind[2000]: Session 14 logged out. Waiting for processes to exit.
Aug 13 00:20:56.785154 systemd[1]: session-14.scope: Deactivated successfully.
Aug 13 00:20:56.788150 systemd-logind[2000]: Removed session 14.
Aug 13 00:21:01.814157 systemd[1]: Started sshd@14-172.31.18.147:22-139.178.89.65:53764.service - OpenSSH per-connection server daemon (139.178.89.65:53764).
Aug 13 00:21:02.006845 sshd[6580]: Accepted publickey for core from 139.178.89.65 port 53764 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:21:02.010958 sshd[6580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:02.018757 systemd-logind[2000]: New session 15 of user core.
Aug 13 00:21:02.028984 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 13 00:21:02.290944 sshd[6580]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:02.298047 systemd[1]: sshd@14-172.31.18.147:22-139.178.89.65:53764.service: Deactivated successfully.
Aug 13 00:21:02.302345 systemd[1]: session-15.scope: Deactivated successfully.
Aug 13 00:21:02.307560 systemd-logind[2000]: Session 15 logged out. Waiting for processes to exit.
Aug 13 00:21:02.310085 systemd-logind[2000]: Removed session 15.
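[Editor's note] The SSH traffic from here on follows one fixed lifecycle: sshd accepts a publickey, PAM opens the session, systemd-logind assigns a numbered session backed by a session-N.scope unit, and the same steps unwind on logout. As a hypothetical analysis helper (not an official tool; the regexes and timestamp layout are assumptions about this journal format), a small Go parser can pair "New session" and "Removed session" lines and report session lifetimes:

```go
// Hypothetical helper: feed journal lines like the ones above on stdin and
// print how long each systemd-logind session lasted.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	// e.g. "Aug 13 00:20:55.776025 systemd-logind[2000]: New session 13 of user core."
	newRe     = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*New session (\d+) `)
	removedRe = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*Removed session (\d+)\.`)
)

const stamp = "Jan 2 15:04:05.000000" // assumed journal timestamp layout

func main() {
	opened := map[string]time.Time{}
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		line := scanner.Text()
		if m := newRe.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(stamp, m[1]); err == nil {
				opened[m[2]] = t
			}
		} else if m := removedRe.FindStringSubmatch(line); m != nil {
			if start, ok := opened[m[2]]; ok {
				if t, err := time.Parse(stamp, m[1]); err == nil {
					fmt.Printf("session %s lasted %s\n", m[2], t.Sub(start))
				}
				delete(opened, m[2])
			}
		}
	}
}
```

Run against this section it would show most sessions lasting well under a second, consistent with automated probes rather than interactive logins.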
Aug 13 00:21:07.334305 systemd[1]: Started sshd@15-172.31.18.147:22-139.178.89.65:53778.service - OpenSSH per-connection server daemon (139.178.89.65:53778).
Aug 13 00:21:07.504964 sshd[6592]: Accepted publickey for core from 139.178.89.65 port 53778 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:21:07.507719 sshd[6592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:07.516246 systemd-logind[2000]: New session 16 of user core.
Aug 13 00:21:07.522929 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 13 00:21:07.779471 sshd[6592]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:07.788115 systemd[1]: sshd@15-172.31.18.147:22-139.178.89.65:53778.service: Deactivated successfully.
Aug 13 00:21:07.792603 systemd[1]: session-16.scope: Deactivated successfully.
Aug 13 00:21:07.794049 systemd-logind[2000]: Session 16 logged out. Waiting for processes to exit.
Aug 13 00:21:07.798042 systemd-logind[2000]: Removed session 16.
Aug 13 00:21:10.522550 kubelet[3353]: I0813 00:21:10.522392 3353 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 00:21:12.827312 systemd[1]: Started sshd@16-172.31.18.147:22-139.178.89.65:45706.service - OpenSSH per-connection server daemon (139.178.89.65:45706).
Aug 13 00:21:13.008796 sshd[6613]: Accepted publickey for core from 139.178.89.65 port 45706 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:21:13.013121 sshd[6613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:13.024379 systemd-logind[2000]: New session 17 of user core.
Aug 13 00:21:13.033162 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 00:21:13.351622 sshd[6613]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:13.361201 systemd-logind[2000]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:21:13.363928 systemd[1]: sshd@16-172.31.18.147:22-139.178.89.65:45706.service: Deactivated successfully.
Aug 13 00:21:13.371608 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:21:13.377176 systemd-logind[2000]: Removed session 17.
Aug 13 00:21:18.392187 systemd[1]: Started sshd@17-172.31.18.147:22-139.178.89.65:45708.service - OpenSSH per-connection server daemon (139.178.89.65:45708).
Aug 13 00:21:18.584530 sshd[6650]: Accepted publickey for core from 139.178.89.65 port 45708 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:21:18.587675 sshd[6650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:18.597942 systemd-logind[2000]: New session 18 of user core.
Aug 13 00:21:18.600953 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 00:21:18.864567 sshd[6650]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:18.871257 systemd[1]: sshd@17-172.31.18.147:22-139.178.89.65:45708.service: Deactivated successfully.
Aug 13 00:21:18.876775 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:21:18.878774 systemd-logind[2000]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:21:18.881007 systemd-logind[2000]: Removed session 18.
Aug 13 00:21:18.903204 systemd[1]: Started sshd@18-172.31.18.147:22-139.178.89.65:45716.service - OpenSSH per-connection server daemon (139.178.89.65:45716).
Aug 13 00:21:19.080176 sshd[6663]: Accepted publickey for core from 139.178.89.65 port 45716 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:21:19.082997 sshd[6663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:19.092497 systemd-logind[2000]: New session 19 of user core.
Aug 13 00:21:19.103922 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 00:21:19.696320 sshd[6663]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:19.705027 systemd[1]: sshd@18-172.31.18.147:22-139.178.89.65:45716.service: Deactivated successfully.
Aug 13 00:21:19.710489 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:21:19.713604 systemd-logind[2000]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:21:19.735263 systemd-logind[2000]: Removed session 19.
Aug 13 00:21:19.741598 systemd[1]: Started sshd@19-172.31.18.147:22-139.178.89.65:58070.service - OpenSSH per-connection server daemon (139.178.89.65:58070).
Aug 13 00:21:19.934792 sshd[6694]: Accepted publickey for core from 139.178.89.65 port 58070 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:21:19.938998 sshd[6694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:19.949600 systemd-logind[2000]: New session 20 of user core.
Aug 13 00:21:19.960915 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 00:21:20.991297 sshd[6694]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:21.007394 systemd-logind[2000]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:21:21.007999 systemd[1]: sshd@19-172.31.18.147:22-139.178.89.65:58070.service: Deactivated successfully.
Aug 13 00:21:21.013426 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:21:21.039614 systemd-logind[2000]: Removed session 20.
Aug 13 00:21:21.049520 systemd[1]: Started sshd@20-172.31.18.147:22-139.178.89.65:58072.service - OpenSSH per-connection server daemon (139.178.89.65:58072).
Aug 13 00:21:21.252270 sshd[6715]: Accepted publickey for core from 139.178.89.65 port 58072 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:21:21.255337 sshd[6715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:21.263443 systemd-logind[2000]: New session 21 of user core.
Aug 13 00:21:21.271911 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 00:21:21.812375 sshd[6715]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:21.822993 systemd-logind[2000]: Session 21 logged out. Waiting for processes to exit.
Aug 13 00:21:21.824428 systemd[1]: sshd@20-172.31.18.147:22-139.178.89.65:58072.service: Deactivated successfully.
Aug 13 00:21:21.828346 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 00:21:21.830894 systemd-logind[2000]: Removed session 21.
Aug 13 00:21:21.853179 systemd[1]: Started sshd@21-172.31.18.147:22-139.178.89.65:58082.service - OpenSSH per-connection server daemon (139.178.89.65:58082).
Aug 13 00:21:22.024960 sshd[6728]: Accepted publickey for core from 139.178.89.65 port 58082 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:21:22.027597 sshd[6728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:22.036451 systemd-logind[2000]: New session 22 of user core.
Aug 13 00:21:22.041912 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 00:21:22.297010 sshd[6728]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:22.303839 systemd[1]: sshd@21-172.31.18.147:22-139.178.89.65:58082.service: Deactivated successfully.
Aug 13 00:21:22.307299 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 00:21:22.310919 systemd-logind[2000]: Session 22 logged out. Waiting for processes to exit.
Aug 13 00:21:22.313603 systemd-logind[2000]: Removed session 22.
Aug 13 00:21:27.342185 systemd[1]: Started sshd@22-172.31.18.147:22-139.178.89.65:58086.service - OpenSSH per-connection server daemon (139.178.89.65:58086).
Aug 13 00:21:27.518618 sshd[6741]: Accepted publickey for core from 139.178.89.65 port 58086 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:21:27.521476 sshd[6741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:27.529049 systemd-logind[2000]: New session 23 of user core.
Aug 13 00:21:27.544920 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 00:21:27.792011 sshd[6741]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:27.798655 systemd[1]: sshd@22-172.31.18.147:22-139.178.89.65:58086.service: Deactivated successfully.
Aug 13 00:21:27.803032 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 00:21:27.804959 systemd-logind[2000]: Session 23 logged out. Waiting for processes to exit.
Aug 13 00:21:27.807453 systemd-logind[2000]: Removed session 23.
Aug 13 00:21:32.835008 systemd[1]: Started sshd@23-172.31.18.147:22-139.178.89.65:55270.service - OpenSSH per-connection server daemon (139.178.89.65:55270).
Aug 13 00:21:33.016291 sshd[6778]: Accepted publickey for core from 139.178.89.65 port 55270 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:21:33.019052 sshd[6778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:33.032082 systemd-logind[2000]: New session 24 of user core.
Aug 13 00:21:33.040938 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 00:21:33.285078 sshd[6778]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:33.291569 systemd[1]: sshd@23-172.31.18.147:22-139.178.89.65:55270.service: Deactivated successfully.
Aug 13 00:21:33.296475 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 00:21:33.298282 systemd-logind[2000]: Session 24 logged out. Waiting for processes to exit.
Aug 13 00:21:33.300310 systemd-logind[2000]: Removed session 24.
Aug 13 00:21:38.329398 systemd[1]: Started sshd@24-172.31.18.147:22-139.178.89.65:55274.service - OpenSSH per-connection server daemon (139.178.89.65:55274).
Aug 13 00:21:38.512693 sshd[6809]: Accepted publickey for core from 139.178.89.65 port 55274 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:21:38.516168 sshd[6809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:38.528940 systemd-logind[2000]: New session 25 of user core.
Aug 13 00:21:38.532998 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 00:21:38.840340 sshd[6809]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:38.848573 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 00:21:38.851182 systemd[1]: sshd@24-172.31.18.147:22-139.178.89.65:55274.service: Deactivated successfully.
Aug 13 00:21:38.862158 systemd-logind[2000]: Session 25 logged out. Waiting for processes to exit.
Aug 13 00:21:38.869994 systemd-logind[2000]: Removed session 25.
Aug 13 00:21:43.881503 systemd[1]: Started sshd@25-172.31.18.147:22-139.178.89.65:49238.service - OpenSSH per-connection server daemon (139.178.89.65:49238).
Aug 13 00:21:44.113193 sshd[6842]: Accepted publickey for core from 139.178.89.65 port 49238 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:21:44.119078 sshd[6842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:44.132965 systemd-logind[2000]: New session 26 of user core.
Aug 13 00:21:44.142002 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 00:21:44.477014 sshd[6842]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:44.486590 systemd[1]: sshd@25-172.31.18.147:22-139.178.89.65:49238.service: Deactivated successfully.
Aug 13 00:21:44.492263 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 00:21:44.495776 systemd-logind[2000]: Session 26 logged out. Waiting for processes to exit.
Aug 13 00:21:44.498938 systemd-logind[2000]: Removed session 26.
Aug 13 00:21:49.521409 systemd[1]: Started sshd@26-172.31.18.147:22-139.178.89.65:40962.service - OpenSSH per-connection server daemon (139.178.89.65:40962).
Aug 13 00:21:49.717828 sshd[6859]: Accepted publickey for core from 139.178.89.65 port 40962 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:21:49.722007 sshd[6859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:49.737226 systemd-logind[2000]: New session 27 of user core.
Aug 13 00:21:49.744012 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 00:21:50.095214 sshd[6859]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:50.102773 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 00:21:50.105630 systemd[1]: sshd@26-172.31.18.147:22-139.178.89.65:40962.service: Deactivated successfully.
Aug 13 00:21:50.118819 systemd-logind[2000]: Session 27 logged out. Waiting for processes to exit.
Aug 13 00:21:50.122856 systemd-logind[2000]: Removed session 27.
Aug 13 00:21:52.174258 systemd[1]: run-containerd-runc-k8s.io-e55cb64682343d3a13ea997c9cffa932d6c5448f62804913ba528837d35a084d-runc.p2muDC.mount: Deactivated successfully.
Aug 13 00:21:55.134429 systemd[1]: Started sshd@27-172.31.18.147:22-139.178.89.65:40974.service - OpenSSH per-connection server daemon (139.178.89.65:40974).
Aug 13 00:21:55.320374 sshd[6921]: Accepted publickey for core from 139.178.89.65 port 40974 ssh2: RSA SHA256:5ZP49ylZaeKoaoG/AzraaaovTV7vWS+bRyuygC4N/Z4
Aug 13 00:21:55.324702 sshd[6921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:21:55.336886 systemd-logind[2000]: New session 28 of user core.
Aug 13 00:21:55.342427 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 13 00:21:55.649575 sshd[6921]: pam_unix(sshd:session): session closed for user core
Aug 13 00:21:55.661907 systemd-logind[2000]: Session 28 logged out. Waiting for processes to exit.
Aug 13 00:21:55.664094 systemd[1]: sshd@27-172.31.18.147:22-139.178.89.65:40974.service: Deactivated successfully.
Aug 13 00:21:55.672058 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 00:21:55.676489 systemd-logind[2000]: Removed session 28.
Aug 13 00:22:10.746819 systemd[1]: cri-containerd-27475d5f3af118bd12df2523de05c231105f6e6dda697c7281f6a13734af73d2.scope: Deactivated successfully.
Aug 13 00:22:10.747337 systemd[1]: cri-containerd-27475d5f3af118bd12df2523de05c231105f6e6dda697c7281f6a13734af73d2.scope: Consumed 7.356s CPU time, 25.7M memory peak, 0B memory swap peak.
Aug 13 00:22:10.806443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27475d5f3af118bd12df2523de05c231105f6e6dda697c7281f6a13734af73d2-rootfs.mount: Deactivated successfully.
Aug 13 00:22:10.812648 containerd[2016]: time="2025-08-13T00:22:10.799543473Z" level=info msg="shim disconnected" id=27475d5f3af118bd12df2523de05c231105f6e6dda697c7281f6a13734af73d2 namespace=k8s.io
Aug 13 00:22:10.812648 containerd[2016]: time="2025-08-13T00:22:10.812629797Z" level=warning msg="cleaning up after shim disconnected" id=27475d5f3af118bd12df2523de05c231105f6e6dda697c7281f6a13734af73d2 namespace=k8s.io
Aug 13 00:22:10.813466 containerd[2016]: time="2025-08-13T00:22:10.812690385Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:22:10.876527 systemd[1]: cri-containerd-61866350d4565583b758baf6816e4c5d20b1712f9588187456521cc1c1a0f7d5.scope: Deactivated successfully.
Aug 13 00:22:10.878843 systemd[1]: cri-containerd-61866350d4565583b758baf6816e4c5d20b1712f9588187456521cc1c1a0f7d5.scope: Consumed 23.514s CPU time.
Aug 13 00:22:10.919735 containerd[2016]: time="2025-08-13T00:22:10.919583686Z" level=info msg="shim disconnected" id=61866350d4565583b758baf6816e4c5d20b1712f9588187456521cc1c1a0f7d5 namespace=k8s.io
Aug 13 00:22:10.919735 containerd[2016]: time="2025-08-13T00:22:10.919702690Z" level=warning msg="cleaning up after shim disconnected" id=61866350d4565583b758baf6816e4c5d20b1712f9588187456521cc1c1a0f7d5 namespace=k8s.io
Aug 13 00:22:10.919735 containerd[2016]: time="2025-08-13T00:22:10.919725106Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:22:10.933907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61866350d4565583b758baf6816e4c5d20b1712f9588187456521cc1c1a0f7d5-rootfs.mount: Deactivated successfully.
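[Editor's note] The "Consumed 7.356s CPU time, 25.7M memory peak" figures systemd prints when it stops a scope come from the unit's cgroup accounting. On a cgroup v2 host like this one, a sketch of reading the same counters directly, under the assumption that the scope's cgroup still exists; the unit path is a placeholder for a real cri-containerd-<id>.scope:

```go
// Sketch: read the cgroup v2 counters behind systemd's "Consumed ... CPU
// time, ... memory peak" messages. memory.peak needs a reasonably recent
// kernel (this host runs 6.6, which has it).
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	cg := "/sys/fs/cgroup/system.slice/cri-containerd-EXAMPLE.scope" // placeholder path

	// cpu.stat: usage_usec is total CPU time consumed, in microseconds.
	if data, err := os.ReadFile(cg + "/cpu.stat"); err == nil {
		for _, line := range strings.Split(string(data), "\n") {
			if v, ok := strings.CutPrefix(line, "usage_usec "); ok {
				usec, _ := strconv.ParseInt(v, 10, 64)
				fmt.Printf("CPU time: %.3fs\n", float64(usec)/1e6)
			}
		}
	}

	// memory.peak: high-water mark of the cgroup's memory use, in bytes.
	if data, err := os.ReadFile(cg + "/memory.peak"); err == nil {
		peak, _ := strconv.ParseInt(strings.TrimSpace(string(data)), 10, 64)
		fmt.Printf("memory peak: %.1fM\n", float64(peak)/(1<<20))
	}
}
```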
Aug 13 00:22:11.006991 kubelet[3353]: I0813 00:22:11.006615 3353 scope.go:117] "RemoveContainer" containerID="27475d5f3af118bd12df2523de05c231105f6e6dda697c7281f6a13734af73d2"
Aug 13 00:22:11.011279 kubelet[3353]: I0813 00:22:11.010556 3353 scope.go:117] "RemoveContainer" containerID="61866350d4565583b758baf6816e4c5d20b1712f9588187456521cc1c1a0f7d5"
Aug 13 00:22:11.015929 containerd[2016]: time="2025-08-13T00:22:11.015730890Z" level=info msg="CreateContainer within sandbox \"4e60e27934ce4e466e5f56646a5a9388c59a9ceb79ca5ca41c861d8b26cb08bb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Aug 13 00:22:11.017765 containerd[2016]: time="2025-08-13T00:22:11.017685366Z" level=info msg="CreateContainer within sandbox \"0b13b5a84a29606db9932909b8d40e708dd0e3a9d379bfb06dd341e8f2bd2134\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Aug 13 00:22:11.060351 containerd[2016]: time="2025-08-13T00:22:11.060166482Z" level=info msg="CreateContainer within sandbox \"4e60e27934ce4e466e5f56646a5a9388c59a9ceb79ca5ca41c861d8b26cb08bb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"87aca067a82fe3ad71a252b311c05ee7e6b08a6b0c0653dbe16d4167b69f05db\""
Aug 13 00:22:11.061232 containerd[2016]: time="2025-08-13T00:22:11.061162206Z" level=info msg="StartContainer for \"87aca067a82fe3ad71a252b311c05ee7e6b08a6b0c0653dbe16d4167b69f05db\""
Aug 13 00:22:11.069380 containerd[2016]: time="2025-08-13T00:22:11.069161934Z" level=info msg="CreateContainer within sandbox \"0b13b5a84a29606db9932909b8d40e708dd0e3a9d379bfb06dd341e8f2bd2134\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"42d2c6f4ca7255be5a9ccd6e04c73f2e6c08370e54706af115a07897c401f337\""
Aug 13 00:22:11.070499 containerd[2016]: time="2025-08-13T00:22:11.069995346Z" level=info msg="StartContainer for \"42d2c6f4ca7255be5a9ccd6e04c73f2e6c08370e54706af115a07897c401f337\""
Aug 13 00:22:11.116986 systemd[1]: Started cri-containerd-87aca067a82fe3ad71a252b311c05ee7e6b08a6b0c0653dbe16d4167b69f05db.scope - libcontainer container 87aca067a82fe3ad71a252b311c05ee7e6b08a6b0c0653dbe16d4167b69f05db.
Aug 13 00:22:11.161323 systemd[1]: Started cri-containerd-42d2c6f4ca7255be5a9ccd6e04c73f2e6c08370e54706af115a07897c401f337.scope - libcontainer container 42d2c6f4ca7255be5a9ccd6e04c73f2e6c08370e54706af115a07897c401f337.
Aug 13 00:22:11.220027 containerd[2016]: time="2025-08-13T00:22:11.219852271Z" level=info msg="StartContainer for \"87aca067a82fe3ad71a252b311c05ee7e6b08a6b0c0653dbe16d4167b69f05db\" returns successfully"
Aug 13 00:22:11.232293 containerd[2016]: time="2025-08-13T00:22:11.232203715Z" level=info msg="StartContainer for \"42d2c6f4ca7255be5a9ccd6e04c73f2e6c08370e54706af115a07897c401f337\" returns successfully"
Aug 13 00:22:16.019458 systemd[1]: cri-containerd-571d5fd75098040ce032bd2c474c4d26415381d881f0b9773d678ce8a79a6915.scope: Deactivated successfully.
Aug 13 00:22:16.021792 systemd[1]: cri-containerd-571d5fd75098040ce032bd2c474c4d26415381d881f0b9773d678ce8a79a6915.scope: Consumed 6.428s CPU time, 16.3M memory peak, 0B memory swap peak.
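[Editor's note] The &ContainerMetadata{Name:...,Attempt:1,} in the CreateContainer messages above is the CRI's restart counter: the kubelet bumps Attempt each time it recreates a container inside the same pod sandbox, so Attempt:1 marks these as first restarts. As a sketch of what that metadata looks like on the wire, using the published CRI protobuf types; the sandbox ID is the one from the log, and a real request would also carry an image spec, mounts, and the sandbox config:

```go
// Sketch: the core of the CRI CreateContainerRequest behind the log's
// "CreateContainer ... &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}".
package main

import (
	"fmt"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	req := &runtimeapi.CreateContainerRequest{
		// Sandbox ID taken from the log line above.
		PodSandboxId: "4e60e27934ce4e466e5f56646a5a9388c59a9ceb79ca5ca41c861d8b26cb08bb",
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{
				Name:    "kube-controller-manager",
				Attempt: 1, // bumped from 0: a restart within the same sandbox
			},
		},
	}
	fmt.Println(req.String())
}
```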
Aug 13 00:22:16.088972 containerd[2016]: time="2025-08-13T00:22:16.088826303Z" level=info msg="shim disconnected" id=571d5fd75098040ce032bd2c474c4d26415381d881f0b9773d678ce8a79a6915 namespace=k8s.io
Aug 13 00:22:16.090154 containerd[2016]: time="2025-08-13T00:22:16.088973459Z" level=warning msg="cleaning up after shim disconnected" id=571d5fd75098040ce032bd2c474c4d26415381d881f0b9773d678ce8a79a6915 namespace=k8s.io
Aug 13 00:22:16.090154 containerd[2016]: time="2025-08-13T00:22:16.088996979Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:22:16.092736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-571d5fd75098040ce032bd2c474c4d26415381d881f0b9773d678ce8a79a6915-rootfs.mount: Deactivated successfully.
Aug 13 00:22:16.971779 kubelet[3353]: E0813 00:22:16.971675 3353 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-147?timeout=10s\": context deadline exceeded"
Aug 13 00:22:17.052014 kubelet[3353]: I0813 00:22:17.051967 3353 scope.go:117] "RemoveContainer" containerID="571d5fd75098040ce032bd2c474c4d26415381d881f0b9773d678ce8a79a6915"
Aug 13 00:22:17.055093 containerd[2016]: time="2025-08-13T00:22:17.054947004Z" level=info msg="CreateContainer within sandbox \"77e989ebe5db930fb1e048ee9220aae585838b435901cb1d878c949a049c6db5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Aug 13 00:22:17.085596 containerd[2016]: time="2025-08-13T00:22:17.085541688Z" level=info msg="CreateContainer within sandbox \"77e989ebe5db930fb1e048ee9220aae585838b435901cb1d878c949a049c6db5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9a2b2a0037e724e0fd5bc1d175071a845ca7901937666ea05911dcd86ef5e469\""
Aug 13 00:22:17.086866 containerd[2016]: time="2025-08-13T00:22:17.086805972Z" level=info msg="StartContainer for \"9a2b2a0037e724e0fd5bc1d175071a845ca7901937666ea05911dcd86ef5e469\""
Aug 13 00:22:17.157948 systemd[1]: Started cri-containerd-9a2b2a0037e724e0fd5bc1d175071a845ca7901937666ea05911dcd86ef5e469.scope - libcontainer container 9a2b2a0037e724e0fd5bc1d175071a845ca7901937666ea05911dcd86ef5e469.
Aug 13 00:22:17.234253 containerd[2016]: time="2025-08-13T00:22:17.233961481Z" level=info msg="StartContainer for \"9a2b2a0037e724e0fd5bc1d175071a845ca7901937666ea05911dcd86ef5e469\" returns successfully"
Aug 13 00:22:22.704623 systemd[1]: cri-containerd-42d2c6f4ca7255be5a9ccd6e04c73f2e6c08370e54706af115a07897c401f337.scope: Deactivated successfully.
Aug 13 00:22:22.742908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42d2c6f4ca7255be5a9ccd6e04c73f2e6c08370e54706af115a07897c401f337-rootfs.mount: Deactivated successfully.
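[Editor's note] The "Failed to update lease" error is the kubelet's node heartbeat: it periodically PUTs a coordination.k8s.io/v1 Lease named after the node into the kube-node-lease namespace, with a 10s request timeout (the "?timeout=10s" in the failing URL). The failures here bracket the window in which the control-plane containers were being restarted. A client-go sketch of that update; the kubeconfig path is a placeholder and error handling is simplified:

```go
// Sketch: the Lease heartbeat the kubelet was failing to send. Refreshes
// spec.renewTime on the node's Lease in kube-node-lease.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	leases := clientset.CoordinationV1().Leases("kube-node-lease")

	// Same 10s budget as the "?timeout=10s" in the failing request above.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	lease, err := leases.Get(ctx, "ip-172-31-18-147", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	if _, err := leases.Update(ctx, lease, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err) // e.g. "context deadline exceeded", as in the log
	}
}
```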
Aug 13 00:22:22.754000 containerd[2016]: time="2025-08-13T00:22:22.753907689Z" level=info msg="shim disconnected" id=42d2c6f4ca7255be5a9ccd6e04c73f2e6c08370e54706af115a07897c401f337 namespace=k8s.io
Aug 13 00:22:22.754963 containerd[2016]: time="2025-08-13T00:22:22.754313025Z" level=warning msg="cleaning up after shim disconnected" id=42d2c6f4ca7255be5a9ccd6e04c73f2e6c08370e54706af115a07897c401f337 namespace=k8s.io
Aug 13 00:22:22.754963 containerd[2016]: time="2025-08-13T00:22:22.754340589Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 00:22:23.086782 kubelet[3353]: I0813 00:22:23.086739 3353 scope.go:117] "RemoveContainer" containerID="61866350d4565583b758baf6816e4c5d20b1712f9588187456521cc1c1a0f7d5"
Aug 13 00:22:23.087412 kubelet[3353]: I0813 00:22:23.087307 3353 scope.go:117] "RemoveContainer" containerID="42d2c6f4ca7255be5a9ccd6e04c73f2e6c08370e54706af115a07897c401f337"
Aug 13 00:22:23.088040 kubelet[3353]: E0813 00:22:23.087527 3353 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-747864d56d-7rvck_tigera-operator(c723ebbb-e05b-4f11-8f31-60a3c17ea0e4)\"" pod="tigera-operator/tigera-operator-747864d56d-7rvck" podUID="c723ebbb-e05b-4f11-8f31-60a3c17ea0e4"
Aug 13 00:22:23.089898 containerd[2016]: time="2025-08-13T00:22:23.089852406Z" level=info msg="RemoveContainer for \"61866350d4565583b758baf6816e4c5d20b1712f9588187456521cc1c1a0f7d5\""
Aug 13 00:22:23.097030 containerd[2016]: time="2025-08-13T00:22:23.096851178Z" level=info msg="RemoveContainer for \"61866350d4565583b758baf6816e4c5d20b1712f9588187456521cc1c1a0f7d5\" returns successfully"
Aug 13 00:22:26.973162 kubelet[3353]: E0813 00:22:26.972944 3353 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-147?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
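[Editor's note] The CrashLoopBackOff error for tigera-operator shows the kubelet's restart backoff: the delay starts at 10s, doubles on each subsequent failed restart, and is capped at 5 minutes (resetting after a container runs cleanly for long enough). The "back-off 10s" above is therefore the first rung of that ladder. A sketch of the schedule, with the 10s/5m figures being the kubelet's defaults rather than anything configured in this log:

```go
// Sketch of the kubelet's CrashLoopBackOff delay schedule: 10s initial,
// doubled per failed restart, capped at 5m.
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	delay := initial
	for i := 1; i <= 7; i++ {
		fmt.Printf("failed restart %d: back-off %s\n", i, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s.
}
```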